Final answer:
Task parallelism and data parallelism are two strategies used in parallel computing to improve performance. (option 1)
Step-by-step explanation:
Task parallelism divides a computation into distinct subtasks that can run independently, while data parallelism divides the data into smaller subsets that are each processed with the same operation simultaneously. To determine which kind of parallelism a problem exhibits, we ask whether the concurrent workers differ in the work they perform (task parallelism) or only in the portion of the data they operate on (data parallelism).
Examples:
A problem that involves performing several different, independent operations can exhibit task parallelism. For example, given a list of numbers, we can assign one thread to compute the sum while another computes the maximum; the workers differ in what they do, not merely in which data they touch. (By contrast, squaring every element of a list applies the same operation everywhere, so that is better described as data parallelism.)
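As a minimal sketch of task parallelism (the helper names `total` and `largest` are illustrative, not from the original), two distinct operations on the same list run concurrently in separate threads:

```python
from concurrent.futures import ThreadPoolExecutor

def total(nums):
    # One task: compute the sum
    return sum(nums)

def largest(nums):
    # A different task: compute the maximum
    return max(nums)

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5, 9, 2, 6]
    # Task parallelism: two *different* operations run at the same time
    with ThreadPoolExecutor(max_workers=2) as pool:
        sum_future = pool.submit(total, data)
        max_future = pool.submit(largest, data)
        print(sum_future.result(), max_future.result())  # 31 9
```

Note that each worker executes a different function; that is what makes this task parallelism rather than data parallelism.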
A problem that involves performing the same computation on different parts of a large dataset can exhibit data parallelism. For example, if we need to multiply each element in a matrix by a scalar, we can divide the matrix into smaller chunks and assign different threads or processes to perform the multiplication in parallel.
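A rough sketch of the matrix example, assuming the matrix is chunked by rows and the scalar is fixed inside the worker (`scale_row` is a hypothetical helper, not from the original):

```python
from multiprocessing import Pool

def scale_row(row):
    # The *same* computation is applied to every chunk (row) of the matrix
    return [3 * x for x in row]

if __name__ == "__main__":
    matrix = [[1, 2], [3, 4], [5, 6]]
    # Data parallelism: identical work, distributed over different data chunks
    with Pool(processes=3) as pool:
        scaled = pool.map(scale_row, matrix)
    print(scaled)  # [[3, 6], [9, 12], [15, 18]]
```

Here every process runs the identical function; only the slice of data differs, which is the defining feature of data parallelism.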