Answer:
To find the least-squares polynomial approximation of degree two to the given data, we use the method of least squares: we look for the coefficients a, b, and c that minimize the sum of the squared errors between the predicted values and the actual values.
The formula for the predicted value of y, based on the given polynomial, is:
y_pred = a + bx + cx²
Using the given data, we can write one equation per data point. Four equations in three unknowns make the system overdetermined, so it is solved in the least-squares sense:
For x = 1, y = -1: -1 = a + b(1) + c(1)²
For x = 2, y = 0: 0 = a + b(2) + c(2)²
For x = 3, y = 11: 11 = a + b(3) + c(3)²
For x = 4, y = 20: 20 = a + b(4) + c(4)²
We can rewrite this system of equations in matrix form as follows:
⎡ 1  1   1 ⎤         ⎡ -1 ⎤
⎢ 1  2   4 ⎥ ⎡a⎤   ⎢  0 ⎥
⎢ 1  3   9 ⎥ ⎢b⎥ = ⎢ 11 ⎥
⎣ 1  4  16 ⎦ ⎣c⎦   ⎣ 20 ⎦
Solving in the least-squares sense means solving the normal equations AᵀAv = Aᵀy for v = (a, b, c)ᵀ. With Σx = 10, Σx² = 30, Σx³ = 100, Σx⁴ = 354, Σy = 30, Σxy = 112, and Σx²y = 418, the normal equations are:

4a + 10b + 30c = 30
10a + 30b + 100c = 112
30a + 100b + 354c = 418

Solving this 3×3 system gives:

a = -1, b = -2.6, c = 2
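The coefficients can be checked numerically; a minimal sketch using NumPy's `lstsq`, which solves the overdetermined system directly without forming the normal equations (assumes NumPy is available):

```python
import numpy as np

# Data points from the problem
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([-1.0, 0.0, 11.0, 20.0])

# Design matrix: one row [1, x, x^2] per data point
A = np.column_stack([np.ones_like(x), x, x**2])

# Least-squares solution of the overdetermined system A @ (a, b, c) = y
(a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a, b, c)  # approximately -1.0, -2.6, 2.0
```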
Therefore, the least-squares polynomial approximation of degree two to the given data is:

y ≈ -1 - 2.6x + 2x²
To find the least error, we calculate the sum of the squared errors (the residual sum of squares) between the predicted values and the actual values:

error² = Σ (y_actual - y_pred)²

The fitted values are y_pred(1) = -1.6, y_pred(2) = 1.8, y_pred(3) = 9.2, and y_pred(4) = 20.6. Summing over all four data points:

error² = (-1 - (-1.6))² + (0 - 1.8)² + (11 - 9.2)² + (20 - 20.6)² = 0.36 + 3.24 + 3.24 + 0.36 = 7.2
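As a check, the residual sum of squares can be computed in a few lines of plain Python, assuming the fitted coefficients a = -1, b = -2.6, c = 2:

```python
# Data points and fitted quadratic y = a + b*x + c*x^2
xs = [1, 2, 3, 4]
ys = [-1, 0, 11, 20]
a, b, c = -1.0, -2.6, 2.0

y_pred = [a + b*x + c*x**2 for x in xs]  # fitted values at each data point
sse = sum((ya - yp)**2 for ya, yp in zip(ys, y_pred))
print(round(sse, 6))  # 7.2
```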