Answer:
See proof below.
Explanation:
If we assume the following linear model:
$$y = \beta_0 + \beta_1 x + \epsilon$$
If we have $n$ pairs of observations $(x_i, y_i)$, the model can be written as:

$$y_i = \beta_0 + \beta_1 x_i + \epsilon_i, \quad i = 1, 2, \ldots, n$$
Using the least squares procedure gives us the following least squares estimates: $b_0$ for $\beta_0$ and $b_1$ for $\beta_1$:

$$b_0 = \bar{y} - b_1 \bar{x}$$

$$b_1 = \frac{s_{xy}}{s_{xx}}$$
Where:
$$s_{xy} = \sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})$$

$$s_{xx} = \sum_{i=1}^n (x_i - \bar{x})^2$$
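Purely as an illustration (not part of the proof), here is a minimal NumPy sketch, using made-up data, that computes these estimates directly from the definitions above:

```python
import numpy as np

# Made-up example data: any paired observations work here
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_bar, y_bar = x.mean(), y.mean()

# s_xy and s_xx exactly as defined above
s_xy = np.sum((x - x_bar) * (y - y_bar))
s_xx = np.sum((x - x_bar) ** 2)

b1 = s_xy / s_xx          # slope estimate
b0 = y_bar - b1 * x_bar   # intercept estimate

print(b0, b1)  # agrees with np.polyfit(x, y, 1) up to rounding
```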
Then $b_1$ is a random variable whose observed value is our estimate of $\beta_1$. We can express this estimator as a linear combination of the $y_i$:

$$b_1 = \sum_{i=1}^n a_i y_i$$
Where

$$a_i = \frac{x_i - \bar{x}}{s_{xx}}$$

and if we look carefully we notice that

$$\sum_{i=1}^n a_i = 0$$

and

$$\sum_{i=1}^n a_i x_i = 1$$
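These two identities can be checked numerically; here is a quick sketch (reusing the made-up $x$ from before):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
s_xx = np.sum((x - x.mean()) ** 2)

# The weights a_i = (x_i - x_bar) / s_xx
a = (x - x.mean()) / s_xx

print(np.isclose(a.sum(), 0.0))        # sum of a_i is 0
print(np.isclose(np.sum(a * x), 1.0))  # sum of a_i * x_i is 1
```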
Since $E(\epsilon_i) = 0$, we have $E(y_i) = \beta_0 + \beta_1 x_i$, so when we find the expected value we get:
$$E(b_1) = \sum_{i=1}^n a_i E(y_i)$$

$$E(b_1) = \sum_{i=1}^n a_i (\beta_0 + \beta_1 x_i)$$

$$E(b_1) = \beta_0 \sum_{i=1}^n a_i + \beta_1 \sum_{i=1}^n a_i x_i$$

$$E(b_1) = \beta_0 \cdot 0 + \beta_1 \cdot 1 = \beta_1$$
As we can see, $b_1$ is an unbiased estimator for $\beta_1$.
In order to find the variance of the estimator $b_1$, we have:
$$Var(b_1) = \sum_{i=1}^n a_i^2 Var(y_i) + \sum_i \sum_{j \neq i} a_i a_j Cov(y_i, y_j)$$
We can assume that $Cov(y_i, y_j) = 0$ for $i \neq j$, since the observations are assumed independent, and that $Var(y_i) = Var(\epsilon_i) = \sigma^2$, so we have:
$$Var(b_1) = \sigma^2 \sum_{i=1}^n a_i^2 = \sigma^2 \, \frac{\sum_{i=1}^n (x_i - \bar{x})^2}{s_{xx}^2}$$
If we simplify we get:
$$Var(b_1) = \frac{\sigma^2 s_{xx}}{s_{xx}^2} = \frac{\sigma^2}{s_{xx}}$$
And with this we complete the required proof.
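As a final illustrative check of both results (a simulation sketch with arbitrary choices of $\beta_0$, $\beta_1$, and $\sigma$, none of which come from the proof itself), the sample mean and variance of $b_1$ over many simulated datasets should be close to $\beta_1$ and $\sigma^2 / s_{xx}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary "true" parameters chosen for the simulation
beta0, beta1, sigma = 1.0, 2.5, 0.8
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
s_xx = np.sum((x - x.mean()) ** 2)

b1_samples = []
for _ in range(100_000):
    # Generate y under the assumed model with independent errors
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, size=x.size)
    s_xy = np.sum((x - x.mean()) * (y - y.mean()))
    b1_samples.append(s_xy / s_xx)

b1_samples = np.array(b1_samples)
print(b1_samples.mean())  # close to beta1 = 2.5 (unbiasedness)
print(b1_samples.var())   # close to sigma**2 / s_xx = 0.064
```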