Final answer:
(1) The MLE for θ in the normal distribution is the sample mean X̄.
(2) The variance of the MLE is 1/n (since σ² = 1).
(3) Y = (X1 + X2)/2 is an unbiased estimator for θ.
(4) MLE is preferred over Y due to its smaller variance, making it more efficient.
Step-by-step explanation:
(1) To find the maximum likelihood estimator of θ, we maximize the likelihood function L(θ), which is the product of the probability densities of each Xi given θ. For a normal distribution with known variance σ² = 1, this is:
L(θ) = (1/√(2π))^n * exp(-1/2 * Σ(xi - θ)^2)
To maximize this function, it is easier to work with the log-likelihood. Taking the derivative of log L(θ) with respect to θ and setting it to zero:
d/dθ [log L(θ)] = 0
-1/2 * Σ(-2(xi - θ)) = 0
Σ(xi - θ) = 0
θ̂ = Σxi / n
The second derivative of log L(θ) is -n < 0, so this critical point is a maximum. Therefore, the maximum likelihood estimator of θ is the sample mean X̄ = Σxi / n.
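As a quick numerical check of this derivation, the sketch below (the simulated sample, the true θ = 2, and the search grid are all illustrative assumptions, not part of the problem) compares the grid maximizer of the log-likelihood with the sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50)  # hypothetical sample from N(θ=2, σ²=1)

def log_likelihood(theta, x):
    # log L(θ) = -(n/2)·log(2π) - (1/2)·Σ(xi - θ)²
    return -len(x) / 2 * np.log(2 * np.pi) - 0.5 * np.sum((x - theta) ** 2)

# Maximize log L(θ) over a fine grid of candidate θ values
grid = np.linspace(0.0, 4.0, 4001)
theta_hat = grid[np.argmax([log_likelihood(t, x) for t in grid])]

# The grid maximizer agrees with the sample mean up to the grid spacing
print(theta_hat, x.mean())
```

The agreement to grid precision is exactly what the closed-form result θ̂ = Σxi/n predicts.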
(2) To find the variance of the MLE, we use the independence of the Xi: Var(X̄) = Var((1/n) ΣXi) = (1/n²) · Σ Var(Xi) = σ²/n.
Since the population variance σ² = 1, the variance of the MLE is 1/n.
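A short Monte Carlo sketch can confirm this (the sample size n = 10, the true θ = 0, and the number of replications are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, theta = 10, 200_000, 0.0

# Draw `reps` independent samples of size n from N(θ, 1),
# then compute the sample mean of each one
means = rng.normal(loc=theta, scale=1.0, size=(reps, n)).mean(axis=1)

# The empirical variance of the sample means should be close to 1/n = 0.1
print(means.var(), 1 / n)
```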
(3) To show that Y is an unbiased estimator, we need to show that E(Y) = θ.
E(Y) = E((X1 + X2)/2)
     = (E(X1) + E(X2))/2
     = (θ + θ)/2 = θ
So Y is an unbiased estimator.
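The unbiasedness of Y can also be checked by simulation (the true θ = 3 and the number of replications are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, reps = 3.0, 200_000

# Draw many independent pairs (X1, X2) from N(θ, 1)
x1 = rng.normal(theta, 1.0, size=reps)
x2 = rng.normal(theta, 1.0, size=reps)
y = (x1 + x2) / 2

# The average of Y across replications should be close to θ
print(y.mean())
```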
(4) Both estimators are unbiased, but the MLE is better because it has a smaller variance. Since Y uses only two observations, Var(Y) = (1/4)(Var(X1) + Var(X2)) = 1/2, while Var(X̄) = 1/n, which is smaller whenever n > 2. In fact, for this model X̄ attains the Cramér–Rao lower bound, so no unbiased estimator of θ has smaller variance.
Therefore, even though both estimators are unbiased, the MLE is preferred due to its lower variance, making it the more efficient estimator.
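The variance comparison can be illustrated directly (again with arbitrary illustrative choices of n = 10 and θ = 0): simulating both estimators on the same samples shows Var(X̄) ≈ 1/n well below Var(Y) ≈ 1/2.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, theta = 10, 100_000, 0.0

# Each row is one sample of size n from N(θ, 1)
samples = rng.normal(theta, 1.0, size=(reps, n))

mle = samples.mean(axis=1)                # X̄ over all n observations
y = (samples[:, 0] + samples[:, 1]) / 2   # Y uses only the first two

# Empirical variances: ≈ 1/n for the MLE vs ≈ 1/2 for Y
print(mle.var(), y.var())
```

Discarding n − 2 of the observations is exactly why Y is less efficient: it throws away information about θ that the MLE uses.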