Final Answer:
- 1. The variance of the Maximum Likelihood Estimator (MLE) is 1/n.
- 2. The estimator Y = (X1 + X2)/2 is an unbiased estimator.
- 3. The MLE is preferable to Y because its variance, 1/n, is lower than Var(Y) = 1/2 for every n > 2, making it the more precise estimator.
Step-by-step explanation:
1. For a sample X1, ..., Xn from N(θ, 1), the MLE of θ is the sample mean X̄ = (X1 + ... + Xn)/n. Its variance follows from the Fisher information: Var(MLE) = 1/I(θ), where I(θ) is the negative of the expected second derivative of the log-likelihood. For N(θ, 1), the log-likelihood is ℓ(θ) = -Σ(Xi - θ)²/2 + constant, whose second derivative is -n, so I(θ) = n and the variance of the MLE is 1/n. The same result follows directly from Var(X̄) = Var(X1)/n = 1/n.
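A quick Monte Carlo check of this result (a sketch; the values of θ, n, and the replication count are arbitrary choices, not from the problem statement):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 25, 200_000

# Draw `reps` independent samples of size n from N(theta, 1).
samples = rng.normal(theta, 1.0, size=(reps, n))

# The MLE of theta for N(theta, 1) is the sample mean X-bar.
mle = samples.mean(axis=1)

# Empirical variance of the MLE should be close to 1/n = 0.04.
print(mle.var())
```

The printed value should sit near 1/n, confirming the Fisher-information calculation.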
2. The estimator Y = (X1 + X2)/2, the average of only the first two observations, is unbiased: E[Y] = (E[X1] + E[X2])/2 = (θ + θ)/2 = θ, so its estimates are centered on the true parameter value. Its variance, however, is Var(Y) = (Var(X1) + Var(X2))/4 = 1/2 regardless of the sample size.
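The unbiasedness of Y can be checked the same way (again a sketch with an arbitrary θ and replication count):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, reps = 2.0, 500_000

# Simulate the first two observations of many samples from N(theta, 1).
x1 = rng.normal(theta, 1.0, size=reps)
x2 = rng.normal(theta, 1.0, size=reps)
y = (x1 + x2) / 2

# E[Y] = (E[X1] + E[X2])/2 = theta, so the empirical mean should be near theta.
print(y.mean())
```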
3. Both the MLE and Y are unbiased for θ, so the comparison comes down to variance: Var(MLE) = 1/n versus Var(Y) = 1/2. For any n > 2 the MLE has strictly lower variance (at n = 2 the two estimators coincide), so its estimates cluster more tightly around the true parameter value across repeated samples. Moreover, since Var(MLE) equals the Cramér-Rao lower bound 1/I(θ), the MLE is efficient. Its advantage over Y grows with the sample size, because Y discards all observations beyond the first two while the MLE uses every observation.
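The variance comparison can be illustrated in one simulation (a sketch; θ = 2 and n = 25 are arbitrary choices with n > 2):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 2.0, 25, 200_000

# Each row is one sample of size n from N(theta, 1).
samples = rng.normal(theta, 1.0, size=(reps, n))

mle = samples.mean(axis=1)        # X-bar: uses all n observations
y = samples[:, :2].mean(axis=1)   # Y: uses only X1 and X2

# Expect roughly 1/n = 0.04 for the MLE versus 1/2 = 0.5 for Y.
print(mle.var(), y.var())
```

Both estimators average to θ, but the MLE's spread around θ is far smaller, which is exactly its efficiency advantage.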