3. (6 pts) Suppose \( X_i \sim N(0, a_i\theta) \) independently for \( i = 1, 2, \ldots, n \), where \( a_i (> 0) \) are fixed and known constants for all \( i \). Find the MLE of \( \theta \).

asked by Tzim

1 Answer


Final Answer:

The maximum likelihood estimator (MLE) of \( \theta \) is \( \hat{\theta} = \frac{1}{n}\sum_{i=1}^{n} \frac{X_i^{2}}{a_i} \).

Step-by-step explanation:

The given scenario describes independent random variables \( X_i \) with a normal distribution \( N(0, a_i\theta) \) for \( i = 1, 2, \ldots, n \). Here, \( a_i \) are known positive constants.

The probability density function (PDF) of a normal distribution is \( f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}} \), where \( \mu \) is the mean and \( \sigma \) is the standard deviation.

In the given case, \( X_i \) follows \( N(0, a_i\theta) \), so the PDF is \( f(x_i) = \frac{1}{\sqrt{2\pi a_i\theta}}\, e^{-\frac{x_i^{2}}{2 a_i\theta}} \).

The likelihood function is the product of the PDFs of all the \( X_i \)'s:
\( L(\theta) = \prod_{i=1}^{n} f(x_i) \).
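In code, the log of this product can be sketched as follows (a minimal illustration; the function name and argument layout are my own, not from the question):

```python
import math

def log_likelihood(theta, x, a):
    """Log-likelihood of theta for independent X_i ~ N(0, a_i * theta):
    the sum over i of -0.5*log(2*pi*a_i*theta) - x_i^2 / (2*a_i*theta)."""
    return sum(-0.5 * math.log(2 * math.pi * ai * theta)
               - xi ** 2 / (2 * ai * theta)
               for xi, ai in zip(x, a))
```

Maximizing this function over \( \theta > 0 \) is what the derivation does analytically.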

To find the MLE, we maximize the likelihood with respect to \( \theta \), which is equivalent to maximizing the log-likelihood

\( \ell(\theta) = -\frac{n}{2}\log\theta - \frac{1}{2\theta}\sum_{i=1}^{n}\frac{x_i^{2}}{a_i} + \text{const}, \)

where the constant collects the terms \( -\frac{1}{2}\log(2\pi a_i) \), which do not involve \( \theta \). Setting the derivative to zero,

\( \ell'(\theta) = -\frac{n}{2\theta} + \frac{1}{2\theta^{2}}\sum_{i=1}^{n}\frac{x_i^{2}}{a_i} = 0, \)

gives the MLE \( \hat{\theta} = \frac{1}{n}\sum_{i=1}^{n}\frac{X_i^{2}}{a_i} \). The second derivative is negative at this point, so it is indeed a maximum.

This result makes intuitive sense: since \( E[X_i^{2}] = a_i\theta \), each rescaled squared observation \( X_i^{2}/a_i \) has mean \( \theta \), and the MLE is simply the average of these standardized quantities.
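As a quick sanity check on the formula, here is a minimal simulation sketch; the values of \( \theta \) and the \( a_i \) are arbitrary illustrations, not part of the question:

```python
import math
import random

def mle_theta(x, a):
    """MLE of theta for X_i ~ N(0, a_i*theta): the average of x_i^2 / a_i."""
    return sum(xi ** 2 / ai for xi, ai in zip(x, a)) / len(x)

# Simulate X_i ~ N(0, a_i * theta) for a known theta and confirm the
# estimator recovers it (illustrative values only).
random.seed(0)
theta_true = 2.5
a = [0.5 + random.random() for _ in range(20000)]           # arbitrary a_i > 0
x = [random.gauss(0.0, math.sqrt(ai * theta_true)) for ai in a]
print(round(mle_theta(x, a), 2))  # prints a value close to 2.5
```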

answered by Geoff Langenderfer