The maximum likelihood estimators for a Gaussian random variable with unknown mean and unknown variance are the sample mean and the biased sample variance (with divisor $n$). Their expected values are the true mean $\mu$ and $\frac{n-1}{n}\sigma^2$, a biased estimate of the true variance.
In the context of a Gaussian random variable with unknown mean $\mu$ and unknown variance $\sigma^2$, the likelihood function is given by the product of the probability density functions (PDFs) of the individual measurements. The PDF for a Gaussian distribution is:

$$ f(v_i; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(v_i - \mu)^2}{2\sigma^2}} $$
The likelihood function $L$ for $n$ independent measurements is the product of the individual PDFs:

$$ L(\mu, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(v_i - \mu)^2}{2\sigma^2}} $$
To find the maximum likelihood estimators (MLEs), we take the partial derivatives of the log-likelihood with respect to $\mu$ and $\sigma^2$, set them to zero, and solve for the parameters.
1. MLE for $\mu$:

$$ \frac{\partial \ln L}{\partial \mu} = \frac{1}{\sigma^2} \sum_{i=1}^{n} (v_i - \mu) = 0 $$
Solving for $\mu$, we get:

$$ \hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} v_i $$
The MLE for the mean $\mu$ is the sample mean.
2. MLE for $\sigma^2$:

$$ \frac{\partial \ln L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4} \sum_{i=1}^{n} (v_i - \mu)^2 = 0 $$
Solving for $\sigma^2$ (with $\mu$ replaced by its estimate $\hat{\mu}$), we get:

$$ \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (v_i - \hat{\mu})^2 $$
The MLE for the variance $\sigma^2$ is the sample variance with divisor $n$ (the biased sample variance).
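As a quick numerical sketch (an illustrative example with made-up parameters, not part of the derivation), both MLEs are one line each with NumPy; note that `np.var` with its default `ddof=0` computes exactly this divisor-$n$ estimator:

```python
import numpy as np

# Hypothetical sample: n = 1000 draws with mu = 2, sigma = 3 (illustrative values).
rng = np.random.default_rng(0)
v = rng.normal(loc=2.0, scale=3.0, size=1000)

mu_hat = v.mean()                        # MLE of the mean: the sample mean
sigma2_hat = np.mean((v - mu_hat) ** 2)  # MLE of the variance: divisor n (biased)

# np.var with ddof=0 (the default) is the same biased estimator.
assert np.isclose(sigma2_hat, np.var(v, ddof=0))
print(mu_hat, sigma2_hat)
```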
Now, let's calculate the expected values of these estimators as functions of the true parameters:
1. Expected value of $\hat{\mu}$:

$$ E(\hat{\mu}) = E\!\left(\frac{1}{n} \sum_{i=1}^{n} v_i\right) = \frac{1}{n} \sum_{i=1}^{n} E(v_i) $$
Since each $v_i$ is a Gaussian random variable with mean $\mu$, $E(v_i) = \mu$, so:

$$ E(\hat{\mu}) = \frac{1}{n} \sum_{i=1}^{n} \mu = \mu $$
The expected value of the sample mean is the true mean $\mu$, so $\hat{\mu}$ is unbiased.
2. Expected value of $\hat{\sigma}^2$:

$$ E(\hat{\sigma}^2) = E\!\left(\frac{1}{n} \sum_{i=1}^{n} (v_i - \hat{\mu})^2\right) $$
Expanding this expression involves second moments of the distribution, but it can be shown that:

$$ E(\hat{\sigma}^2) = \frac{n-1}{n} \sigma^2 $$
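For completeness, the key step is the decomposition $\sum_{i=1}^{n} (v_i - \hat{\mu})^2 = \sum_{i=1}^{n} (v_i - \mu)^2 - n(\hat{\mu} - \mu)^2$. Taking expectations, using $E[(v_i - \mu)^2] = \sigma^2$ and $E[(\hat{\mu} - \mu)^2] = \mathrm{Var}(\hat{\mu}) = \sigma^2/n$:

$$ E(\hat{\sigma}^2) = \frac{1}{n}\left[ n\sigma^2 - n \cdot \frac{\sigma^2}{n} \right] = \frac{n-1}{n}\,\sigma^2 $$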
The expected value of this sample variance falls short of $\sigma^2$ by the factor $\frac{n-1}{n}$, so the MLE of the variance is biased, although the bias vanishes as $n \to \infty$.
In summary, the maximum likelihood estimators for the unknown parameters are the sample mean $\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} v_i$ and the biased sample variance $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} (v_i - \hat{\mu})^2$. The expected value of the sample mean is the true mean $\mu$, while the expected value of the sample variance is $\frac{n-1}{n}\sigma^2$, a biased estimate of the true variance.
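The bias factor $\frac{n-1}{n}$ can be checked numerically with a small Monte Carlo sketch (illustrative parameters; the averaged estimate only approximates the theoretical value up to sampling noise):

```python
import numpy as np

# Monte Carlo check of E(sigma2_hat) = (n-1)/n * sigma2 (illustrative values).
rng = np.random.default_rng(1)
n, trials = 5, 200_000
mu, sigma2 = 0.0, 4.0

# trials independent samples of size n, one per row.
v = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))

# Biased (divisor-n) variance estimate for each sample.
sigma2_hat = ((v - v.mean(axis=1, keepdims=True)) ** 2).mean(axis=1)

# Averaging over trials approximates the expectation: about (4/5) * 4 = 3.2.
print(sigma2_hat.mean())
```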