Jointly Gaussian random variables play an important role in probability theory, due partly to the fact that linear combinations of Gaussians are themselves Gaussian. This allows us to answer complex questions by calculating only means and variances. Here, we will explore an application of this phenomenon to estimating the expected value. Let X1, ..., Xn be independent Gaussian random variables with expected values E[Xi] = μ and variances Var[Xi] = σ² for i = 1, ..., n, and let Y = (1/n) Σ_{i=1}^{n} Xi be their average. Intuitively, the random variable Y should get "closer" to μ as the number of samples n increases. Below, we will try to make this intuition precise. Define X = [X1, ..., Xn]^T, the column vector whose entries are X1 through Xn. Determine its mean vector E[X] and covariance matrix ΣX.
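
The claim that Y concentrates around μ can also be checked numerically. Below is a minimal NumPy sketch of that check; the values μ = 2, σ = 3 and the sample sizes are arbitrary choices for illustration, not taken from the question.

```python
# Sketch: the average Y of n i.i.d. N(mu, sigma^2) samples drifts toward mu as n grows.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 3.0  # arbitrary illustrative values

for n in (10, 100, 10_000):
    X = rng.normal(loc=mu, scale=sigma, size=n)  # X_1, ..., X_n i.i.d. N(mu, sigma^2)
    Y = X.mean()                                 # Y = (1/n) * sum_i X_i
    print(f"n = {n:6d}   Y = {Y:.4f}   |Y - mu| = {abs(Y - mu):.4f}")
```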

1 Answer


Final answer:

The mean vector E[X] for this set of independent Gaussian random variables is the length-n vector whose entries are all μ. The covariance matrix ΣX is a diagonal matrix with every diagonal element equal to the variance σ² and all off-diagonal elements equal to zero.

Step-by-step explanation:

The question involves finding the mean vector and covariance matrix for a set of independent Gaussian random variables X1, X2, ..., Xn. Since these variables are independent and identically distributed (i.i.d.), with each having an expected value (mean) μ and variance σ², we can directly apply the definitions of the mean vector and covariance matrix.

The mean vector E[X] of the random variables is simply a vector of their expected values, hence:

  • E[X] = [μ, μ, ..., μ]^T, an n-dimensional column vector with every entry equal to μ

The covariance matrix ΣX is a diagonal matrix since the variables are independent:

  • ΣX = diag(σ², σ², ..., σ²)

Thus, each diagonal element of ΣX is the variance of the respective random variable, and the off-diagonal elements are zero, reflecting their independence.
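
As a quick sanity check, here is a minimal NumPy sketch that builds the mean vector and covariance matrix described above and compares them with empirical estimates from many draws of the vector X. The values μ = 2, σ = 3, n = 5 and the number of draws are arbitrary choices for illustration.

```python
# Sketch: build E[X] and Sigma_X for i.i.d. N(mu, sigma^2) entries and verify empirically.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 2.0, 3.0, 5  # arbitrary illustrative values

mean_vec = np.full(n, mu)                 # E[X] = [mu, ..., mu]^T
cov_mat = np.diag(np.full(n, sigma**2))   # Sigma_X = diag(sigma^2, ..., sigma^2)

# Draw many independent realizations of the vector X (rows = realizations).
samples = rng.normal(loc=mu, scale=sigma, size=(100_000, n))

print(np.allclose(samples.mean(axis=0), mean_vec, atol=0.05))          # sample mean ~ mu
print(np.allclose(np.cov(samples, rowvar=False), cov_mat, atol=0.2))   # sample covariance ~ diagonal
```

With enough draws, the empirical mean and covariance match the formulas above, and in particular the off-diagonal entries of the sample covariance matrix stay close to zero, as independence predicts.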
