Final answer:
The entropy of a discrete random variable, I(X), is calculated from the probabilities of its possible outcomes. For two tossed coins, X is the number of heads; for two tossed dice, X is the number of sixes. Finally, the entropy of a binary variable is plotted over a probability range and reaches its maximum value of 1 bit when p = 0.5.
Step-by-step explanation:
Finding the Entropy, I(X)
To find the entropy I(X) for each scenario, recall that entropy measures the unpredictability of a variable's possible outcomes. In information theory, the entropy of a discrete random variable X is calculated as:
I(X) = -∑ P(x_i) log_2(P(x_i))
where x_i are the possible values of X and P(x_i) is the probability of x_i.
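For reference, here is a minimal Python sketch of this formula (the helper name entropy_bits is chosen only for this illustration):

    import math

    def entropy_bits(probs):
        # Entropy in bits of a discrete distribution given as a list of probabilities.
        # Outcomes with zero probability contribute nothing (0 * log 0 is taken as 0).
        return -sum(p * math.log2(p) for p in probs if p > 0)

For example, entropy_bits([0.25, 0.5, 0.25]) returns 1.5, matching the hand calculation in part (a) below.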
(a) Number of Heads in Two Coin Tosses
When two fair coins are tossed once, the possible outcomes for X (the number of heads) are 0, 1, or 2. Assuming fair coins, the probabilities are 1/4 for 0 heads (TT), 1/2 for 1 head (HT or TH), and 1/4 for 2 heads (HH).
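Substituting these probabilities into the formula:

I(X) = -(1/4 log_2(1/4) + 1/2 log_2(1/2) + 1/4 log_2(1/4))
     = -((1/4)(-2) + (1/2)(-1) + (1/4)(-2))
     = 0.5 + 0.5 + 0.5 = 1.5 bits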
(b) Number of Sixes in a Toss of Two Dice
When a pair of fair dice is tossed, the variable X (the number of sixes) can be 0, 1, or 2. The probability of a six on any one die is 1/6, so the outcome probabilities are P(X=0) = (5/6)^2 = 25/36, P(X=1) = 2 × (1/6)(5/6) = 10/36, and P(X=2) = (1/6)^2 = 1/36.
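Substituting into the formula:

I(X) = -(25/36 log_2(25/36) + 10/36 log_2(10/36) + 1/36 log_2(1/36))
     ≈ 0.3653 + 0.5133 + 0.1436
     ≈ 1.02 bits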
(c) Binary Variable X
For a binary variable X that takes {0,1} with P[X=1]=p, the entropy I(X) is obtained by considering the two possible outcomes:
I(X) = -(p log_2(p) + (1-p) log_2(1-p))
The maximum occurs at p = 0.5, where I(X) = 1 bit; at p = 0 and p = 1 the entropy is 0 because the outcome is certain. Plotted over 0 ≤ p ≤ 1, I(X) is a concave curve, symmetric about p = 0.5, rising from 0 at the endpoints to its maximum of 1 bit at p = 0.5.
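To produce the plot, a short sketch assuming numpy and matplotlib are available (variable names are illustrative):

    import numpy as np
    import matplotlib.pyplot as plt

    p = np.linspace(0.001, 0.999, 500)   # avoid log2(0) at the endpoints
    I = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    plt.plot(p, I)
    plt.xlabel("p = P[X = 1]")
    plt.ylabel("I(X) in bits")
    plt.title("Binary entropy (maximum 1 bit at p = 0.5)")
    plt.show()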