0 votes
5.2.14. For the negative binomial pdf p(k; p, r) = \binom{k+r-1}{k} (1-p)^{k} p^{r}, find the maximum likelihood estimator for p if r is known.

2 Answers

7 votes

Final answer:

To find the maximum likelihood estimator for p in the negative binomial distribution when r is known, we need to maximize the likelihood function. For a single observation k, the likelihood function is L(p) = \binom{k+r-1}{k} (1-p)^{k} p^{r}.

Step-by-step explanation:

To find the maximum likelihood estimator for p in the negative binomial distribution when r is known, we need to maximize the likelihood function. The likelihood function is given by:

L(p) = \binom{k+r-1}{k} (1-p)^{k} p^{r}

To find the maximum likelihood estimate for p, we take the derivative of the likelihood function with respect to p, set it equal to zero, and solve for p. Let's do that:

dL(p)/dp = 0

Carrying out the differentiation gives:

\binom{k+r-1}{k} \left[ r (1-p)^{k} p^{r-1} - k (1-p)^{k-1} p^{r} \right] = 0

Dividing through by \binom{k+r-1}{k} (1-p)^{k-1} p^{r-1}, which is nonzero for 0 < p < 1, leaves r(1-p) - kp = 0. Solving for p gives the maximum likelihood estimator \hat{p} = \frac{r}{k+r} when r is known.
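
As an optional sanity check, this closed-form result can be verified numerically; in the Python sketch below the values of r and k are arbitrary illustrations, not part of the original exercise:

```python
import numpy as np
from scipy.special import comb

# Arbitrary illustrative values: r known successes, k observed failures.
r, k = 4, 10

# Negative binomial likelihood for a single observation k with r known.
def likelihood(p):
    return comb(k + r - 1, k) * (1 - p) ** k * p ** r

# Maximize over a fine grid of p values in (0, 1).
grid = np.linspace(1e-6, 1 - 1e-6, 100_000)
p_numeric = grid[np.argmax(likelihood(grid))]

print(p_numeric)     # approximately 0.2857
print(r / (k + r))   # 0.2857..., the closed-form MLE r/(k+r)
```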

by Jblasco (6.2k points)
2 votes

Answer:


\hat{p} = \frac{r}{\bar{x} + r}

Step-by-step explanation:

A negative binomial random variable "is the number X of repeated trials to produce r successes in a negative binomial experiment. The probability distribution of a negative binomial random variable is called a negative binomial distribution"; this distribution is also known as the Pascal distribution.

And the probability mass function is given by:


P(X = x) = \binom{x+r-1}{x} p^{r} (1-p)^{x}

Where r represents the number of successes, x the number of failures before the r-th success, and p the probability of a success on any given trial.
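
As a side note (this refers to a specific library convention and is not part of the original answer), this parameterization, with x counting failures before the r-th success, matches the convention used by scipy.stats.nbinom, so the formula can be spot-checked numerically with arbitrary values:

```python
from scipy.special import comb
from scipy.stats import nbinom

# Arbitrary illustrative values.
r, p, x = 3, 0.4, 5

# The mass function from the answer: C(x+r-1, x) * p^r * (1-p)^x.
manual = comb(x + r - 1, x) * p ** r * (1 - p) ** x

# scipy.stats.nbinom also counts failures before the r-th success.
print(manual)               # 0.10450944
print(nbinom.pmf(x, r, p))  # should agree with the value above
```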

Solution to the problem

For this case the likelihood function is given by:


L(\theta; x_1, \dots, x_n) = \prod_{i=1}^{n} f(x_i; \theta)

Substituting the negative binomial mass function gives:


L(p; x_1, \dots, x_n) = \prod_{i=1}^{n} \binom{x_i + r - 1}{x_i} p^{r} (1-p)^{x_i}

Taking the logarithm of the likelihood function gives the log-likelihood:


\ell(p; x_1, \dots, x_n) = \sum_{i=1}^{n} \left[ \log \binom{x_i + r - 1}{x_i} + r \log(p) + x_i \log(1-p) \right]

To find the maximum likelihood estimator for p, we take the derivative of the log-likelihood with respect to p:


\frac{d\ell(p)}{dp} = \sum_{i=1}^{n} \left[ \frac{r}{p} - \frac{x_i}{1-p} \right]

We can separate the sum into two parts:


\frac{d\ell(p)}{dp} = \sum_{i=1}^{n} \frac{r}{p} - \sum_{i=1}^{n} \frac{x_i}{1-p}

To find the critical point, we set this derivative equal to zero:


\frac{d\ell(p)}{dp} = \sum_{i=1}^{n} \frac{r}{p} - \sum_{i=1}^{n} \frac{x_i}{1-p} = 0


\sum_{i=1}^{n} \frac{r}{p} = \sum_{i=1}^{n} \frac{x_i}{1-p}

Since p is a fixed value that does not depend on the summation index, the two sums simplify to:


\frac{nr}{p} = \frac{\sum_{i=1}^{n} x_i}{1-p}

Now we solve this equation for \hat{p}:


nr(1-p) = p \sum_{i=1}^{n} x_i


nr - nrp = p \sum_{i=1}^{n} x_i


p \sum_{i=1}^{n} x_i + nrp = nr


p \left[ \sum_{i=1}^{n} x_i + nr \right] = nr

Solving for \hat{p} gives:


\hat{p} = \frac{nr}{\sum_{i=1}^{n} x_i + nr}

Dividing numerator and denominator by n gives:


\hat{p} = \frac{r}{\bar{x} + r}

where \bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} is the sample mean.
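
As an optional check of this final result (not part of the original answer), one can simulate data and compare the closed-form estimator against a direct numerical maximization of the log-likelihood; the sample size and parameter values in this sketch are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import nbinom

# Arbitrary illustrative values: r known, p_true is to be recovered.
r, p_true, n = 5, 0.3, 1000
x = nbinom.rvs(r, p_true, size=n, random_state=0)

# Negative log-likelihood in p with r known (the binomial coefficient
# terms do not depend on p and are dropped).
def neg_loglik(p):
    return -(n * r * np.log(p) + x.sum() * np.log(1 - p))

res = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6), method="bounded")
p_closed = r / (x.mean() + r)  # the derived estimator r / (x-bar + r)

print(res.x, p_closed)  # the two estimates should agree closely
```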

by Michael Dickens (6.6k points)