# Math 234 Section 1

Homework 6 - Spring 2020

Asymptotics

Due: Mar 26, 2020 11:59 pm.

Instructions: Please write neat solutions for the problems below. Show all your work. If necessary, explain your solution in words. If you only write the answer with no work, you may not be given any credit.

Please submit your entire homework as a single pdf file. Use pdf merging tools as necessary. For problems 1-3, you can scan your responses using a scanner or a phone scanning app. You are not allowed to simply submit a photo of your homework, since photos typically suffer from poor lighting. The grader reserves the right not to grade your submission if it is unclear.

Problems

1. Suppose that X1, . . . , Xn are iid with common density function

f(x | θ) = (1/2)(1 + θx), −1 ≤ x ≤ 1,

for −1 ≤ θ ≤ 1. Find a consistent estimator of θ. Justify that the estimator is consistent. Hint: find the MOM estimator.
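For intuition about what consistency requires, here is a purely illustrative Python simulation. It deliberately uses a different model, Exponential(λ), so as not to give away the answer: matching the first moment there gives the MOM estimator λ̂ = 1/X̄n, and the estimates settle near the true λ as n grows.

```python
import random
import statistics

def mom_rate_estimate(sample):
    # Method-of-moments for Exponential(lam): E[X] = 1/lam,
    # so matching the first moment gives lam_hat = 1 / (sample mean).
    return 1.0 / statistics.fmean(sample)

rng = random.Random(0)
true_rate = 2.0
for n in (100, 10_000, 1_000_000):
    sample = [rng.expovariate(true_rate) for _ in range(n)]
    print(n, round(mom_rate_estimate(sample), 3))
```

As n grows the printed estimates should concentrate around true_rate = 2.0, which is the operational meaning of consistency.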

2. Let X1, . . . , Xn be iid with mean µ and variance σ² < ∞. Let g be a function such that g′(µ) = 0 and whose second derivative g″ is continuous with g″(µ) ≠ 0. Recall that X̄n = (1/n) Σ_{i=1}^n Xi.

(a) Show that √n |g(X̄n) − g(µ)| → 0 in distribution as n → ∞. Note that this is different from √n (g(X̄n) − g(µ)).

(b) Show that √n |g(X̄n) − g(µ)| → 0 in probability as n → ∞.

(c) Show that n[g(X̄n) − g(µ)] converges in distribution to (1/2) g″(µ) σ² χ²₁ as n → ∞.

(d) Use the previous result to show that when µ = 0.5, n[X̄n(1 − X̄n) − µ(1 − µ)] → −σ² χ²₁ in distribution as n → ∞.

Hint: For part (a), mimic the proof of the Delta method shown in class, i.e. start by using the Mean Value Theorem. The Central Limit Theorem and Slutsky's Theorem (Thm 5.5, p. 75 of Wasserman) may be useful. For part (b), look at Thm 5.4(c) of Wasserman (this was also discussed in lecture). For part (c), construct a second-order Taylor series expansion of g(X̄n) about µ. You can ignore the higher-order terms in the Taylor series expansion for this problem. The CLT and Slutsky's theorem may be handy in completing your proof.
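The second-order phenomenon in part (c) can be sanity-checked numerically. The sketch below (illustrative only, not a proof) takes g(x) = x² with standard normal data, so µ = 0, g′(µ) = 0 and g″(µ) = 2, and the claimed limit (1/2) g″(µ) σ² χ²₁ reduces to a χ²₁ variable, whose mean is 1.

```python
import random
import statistics

def scaled_second_order_draw(n, rng):
    # One draw of n * g(Xbar_n) with g(x) = x**2 and X_i ~ N(0, 1).
    # Since g(mu) = g(0) = 0, this is exactly n * [g(Xbar_n) - g(mu)].
    xbar = statistics.fmean(rng.gauss(0.0, 1.0) for _ in range(n))
    return n * xbar * xbar

rng = random.Random(0)
draws = [scaled_second_order_draw(500, rng) for _ in range(2000)]
# The limiting chi-square_1 distribution has mean 1; the empirical
# mean of the draws should be in that neighborhood.
print(round(statistics.fmean(draws), 2))
```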

3. Continuation of Problem 3 of Homework 5. Suppose that X1, . . . , Xn are iid with distribution U(0, θ), where θ > 0 is an unknown parameter. In class, we showed that θ̂n = max(X1, . . . , Xn) is the MLE. Perform the following.

(a) Show that θ̂n is consistent. You may freely quote the results derived in the previous homework.

(b) Show that the limiting distribution of −n(θ̂n − θ) as n → ∞ is exponential. Hint: you may need to refresh on L'Hôpital's rule for evaluating limits.

(c) Give an approximate value for Var(θ̂n) for large n.

(d) For this example, we obtain that the MLE is asymptotically exponential instead of asymptotically normal, as we demonstrated in lecture. Briefly explain why this does not contradict what we established in class. Hint: look at the sketch of the proof for asymptotic normality of the MLE. We took the derivative of the log-likelihood function with respect to θ. Can you do that here?
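A quick numerical sanity check for part (b) (illustrative only, not a substitute for the derivation): the sketch below fixes θ = 1, simulates many draws of −n(θ̂n − θ) = n(θ − θ̂n), and compares the empirical mean and variance with those of an Exponential distribution with mean θ = 1, which has variance 1 as well.

```python
import random
import statistics

def scaled_mle_gap(n, theta, rng):
    # One draw of n * (theta - max(X_1, ..., X_n)) for X_i ~ U(0, theta),
    # i.e. -n * (theta_hat_n - theta) with theta_hat_n the MLE.
    mle = max(rng.uniform(0.0, theta) for _ in range(n))
    return n * (theta - mle)

rng = random.Random(0)
draws = [scaled_mle_gap(2000, 1.0, rng) for _ in range(5000)]
# An Exponential with mean 1 also has variance 1.
print(round(statistics.fmean(draws), 2), round(statistics.variance(draws), 2))
```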

4. R exercise. The objective of this exercise is to learn how to use R to perform the bootstrap.

(a) Read pp. 187-190 of the Intro to Statistical Learning book. This is identical to the lecture on bootstrapping but with more detail.

(b) Read pp. 194-195 of the book. Only read the section “Estimating the Accuracy of a Statistic of Interest”. Skip the part on the linear regression model.

(c) Work on parts (a), (b), (c) of Problem 9, p. 201. For part (b), an estimate of the standard error is given by s/√n, where s is the (observed) sample standard deviation. For part (c), generate at least 1000 bootstrap samples. In addition, plot a histogram of the bootstrap samples you generated. This link on plotting histograms may be helpful. Include your responses in the pdf file that you submit to NYU Classes.
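The textbook exercise uses R's boot package; the sketch below is a hypothetical, language-agnostic illustration in Python with made-up data, not the assigned solution. It shows the same idea: resample the data with replacement, recompute the statistic on each resample, and report the standard deviation of the replicates as the bootstrap standard-error estimate, printed next to the s/√n formula for comparison.

```python
import random
import statistics

def bootstrap_se(data, stat, n_boot, rng):
    # Resample with replacement n_boot times; the standard deviation of the
    # recomputed statistic across resamples estimates its standard error.
    reps = [stat(rng.choices(data, k=len(data))) for _ in range(n_boot)]
    return statistics.stdev(reps)

rng = random.Random(0)
data = [rng.gauss(10.0, 3.0) for _ in range(200)]  # stand-in for the exercise data
se_boot = bootstrap_se(data, statistics.fmean, 1000, rng)
se_formula = statistics.stdev(data) / len(data) ** 0.5  # s / sqrt(n)
print(round(se_boot, 3), round(se_formula, 3))
```

For the sample mean the two numbers should roughly agree, which is a useful check that the bootstrap loop is implemented correctly before applying it to statistics with no closed-form standard error.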
