Assuming an asset price $S$ follows a geometric Brownian motion (GBM), the log returns $R_i$ are distributed as $$R_i := \log\left(\frac{S_i}{S_{i-1}}\right) \sim N\left(\left(\mu - \frac{\sigma^2}{2}\right)\Delta t,\; \sigma^2 \Delta t\right), \quad i = 1, \dots, N.$$
Let $m = \left(\mu - \frac{\sigma^2}{2}\right)\Delta t$ and $s^2 = \sigma^2 \Delta t$, and consider calibrating a GBM to some returns $R_i$. We'll use the maximum likelihood estimate for $m$, and for simplicity we assume $s$ is known (as would be the case if we were generating the data through a simulation ourselves), in which case $$\hat m = \frac{1}{N}\sum_{i=1}^{N} R_i.$$ Then the sampling distribution for the sample mean is approximately $\hat m \sim N\left(m, \frac{s^2}{N}\right)$, and an approximate $(1-\alpha)100\%$ confidence interval for the true mean $m$ is $$\left[\hat m - z_{\alpha/2}\frac{s}{\sqrt N},\; \hat m + z_{\alpha/2}\frac{s}{\sqrt N}\right]. \tag{1}$$ In particular, increasing the number of observations $N$ results in a smaller confidence interval. This, of course, is a standard result from elementary statistics.
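A quick simulation makes (1) concrete: the half-width shrinks like $1/\sqrt N$ and the empirical coverage sits near 95%. This is only a sketch assuming NumPy; the parameter values (daily data, $\mu = 0.1$, $\sigma = 0.25$) are illustrative assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed illustrative parameters: daily observations, known s
mu, sigma, dt = 0.1, 0.25, 1 / 252
m = (mu - sigma**2 / 2) * dt
s = sigma * np.sqrt(dt)
z = 1.959964          # z_{alpha/2} for alpha = 0.05
reps = 5000

def ci_stats(N):
    """Half-width of the 95% CI in (1) and its empirical coverage
    over `reps` simulated samples of N returns."""
    R = rng.normal(m, s, size=(reps, N))
    m_hat = R.mean(axis=1)
    half = z * s / np.sqrt(N)
    coverage = np.mean(np.abs(m_hat - m) < half)
    return half, coverage

for N in (100, 2500):
    half, cov = ci_stats(N)
    print(f"N={N}: half-width={half:.5f}, coverage={cov:.3f}")
```

Quadrupling $N$ by a factor of 25 cuts the half-width by exactly $\sqrt{25} = 5$, while coverage stays at the nominal level.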
On the other hand, in practice we really need an estimate for $\mu$, and from (1) we can derive a confidence interval for $\hat\mu = \frac{\hat m}{\Delta t} + \frac{\sigma^2}{2}$:
$$\begin{aligned}
&\hat m - z_{\alpha/2}\frac{s}{\sqrt N} < m < \hat m + z_{\alpha/2}\frac{s}{\sqrt N} \\
\iff\; &\hat m - z_{\alpha/2}\frac{s}{\sqrt N} < \left(\mu - \frac{\sigma^2}{2}\right)\Delta t < \hat m + z_{\alpha/2}\frac{s}{\sqrt N} \\
\iff\; &\frac{\hat m}{\Delta t} + \frac{\sigma^2}{2} - z_{\alpha/2}\frac{s}{\Delta t\sqrt N} < \mu < \frac{\hat m}{\Delta t} + \frac{\sigma^2}{2} + z_{\alpha/2}\frac{s}{\Delta t\sqrt N} \\
\iff\; &\hat\mu - z_{\alpha/2}\frac{\sigma}{\sqrt{N\Delta t}} < \mu < \hat\mu + z_{\alpha/2}\frac{\sigma}{\sqrt{N\Delta t}},
\end{aligned}$$
where the last step uses $s = \sigma\sqrt{\Delta t}$. Then, since $\Delta t = \frac{T}{N}$ for some final observation time $T$, a $(1-\alpha)100\%$ confidence interval for the true drift $\mu$ is $$\left[\hat\mu - z_{\alpha/2}\frac{\sigma}{\sqrt T},\; \hat\mu + z_{\alpha/2}\frac{\sigma}{\sqrt T}\right].$$ In particular, increasing the number of observations $N$ has no effect on the confidence interval for the drift $\mu$. Instead, we only obtain a smaller confidence interval by increasing the final time, $T$.
Indeed, for fixed $T$ we may think of obtaining higher- and higher-frequency data, so that $N$ becomes larger and larger. But then $\Delta t = \frac{T}{N}$ becomes smaller and smaller by definition. This seems quite counterintuitive: for fixed $T$, no matter whether I have $1{,}000$ or $10^{16}$ observations, I get no closer to the true drift $\mu$. On the other hand, if I have only 10 observations over 100 years, I get a much better estimate of $\mu$.
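This behaviour is easy to reproduce numerically. The sketch below (NumPy; the parameter values $\mu = 0.08$, $\sigma = 0.2$ are illustrative assumptions) estimates the sampling standard deviation of $\hat\mu$ across many replications, first holding $T$ fixed while growing $N$, then holding $N$ fixed while growing $T$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters
mu, sigma = 0.08, 0.2
reps = 5000

def drift_sd(N, T):
    """Empirical std of the drift estimate mu_hat = m_hat/dt + sigma^2/2
    over `reps` simulated samples of N log returns spanning [0, T]."""
    dt = T / N
    R = rng.normal((mu - sigma**2 / 2) * dt, sigma * np.sqrt(dt),
                   size=(reps, N))
    mu_hat = R.mean(axis=1) / dt + sigma**2 / 2
    return mu_hat.std()

# Fixed T = 1: more observations do not help; sd stays near sigma/sqrt(T) = 0.2
for N in (10, 100, 1000):
    print(f"T=1,   N={N:5d}: sd(mu_hat) = {drift_sd(N, 1.0):.3f}")

# Fixed N = 10: a longer horizon does help; sd shrinks like sigma/sqrt(T)
for T in (1.0, 25.0, 100.0):
    print(f"T={T:5.0f}, N=   10: sd(mu_hat) = {drift_sd(10, T):.3f}")
```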
Have I overlooked something? Perhaps this is a well-known problem with estimating the drift that I'm not aware of?
Answer
Yes, you are correct. Consider the following toy example:
1) Log prices follow: dpt=μdt+σdWt
2) Then: $r_{t+h,h} = p_{t+h} - p_t \sim N(\mu h, \sigma^2 h)$
3) Standard ML estimators:
- $\hat\mu = \frac{1}{nh}\sum_{k=1}^{n} r_{kh,h}$
- $\hat\sigma^2 = \frac{1}{nh}\sum_{k=1}^{n} \left(r_{kh,h} - \hat\mu h\right)^2$
Asymptotic distribution of the estimators:
- $\sqrt{T}\,(\hat\mu - \mu) \to N(0, \sigma^2)$
- $\sqrt{n}\,(\hat\sigma^2 - \sigma^2) \to N(0, 2\sigma^4)$
So when $n$ tends to infinity we get a precise estimator of $\sigma^2$, and when $T$ tends to infinity we get one for $\mu$.
This was first noted by Merton (1980).
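The asymmetry between the two estimators can be checked directly in the toy model $dp_t = \mu\,dt + \sigma\,dW_t$. The sketch below (NumPy; the parameters $\mu = 0.05$, $\sigma = 0.3$, $T = 2$ are illustrative assumptions) compares the sampling standard deviations of $\hat\mu$ and $\hat\sigma^2$ as the sampling frequency grows with $T$ held fixed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative parameters for dp_t = mu dt + sigma dW_t
mu, sigma, T = 0.05, 0.3, 2.0
reps = 2000

def estimator_sds(n):
    """Empirical std of the ML estimators mu_hat and sigma2_hat
    built from n returns over the fixed horizon [0, T] (h = T/n)."""
    h = T / n
    r = rng.normal(mu * h, sigma * np.sqrt(h), size=(reps, n))
    mu_hat = r.sum(axis=1) / (n * h)
    sig2_hat = ((r - mu_hat[:, None] * h) ** 2).sum(axis=1) / (n * h)
    return mu_hat.std(), sig2_hat.std()

for n in (50, 5000):
    sd_mu, sd_s2 = estimator_sds(n)
    print(f"n={n}: sd(mu_hat)={sd_mu:.3f}, sd(sigma2_hat)={sd_s2:.4f}")
```

Increasing $n$ by a factor of 100 leaves the spread of $\hat\mu$ essentially unchanged (near $\sigma/\sqrt T$), while the spread of $\hat\sigma^2$ drops by roughly a factor of 10, consistent with the $\sqrt n$ rate.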