I am running Monte Carlo simulations to estimate future share prices of some stocks.
For stock A, I need a single share price exactly one year from now.
For stock B, I need a price for each trading day over the coming year.
Both models are simulated, let's say, 1000 times.
Since dt is smaller for B, this should increase the accuracy of the share price estimate on the date one year from now. But how can I prove this? And what is the relation between the number of simulations and the time step size?
Edit: I am using 3 years of daily log returns to estimate volatility; the drift is based on a zero-coupon bond with a term equal to the term of the option/share (in this case 1 year). Both remain constant during the simulation year. Random numbers are generated with the Mersenne Twister algorithm.
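For concreteness, here is a minimal sketch of that volatility estimation in Python. The `prices` array is a hypothetical input of roughly 3 years of daily closes, and I assume 255 trading days per year to match the dt used below:

```python
import numpy as np

def annualized_vol(prices, trading_days=255):
    """Annualized volatility from daily log returns.

    prices: hypothetical 1-D array of ~3 years of daily closing prices.
    """
    log_returns = np.diff(np.log(prices))      # daily log returns
    daily_vol = np.std(log_returns, ddof=1)    # sample standard deviation
    return daily_vol * np.sqrt(trading_days)   # scale to an annual figure
```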
The stock price at time t is calculated as follows.
For A, dt = 1: $$ S_{t} = S_0 \cdot \exp\!\left(\left(r - \tfrac{1}{2}\sigma^2\right)dt + \sigma\sqrt{dt}\,Z\right) $$
For B, I am using Euler discretization with dt = 1/255: $$ S_{t+dt} = S_t \cdot \exp\!\left(\left(r - \tfrac{1}{2}\sigma^2\right)dt + \sigma\sqrt{dt}\,Z\right) $$
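To make the comparison concrete, here is a minimal sketch of both schemes. S0, r, and sigma are placeholder inputs; under the risk-neutral drift both terminal samples target the same mean, S0·exp(r):

```python
import numpy as np

# Mersenne Twister, as in my setup (seed is arbitrary)
rng = np.random.Generator(np.random.MT19937(42))

S0, r, sigma = 100.0, 0.02, 0.25   # hypothetical inputs
n_sims, n_days = 1000, 255
dt = 1.0 / n_days

# Stock A: one step of size dt = 1 straight to the one-year horizon
Z = rng.standard_normal(n_sims)
S_A = S0 * np.exp((r - 0.5 * sigma**2) * 1.0 + sigma * np.sqrt(1.0) * Z)

# Stock B: 255 daily steps of size dt = 1/255; the terminal price is the
# product of the daily multiplicative updates, i.e. exp of the summed
# log-increments
Z = rng.standard_normal((n_sims, n_days))
log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
S_B = S0 * np.exp(log_increments.sum(axis=1))   # terminal one-year prices

# Both sample means estimate S0 * exp(r * 1)
print(S_A.mean(), S_B.mean())
```

Summing the daily log-increments before exponentiating avoids an explicit loop over days while giving exactly the same per-path result as applying the update formula 255 times.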