In the Rockafellar–Uryasev (2001) paper, the mean-CVaR optimization can be written as a linear program:

P_{CVaR} = \arg\min_{\mathbf{w},\,\text{VaR}_\alpha,\,\mathbf{y}} \; \text{VaR}_\alpha + \frac{1}{(1-\alpha)S}\sum_{s=1}^{S} y_s

subject to:

y_s \geq f(\mathbf{w},\mathbf{r_s}) - \text{VaR}_\alpha
y_s \geq 0

where y_s = [f(\mathbf{w},\mathbf{r_s}) - \text{VaR}_\alpha]^+. I have two questions regarding this optimization:
- How is the VaR computed? Or, when programming the optimization, does the user have to specify how the VaR is to be computed? Here (http://past.rinfinance.com/agenda/2009/yollin_slides.pdf) is R code for the optimization, but I don't see a VaR computation anywhere.
- Isn't the first constraint redundant, given the definition of y_s? Or is my understanding of y_s incorrect?
Answer
- VaR_\alpha is a scalar choice variable in the minimization problem. In the Rockafellar–Uryasev paper it is simply called \alpha\in\mathbb{R}. (Cf. the program described in Theorem 2 of that paper, or the programming problem described after equation (17); alternatively, look at the structure of the choice vector x on page 16 of the Yollin slides.) VaR_\alpha is thus a by-product of solving the problem, not an input computed beforehand.
- The equality y_s = [f(\mathbf{w},\mathbf{r_s}) - \text{VaR}_\alpha]^+ is likewise a by-product of solving the problem; it is not imposed. Rather, the y vector is an auxiliary choice variable. (Your y plays the role of d in the Yollin slides, where it is embedded in the choice vector x on page 16.) The two inequality constraints, together with the minimization, enforce the equality implicitly: y_s enters the objective with a positive coefficient, so at the optimum each y_s is driven down to the larger of f(\mathbf{w},\mathbf{r_s}) - \text{VaR}_\alpha and 0. If that equality were imposed directly, the constraint set would fail to be convex (the problem would no longer be a linear program), and the solution would be greatly complicated.
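Both points can be checked numerically. The sketch below sets up the scenario LP with `scipy.optimize.linprog` over the stacked choice vector (w, VaR_\alpha, y); the scenario returns, the long-only fully-invested constraint, and the loss definition f(w, r_s) = -w·r_s are illustrative assumptions, not part of the original question. At the solution, VaR_\alpha pops out as one component of the optimizer, and y_s = [loss_s - VaR_\alpha]^+ holds even though only the two inequalities were imposed.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical scenario data: S return scenarios for n assets.
rng = np.random.default_rng(0)
S, n, alpha = 500, 4, 0.95
r = rng.normal(0.001, 0.02, size=(S, n))  # scenario returns

# Choice vector x = [w (n), VaR (1), y (S)];
# objective: VaR + 1/((1-alpha)*S) * sum(y_s), the CVaR surrogate.
c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])

# Constraint y_s >= loss_s - VaR, with loss f(w, r_s) = -w @ r_s,
# rewritten as: -r_s @ w - VaR - y_s <= 0.
A_ub = np.hstack([-r, -np.ones((S, 1)), -np.eye(S)])
b_ub = np.zeros(S)

# Fully invested, long-only portfolio: sum(w) = 1, w >= 0; VaR is free.
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S

res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
w, var_alpha, y = res.x[:n], res.x[n], res.x[n + 1:]

# VaR_alpha falls out of the solution, and the equality
# y_s = [loss_s - VaR_alpha]^+ emerges at the optimum.
losses = -r @ w
assert np.allclose(y, np.maximum(losses - var_alpha, 0), atol=1e-5)
```

Note that no VaR computation appears anywhere: the solver chooses VaR_\alpha jointly with the weights, which is why the Yollin code contains no such step.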