June 16, 2013

Forecasting Realized Volatility – Part 2

By Andrew Clark

Last week we asked such questions as: Do people invest (or trade) at different time horizons? Are there different views of risk amongst investors? And are there different views of where the same stock's price will be one week, one month or one year from now? We answered yes to all of these questions, which means the market is heterogeneous. We established the next statement as one that is characteristic of heterogeneous markets: objects are analysed on different scales, with different degrees of resolution, and the results are compared and interrelated. Multiresolution analysis (MRA) allows us to analyse the market at different time scales, and nonlinear dynamics allows us to compile the MRA results. From there we can forecast certain market factors, such as volatility.
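
To make the decomposition step concrete, here is a minimal sketch of one way to build an additive multiresolution decomposition of a return series with a discrete wavelet transform. The article does not specify the transform, the wavelet or the number of levels, so PyWavelets, the 'db4' wavelet and five levels below are illustrative assumptions, not the patented RVI methodology.

```python
# Illustrative MRA sketch: split a return series into additive components,
# one per time scale, using PyWavelets.
import numpy as np
import pywt

def mra_components(returns, wavelet="db4", level=5):
    """Return a list of component series that sum back to the input."""
    coeffs = pywt.wavedec(returns, wavelet, level=level)
    components = []
    for i in range(len(coeffs)):
        # Keep one block of coefficients, zero the others, and reconstruct;
        # because the transform is linear, the reconstructions add up to the
        # original series.
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(kept, wavelet)[: len(returns)])
    return components
```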

So how can we describe the dynamics of the components generated by the MRA? First we see whether they are random or deterministic. We look at this because it is important to distinguish the two: vastly different forecasting techniques are used for random, as opposed to deterministic, processes. Again, we will not describe the mathematics used to decide into which category market components fall, but we can say that for all but two short-dated components we have strong indications of determinism. This is helpful to know. The two short-dated components that are random can be forecast using fairly well-known techniques (such as so-called ARIMA models). The random results also confirm for us that, at short time intervals, volatility can be seen as a random (or stochastic) process.
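
As an illustration of that step only, a short-dated stochastic component could be forecast with a standard ARIMA fit. The statsmodels implementation and the (1, 0, 1) order below are assumptions; the article does not name a specific specification.

```python
# Hedged sketch: fit a plain ARIMA model to one short-dated component and
# project it a given number of steps ahead.
from statsmodels.tsa.arima.model import ARIMA

def forecast_random_component(component, horizon=21, order=(1, 0, 1)):
    """Forecast a stochastic MRA component 'horizon' steps ahead."""
    fitted = ARIMA(component, order=order).fit()
    return fitted.forecast(steps=horizon)
```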

As for the remaining components or actor groups, the multiresolution analysis suggests that they are deterministic, albeit for a period of time that extends up to one month into the future [4]. Beyond that, the determinism breaks down, and it appears that the process becomes stochastic again. Fortunately, we don't have to know what happens after the determinism breaks down in order to make our volatility forecast.

Next we search for the right model to obtain a good forecast of the components that fall on the deterministic side. For the S&P 500 we find that two simple models give us good forecasts. We now have all the pieces in place to forecast volatility: a stochastic model for the short-dated components and these two simple models for the remaining components. We forecast each of the components separately and, following the rules of the MRA decomposition, add up the individual forecasts to get our RVI volatility forecast.
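
The recombination itself is just the additive property of the MRA: forecast each component with its own model and sum the forecasts. The sketch below shows only that bookkeeping; the per-component models are placeholders, since the two simple deterministic models used for the S&P 500 are not described in the article.

```python
# Minimal recombination sketch: each component gets its own forecasting
# function, and the component forecasts are summed to form the RVI forecast.
import numpy as np

def rvi_forecast(components, component_models, horizon=21):
    """components: list of MRA component series.
    component_models: matching list of callables, model(series, horizon) -> forecast."""
    total = np.zeros(horizon)
    for series, model in zip(components, component_models):
        total += np.asarray(model(series, horizon), dtype=float)
    return total
```

In this framing, the two random components would be paired with an ARIMA-style forecaster such as the sketch above, and the remaining components with the deterministic models.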

Testing The Accuracy Of The RVI

In a test, we forecast the S&P 500 monthly volatility from January 2007 through December 2010. We forecast the 21-day volatility of the index 5, 10 and 21 days into the future using three models: I-GARCH(1), LM-ARCH and the RVI. I-GARCH(1) is very similar to the RiskMetrics model that has been used by practitioners since 1998. Its updated version, the LM-ARCH, is the new measure of volatility RiskMetrics began to offer in 2006. Both methodologies are available on the RiskMetrics [6] website for the mathematically inclined.

We add the VIX to our 21-day comparison. However, in contrast with the other volatility measures, the VIX is a measure of implied volatility and not an estimate of the realised (or actual) volatility that an investor will experience in the future [5]. So although we will compare the VIX to the RVI and the other forecasts, the comparison must be taken with a grain of salt.

We define volatility as the annualised standard deviation of returns over 21 business days. This chosen time frame puts us in sync with the basic properties of the VIX.
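
A small sketch of this definition, assuming daily log returns, a rolling 21-business-day window and 252 trading days per year for annualisation (the annualisation factor is an assumption, as it is not stated in the article); the result is expressed in percent.

```python
# Sketch of the realised-volatility definition used in the test: annualised
# standard deviation of daily log returns over a rolling 21-business-day window.
import numpy as np
import pandas as pd

def realised_volatility(prices: pd.Series, window: int = 21, trading_days: int = 252) -> pd.Series:
    log_returns = np.log(prices / prices.shift(1))
    return log_returns.rolling(window).std() * np.sqrt(trading_days) * 100.0  # in percent
```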

We use three common error measures to evaluate the forecasts: root mean squared error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE). RMSE is a good measure of precision; it aggregates the errors into a single measure of predictive power. The errors typically occur because the forecasting technique doesn't account for information that could produce a more accurate estimate. The MAE is self-explanatory. The MAPE is the average of the absolute value of the percentage difference between forecast and actual, which allows one to compare the error of fitted time series that differ in level. Comparing forecasts and actuals at different levels matters here because volatility spiked significantly in both 2008 and 2009 during the test period.
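
For reference, the three error measures written out explicitly (MAPE is expressed in percent):

```python
# The three error measures used to score each forecast against realised volatility.
import numpy as np

def rmse(forecast, actual):
    f, a = np.asarray(forecast, float), np.asarray(actual, float)
    return np.sqrt(np.mean((f - a) ** 2))

def mae(forecast, actual):
    f, a = np.asarray(forecast, float), np.asarray(actual, float)
    return np.mean(np.abs(f - a))

def mape(forecast, actual):
    f, a = np.asarray(forecast, float), np.asarray(actual, float)
    return np.mean(np.abs((f - a) / a)) * 100.0  # in percent
```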

The results of our tests are given in the three tables below.

Error Measures For Volatility Forecasting Methods

Figure 1: Root Mean Squared Error

No. of Forecast Days | IGARCH | LM-ARCH | RVI
5                    | 6.1    | 6.0     | 2.4
10                   | 5.2    | 4.9     | 3.4
21                   | 4.9    | 4.0     | 3.7

Figure 2: Mean Absolute Error

No. of Forecast Days | IGARCH | LM-ARCH | RVI
5                    | 6.0    | 3.0     | 2.0
10                   | 5.9    | 3.9     | 2.9
21                   | 5.3    | 4.3     | 3.3

Figure 3: Mean Absolute Percentage Error

No. of Forecast Days | IGARCH | LM-ARCH | RVI
5                    | 18.0   | 13.1    | 12.3
10                   | 17.4   | 15.0    | 13.1
21                   | 16.1   | 16.1    | 14.0

For each of the three tests, the RVI is the best-performing forecast method (i.e., it produces the lowest error scores). It clearly outperforms the IGARCH method and is better than the LM-ARCH, though the RMSE results over a 21-day forecast period for LM-ARCH and the RVI are close.

As the VIX is only a 21-day forecast, we show here the RMSE, MAE and MAPE for the VIX alone: 12.8%, 8.6% and 29%, respectively. So the VIX is an inferior 21-day realised volatility forecast, whether the error measure is RMSE, MAE or MAPE. However, it bears repeating that the VIX is a forward-looking measure of implied volatility, not a forecast of realised volatility.

Our results speak well of the RVI and could encourage exchanges and ETF providers to build an RVI derivative.  The RVI can be computed for any stock and commodity that trades and has at least 10 years of price history, even if there are no options for it.  By contrast, the VIX needs options prices in order to be calculated.  The RVI could be built in India, for example, where few options trade.  The RVI complements implied volatility indices, such as the VIX.  Using realised and implied volatility indices together would help risk managers look across the spectrum of stock and commodity index products, something they cannot do now.  And realised volatility indices should help portfolio managers manage the volatility risk of their portfolios across all markets, while implied volatility measures only exist for selected indices and securities.

This series of articles first appeared in the Journal of Indexes in 2011. The methodology in these articles is covered by patents issued in both the U.S. and the EU.

[4] For the mathematically inclined: the nonlinear dynamics allow the reconstruction of the vector space, and from that reconstruction forecasts can be made. The length of the forecast horizon is determined, in part, by the Lyapunov exponent.

 [5] Implied volatility has particular idiosyncrasies related to the option market: it reflects the supply and demand of the underlying security necessary to implement the replication strategy. Similarly, options bear volatility risk and a related volatility risk premium can be expected. These particular effects could bias the implied volatility upward.
