Granger causality is a technique for determining whether one time series is useful in forecasting another. Regressions ordinarily reflect "mere" correlations, but Clive Granger, who won the Nobel Memorial Prize in Economic Sciences, argued that a certain set of tests can be interpreted as revealing something about causality.
A time series X is said to Granger-cause Y if it can be shown, usually through a series of F-tests on lagged values of X (with lagged values of Y also included), that those X values provide statistically significant information about future values of Y.
The test works by first regressing ΔY on its own lagged values. Once an appropriate lag length for ΔY has been established (by t-statistics or p-values), lagged values of ΔX are added to the regression and retained provided that they 1) are statistically significant in their own right and 2) add explanatory power to the model. This can be repeated for multiple ΔX's, with each ΔX tested independently of the other ΔX's but in conjunction with the established lags of ΔY. More than one lag of a variable can be included in the final regression model, provided each is statistically significant and adds explanatory power.
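The restricted-versus-unrestricted comparison behind the F-test can be sketched in a few lines of Python (shown here on levels rather than differences for simplicity; `granger_f_test` and the simulated series are illustrative names, not part of any standard library):

```python
import numpy as np

def granger_f_test(y, x, p=2):
    """Pairwise Granger F-test: do p lags of x add explanatory power to a
    regression of y on its own p lags?  Returns (F, df_num, df_den)."""
    n = len(y)
    Y = y[p:]                                   # left-hand side, t = p..n-1
    lags_y = np.column_stack([y[p - k : n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k : n - k] for k in range(1, p + 1)])
    const = np.ones(len(Y))

    def rss(design):                            # residual sum of squares of an OLS fit
        beta = np.linalg.lstsq(design, Y, rcond=None)[0]
        resid = Y - design @ beta
        return resid @ resid

    rss_restricted = rss(np.column_stack([const, lags_y]))      # y lags only
    design_full = np.column_stack([const, lags_y, lags_x])      # plus x lags
    rss_full = rss(design_full)
    df_num = p                                  # number of restrictions tested
    df_den = len(Y) - design_full.shape[1]
    F = ((rss_restricted - rss_full) / df_num) / (rss_full / df_den)
    return F, df_num, df_den

# Simulated example: y depends on the first lag of x, but not vice versa
rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

F_fwd, dfn, dfd = granger_f_test(y, x, p=2)     # large F: x helps forecast y
F_rev, *_ = granger_f_test(x, y, p=2)           # small F: y does not help forecast x
```

A large F relative to the F(df_num, df_den) distribution means the lags of x cannot be dropped, i.e. x Granger-causes y in this sample.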
The researcher is often looking for a clean result, such as X Granger-causes Y but not the other way around. In practice, less convenient results are common: neither variable Granger-causes the other, or each Granger-causes the other. Furthermore, Granger causality does not imply true causality. If both X and Y are driven by a common third process, but with different lags, there would be Granger causality; yet manipulating one process would not change the other.
Here is an example of the grangertest() function in the lmtest package for R:
    Granger causality test

    Model 1: fii ~ Lags(fii, 1:5) + Lags(rM, 1:5)
    Model 2: fii ~ Lags(fii, 1:5)
      Res.Df Df      F  Pr(>F)
    1    629
    2    634  5 2.5115 0.02896 *
    ---
    Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

    Granger causality test

    Model 1: rM ~ Lags(rM, 1:5) + Lags(fii, 1:5)
    Model 2: rM ~ Lags(rM, 1:5)
      Res.Df Df      F Pr(>F)
    1    629
    2    634  5 1.1804 0.3172
The first test asks whether the lagged rM terms can be dropped from the regression of fii on lagged fii and lagged rM. They cannot (p = 0.02896), so lagged rM helps explain fii. The second test finds that the lagged fii terms can be dropped from the regression explaining rM by lagged rM (p = 0.3172). From this we conclude that rM Granger-causes fii, but not the other way around.
The Granger test can be applied only to pairs of variables, and may produce misleading results when the true relationship involves three or more variables. (For example, when both variables being tested are "caused" by a third, they may have no true relationship with each other, yet yield a positive result in a Granger test.) A similar test involving more variables can be carried out with vector autoregression. Hacker and Hatemi-J (2006) developed a Granger-causality test that is not sensitive to the assumption of normally distributed errors; this method is especially useful in financial economics, since many financial variables are non-normally distributed.
An application to neuroscience is described in this article: http://www.sciencedaily.com/releases/2008/10/081009185035.htm