Risks for the Long Run: A Potential Resolution of Asset Pricing Puzzles

Bansal, Ravi, and Amir Yaron, “Risks for the Long Run: A Potential Resolution of Asset Pricing Puzzles,” The Journal of Finance Vol. 59 (2004), 1481-1509.

Like Barro (2006), Bansal and Yaron try to resolve the Mehra-Prescott (1985) equity premium puzzle by adding more risk.  Whereas Barro’s risk is the small probability of a major economic disaster, Bansal and Yaron’s risk is a permanent component in the consumption growth process.

With a permanent component, a negative shock to consumption at any point in time has effects reaching far into the future.  The shocks compound over time, in a sense.  This means consumers/investors should be very sensitive to news about consumption growth.
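
A minimal simulation sketch (Python; the parameters are, as I read the paper, its monthly calibration) makes the point: realized growth looks nearly i.i.d. in-sample, yet the small persistent component moves expected growth for decades.

```python
import numpy as np

# Bansal-Yaron consumption growth: a small, highly persistent component x_t
# drives expected growth, so shocks to x_t compound far into the future.
rho, phi_e, sigma, mu = 0.979, 0.044, 0.0078, 0.0015  # monthly calibration
T = 12 * 70                                           # 70 years of months
rng = np.random.default_rng(0)

x = np.zeros(T)  # persistent ("long-run risk") component of expected growth
g = np.zeros(T)  # realized consumption growth
for t in range(1, T):
    x[t] = rho * x[t - 1] + phi_e * sigma * rng.standard_normal()
    g[t] = mu + x[t - 1] + sigma * rng.standard_normal()

# The innovation to x is tiny (phi_e * sigma is about 0.0003), so g looks
# close to i.i.d. in-sample, yet a shock to x has a half-life of
# log(0.5)/log(rho), roughly 33 months.
print(f"first-order autocorrelation of g: {np.corrcoef(g[1:], g[:-1])[0, 1]:.2f}")
```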

The authors decline to give an economic rationale for why consumption growth should have a permanent component.  They say in the same breath that it is econometrically impossible to distinguish between i.i.d. consumption growth and a growth process with a permanent component, and that their model is consistent with observed data.  It follows that the opposite story (i.i.d. growth) is also consistent with the data!

Bansal and Yaron also equate consumption with expenditures, thus ignoring savings and durable goods, both of which I believe have important implications for consumer behavior and asset pricing.

Rare Disasters and Asset Prices in the Twentieth Century

Barro, Robert J., “Rare Disasters and Asset Prices in the Twentieth Century,” The Quarterly Journal of Economics Vol. 121, No. 3 (2006), 823-866.

Mehra and Prescott (1985) find an equity premium puzzle, which is that either

  • the observed equity premium and volatility imply very high consumer risk aversion, or
  • a more reasonable level of risk aversion would imply a risk free rate that is higher and more variable than the one we see in the data.

At a very high level, Barro and others attempt to solve the puzzle by introducing more risk into their models.  Rather than suggesting that consumers are indeed extremely risk averse, they allow risk aversion to be low but suggest Mehra and Prescott simply didn’t account for enough risk.  This explanation is extremely intuitive and flexible enough, I think, to encompass a wide variety of methods for adding risk.  So, I find the first few pages of this article instructive.  The probability of a rare disaster could certainly be something consumers have in mind.  The logic is solid.  The rest of the paper discusses a calibration that I find unconvincing at best.

The problems with the calibration are as follows.

  • Barro relies on a very small sample of rare disasters.  Although he cites 60 instances, they can mostly be reduced to four “events”: WWI, WWII, the Great Depression, and two decades of revolutions in South America.  It is difficult to believe that anything can be inferred from the timing of these occurrences.
  • In order to make inferences, Barro relies on three very strong assumptions, which are that (1) events of major social unrest are uncorrelated with one another, that (2) they are randomly and uniformly distributed across countries, and that (3) they are randomly and uniformly distributed across time.  This third assumption means that the probability of disaster is constant across time.
  • Finally, Barro conflates consumption and expenditures.  In other words, there are no durable goods and the representative consumer has zero savings.  When GDP falls, Barro assumes that consumption also falls and so expected stock returns must be high to offset this risk.  Durable goods and savings are two methods that consumers can use to smooth actual consumption.

Barro’s base model is meant to be simple, and he does acknowledge in closing that a stochastic disaster probability would be an obvious way to extend the model.  I can allow for the theoretical limitations.  The calibration, however, I find too incredible to be very useful.  If anything, he shows that a 1.7% annual chance of a bad event is about the right amount of extra risk to solve the equity premium puzzle.  He is not convincing that 1.7% is anywhere close to the actual probability of another world war or communist revolution.

Endogenous Disasters and Asset Prices

Petrosky-Nadeau, Nicolas, Lu Zhang, and Lars-Alexander Kuehn, “Endogenous Disasters and Asset Prices,” Charles A. Dice Center Working Paper No. 2012-1 (October 1, 2013).

Purpose: This model produces a realistic equity premium and stock return variance, and endogenously leads to rare economic disasters, at the confluence of small corporate profits, large job flows, and frictions in the matching process that connects unemployed workers with job vacancies.

Model:

  1. A representative household, with both employed and unemployed members, chooses its optimal consumption and asset allocation (holdings of shares in a representative firm and of a risk-free bond).
  2. A representative firm posts job vacancies, and unemployed workers apply for them.
    1. Vacancies are costly for the firm.
  3. The labor market is a matching function that produces jobs using vacancies and unemployed workers as inputs (see the sketch after this list).
    1. Matching frictions are composed of fixed and variable hiring costs.
    2. The wage rate is determined by a Nash bargaining process.
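
The congestion logic in the matching block can be sketched in code.  A minimal sketch, assuming the den Haan-Ramey-Watson CES matching function that is standard in this literature (the elasticity value below is illustrative, not the paper’s calibration):

```python
# Labor-market matching sketch, assuming the den Haan-Ramey-Watson CES
# matching function (a standard choice in this literature; iota illustrative).
def matches(u, v, iota=1.25):
    """New hires produced from unemployed workers u and vacancies v."""
    return (u * v) / (u**iota + v**iota) ** (1.0 / iota)

def vacancy_fill_rate(u, v, iota=1.25):
    """Probability that a posted vacancy is filled: q = m(u, v) / v."""
    return matches(u, v, iota) / v

# Recession: many unemployed, few vacancies.  Vacancies fill easily, and an
# extra vacancy barely moves the fill rate, so marginal hiring costs fall slowly.
print(f"recession fill rate: {vacancy_fill_rate(u=0.10, v=0.02):.2f}")  # ~0.90
# Expansion: few unemployed, many vacancies.  An extra vacancy crowds the
# others out, so fill rates drop and marginal hiring costs rise quickly.
print(f"expansion fill rate: {vacancy_fill_rate(u=0.04, v=0.08):.2f}")  # ~0.38
```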

Results:

  1. The model generates an equity premium of 5.70%, versus 5.07% in the data (adjusted for financial leverage).
  2. Annual stock market volatility in the model is 10.83%, versus 12.94% in the data.
  3. The model’s interest rate volatility is 1.34%, versus the observed 1.87%.
  4. The equity premium is countercyclical, both in the model and in the data.
  5. The ratio of vacancies to unemployed workers forecasts (with a negative slope) excess returns; this is confirmed in the data.
  6. Rare disasters are endogenous.
    1. The average peak-to-trough magnitude of a disaster is roughly 20%, both modeled and observed.
    2. The probability of a consumption disaster is 3.08% in the model and 3.63% in the data.
    3. The probability of a GDP disaster is 4.66% in the model and 3.69% in the data.
  7. Comparative statics
    1. Workers’ activities in unemployment are assumed to have a high value, which makes wages inelastic.  When output falls in hard times, wages fall less, and so the cyclical nature of profits and dividends is magnified.  This raises the equity premium and makes the stock market more volatile compared to other models.
    2. Job flows are assumed to be about 5%, consistent with previous literature (5% of the workforce quits each month), so frictions in the matching process contribute to macroeconomic risk.
    3. Matching frictions (especially fixed hiring costs) cause marginal hiring costs to fall slowly in a recession and to rise quickly in an expansion.
      1. In a recession, there are many unemployed workers and few vacancies.  An additional vacancy has only a slight impact on the likelihood of an existing vacancy being filled, so marginal hiring costs fall slowly.  As workers continue to attrite at a 5% rate, hiring may not keep up and the economy may fall off a cliff.
      2. In an expansion, there are few unemployed workers and many vacancies.  An additional vacancy in an expansion has a large (negative) impact on the likelihood of a vacancy being filled, so marginal hiring costs rise quickly, hampering the expansion.

Flights to Safety

Baele, Lieven, Geert Bekaert, Koen Inghelbrecht, and Min Wei, “Flights to Safety,” American Finance Association 75th Annual Meeting, Boston (2015).

Purpose:  To propose an empirical definition of a “flight to safety” episode, using only stock and bond return data.

Claim:  A “flight to safety” (FTS) is a day on which

  • Bond returns are positive.
  • Equity returns are negative.
  • Bond returns are negatively correlated with stock returns.
  • Equity return volatility is large (markets are stressed).

Methods:

  • The data cover bond and equity returns for 23 countries from January 1980 through January 2012.
  • In the literature, flights to liquidity may be as important as flights to quality.  Therefore, this paper looks at returns on highly liquid 10-year government bonds.
    • German bonds are the benchmark for Eurozone countries; local government bonds are the benchmark for all others.
  • Equity returns come from Datastream International indexes denominated in local currencies.
  • Develop a composite flight-to-safety indicator
    • Sort observations by variables that are conceptually increasing in likelihood of flight to safety.
    • Assign a ranking to each observation for each sort, then divide each rank by the total number of observations; these scaled ranks are the “ordinal numbers.”
    • Identify days that qualitatively appear to be “mild” flight-to-safety episodes:
      • bond returns are higher than stock returns,
      • bond returns are further above their 250-day average than are stock returns,
      • the short-term stock-bond correlation is negative,
      • the long-term stock-bond correlation is higher than the short-term correlation (it is less negative or positive),
      • equity return volatility is more than one standard deviation above its mean, and
      • short-term equity volatility is higher than long-term volatility.
    • Observations that fail to meet the qualitative test are given a FTS indicator of zero.
    • Observations that pass the test are given an indicator of 1 minus the fraction of failing observations that have a higher ordinal number (see the sketch below).
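
A rough Python sketch of the composite indicator.  The input columns (bond_ret, stock_ret, corr_st, corr_lt, eq_vol_st, eq_vol_lt) are hypothetical pre-computed series, a single sort variable stands in for the paper’s several sorts, and the screen omits the 250-day-average criterion:

```python
import numpy as np
import pandas as pd

def fts_indicator(df: pd.DataFrame) -> pd.Series:
    # Ordinal numbers: rank days by a variable that is increasing in the
    # likelihood of an FTS, scaled by the number of observations.
    signal = df["bond_ret"] - df["stock_ret"]  # illustrative sort variable
    ordinal = signal.rank() / len(df)

    # Qualitative screen for a "mild" FTS day (a subset of the criteria).
    vol_hurdle = df["eq_vol_st"].mean() + df["eq_vol_st"].std()
    mild = (
        (df["bond_ret"] > df["stock_ret"])     # bonds beat stocks
        & (df["corr_st"] < 0)                  # short-term correlation negative
        & (df["corr_lt"] > df["corr_st"])      # long-term correlation higher
        & (df["eq_vol_st"] > vol_hurdle)       # stressed equity market
        & (df["eq_vol_st"] > df["eq_vol_lt"])  # short-term volatility elevated
    )

    # Failing days get 0; passing days get 1 minus the fraction of *failing*
    # days with a higher ordinal number.
    fails = np.sort(ordinal[~mild].to_numpy())
    n_fail = max(len(fails), 1)  # guard for the degenerate all-pass case
    n_higher = len(fails) - np.searchsorted(fails, ordinal.to_numpy(), side="right")
    return pd.Series(np.where(mild, 1.0 - n_higher / n_fail, 0.0),
                     index=df.index, name="FTS")
```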

Results:

  • This methodology identifies major market crashes, including October 1987, the Russia crisis of 1998, and the Lehman bankruptcy.
  • In a flight to safety
    • Bond returns are 2%-3% higher than equity returns.
    • The Yen, US Dollar, and Swiss Franc appreciate.
    • The VIX increases.
    • Consumer sentiment falls.
    • Money markets, corporate bonds, and non-metal commodities have negative abnormal returns.
    • Liquidity suffers in both bond and equity markets.
  • Immediately following a flight to safety, economic growth and inflation decline for up to one year.

Cross-Sectional Dispersion in Economic Forecasts and Expected Stock Returns

Bali, Turan G., Stephen J. Brown, and Yi Tang, “Cross-Sectional Dispersion in Economic Forecasts and Expected Stock Returns,” American Finance Association 75th Annual Meeting, Boston (2015).

Purpose:  To show that economic uncertainty is an economically and statistically significant driver of the cross-section of stock returns.

Motivation:  In the ICAPM world, investors care not only about the expected payoff of their investments, but also about their portfolios’ covariances with state variables affecting both future consumption and opportunities for investment.

Data/Methods:

  • Measure economic uncertainty using
    • the dispersion of forecasts from the Survey of Professional Forecasters
      • real GDP growth and real GDP level
      • log (75th pctl forecast / 25th pctl forecast) * 100
    • cross-sectional dispersion in forecasts for output, inflation, and unemployment
  • Fama-MacBeth regressions
    • Sort into deciles based on market beta.
    • Find time-varying “uncertainty betas” of stocks using rolling regressions of stock excess returns on the uncertainty measure, and sort into sub-deciles (see the sketch after this list).
  • Economic Uncertainty Index
    • Use Principal Component Analysis to find the common component among seven different proxies for economic uncertainty.
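
A minimal Python sketch of two of the measurement steps, with hypothetical inputs (a panel of individual SPF forecasts, and a stock’s excess-return series sharing an index with the uncertainty series); the 60-month rolling window is my assumption:

```python
import numpy as np
import pandas as pd

def forecast_dispersion(forecasts: pd.DataFrame) -> pd.Series:
    """Uncertainty proxy: log(75th pctl forecast / 25th pctl forecast) * 100.
    Rows are survey dates, columns are individual forecasters."""
    p75 = forecasts.quantile(0.75, axis=1)
    p25 = forecasts.quantile(0.25, axis=1)
    return np.log(p75 / p25) * 100

def rolling_uncertainty_beta(excess_ret: pd.Series, uncertainty: pd.Series,
                             window: int = 60) -> pd.Series:
    """Time-varying uncertainty beta: rolling OLS slope of a stock's excess
    return on the uncertainty measure (the two series share an index)."""
    def beta(r: pd.Series) -> float:
        u = uncertainty.loc[r.index]
        c = np.cov(r, u)
        return c[0, 1] / c[1, 1]
    return excess_ret.rolling(window).apply(beta, raw=False)
```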

Results:

  • Stocks’ covariance with economic uncertainty is significantly negatively related to their future returns, after controlling for market beta, size, book-to-market, momentum, short-term reversal, illiquidity, co-skewness, idiosyncratic volatility, and the dispersion of analyst forecasts.
  • The beta of the proposed “uncertainty index” appears able to significantly predict future stock returns.

The Conditional CAPM and the Cross-section of Expected Returns

Jagannathan, Ravi, and Zhenyu Wang, “The Conditional CAPM and the Cross-section of Expected Returns,” The Journal of Finance, Vol. 51, No. 1 (1996), 3-53.

Purpose:  Use a modified “conditional” CAPM, which includes the return on human capital as part of the return on the market portfolio and which allows betas to vary across time periods, to explain the cross-section of stock returns.

Results:  The Conditional CAPM (CCAPM) explains the cross-section of a large portfolio of stock returns very well.  The CCAPM explains 30% of the cross-sectional variation, while the static CAPM (assuming constant betas) explains only 1%.  When the return on human capital is included in the market return, the CCAPM explains 50%, and size and book-to-market factors have very little explanatory power.

Theory:  The static CAPM of Sharpe, Lintner, and Black is given by the equation E(R_i) = \gamma_0 + \gamma_1 \beta_i, where \beta_i = \frac{\text{cov}(R_i, \: R_m)}{\text{var}(R_m)} is the regression coefficient of stock return i on the market return.  Letting the variables on the right-hand side vary with investors’ information set I_{t-1}, the Conditional CAPM is

E(R_{it}|I_{t-1}) = \gamma_{0t-1} + \gamma_{1t-1} \beta_{it-1}, where \beta_{it-1} = \frac{\text{cov}(R_{it}, \: R_{mt} | I_{t-1})}{\text{var}(R_{mt} | I_{t-1})}.

Taking the unconditional expectation of both sides,

E(R_{it}) = \gamma_0 + \gamma_1 \overline{\beta}_i + \text{cov}(\gamma_{1t-1}, \: \beta_{it-1}).

Now define the “beta-premium sensitivity” as \vartheta_i = \frac{\text{cov}(\beta_{it-1}, \: \gamma_{1t-1})}{\text{var}(\gamma_{1t-1})}, and substitute to get the CCAPM form

E(R_{it}) = \gamma_0 + \gamma_1 \overline{\beta}_i + \text{var}(\gamma_{1t-1}) \vartheta_i.

The PL-Model:

  • \overline{\beta}_i \text{ and } \vartheta_i are not observable, so define two unconditional betas:
    • \beta_i \equiv \frac{\text{cov}(R_{it}, \: R_{mt})}{\text{var}(R_{mt})}.
      • This “market beta” is the standard beta from the CAPM.
      • Decompose the market beta into one beta for the stock market, from a regression of stock returns on market returns, and another beta for the return to human capital, from a regression of stock returns on the growth rate of per capita labor income.
    • \beta_i^{\gamma} \equiv \frac{\text{cov}(R_{it}, \: \gamma_{1t-1})}{\text{var}(\gamma_{1t-1})}.
      • This “premium beta” is not a linear function of the market beta.
      • Use the yield spread between BAA- and AAA-rated bonds to proxy for \gamma_{1t-1} in calculating the premium beta.
  • The “PL-model” (Premium-Labor):  E(R_{it}) = c_0 + c_{vw} \beta_i^{vw} + c_{prem} \beta_i^{prem} + c_{labor} \beta_i^{labor}.
  • Also add size to the right-hand side to verify whether there is any residual size effect (the coefficient of this term should be zero if the PL-model holds).
  • Use the Generalized Method of Moments to test the PL-model.

Empirical Tests:

  • Starting in 1963, create 100 size and beta portfolios at the end of each June as in Fama and French (1992)
    • Use all NYSE and AMEX stocks in CRSP.
    • Sort into size deciles (based on the NYSE/AMEX universe)
    • Subdivide each size decile into beta sub-deciles based on pre-ranking betas
      • Find pre-ranking betas for each stock over the previous 60 months, and require at least 24 months of data (see the sketch after this list)
      • Regress stock returns on CRSP value-weighted returns
    • Calculate each portfolio’s equal-weighted return for each of the next 12 months after portfolio formation
  • Find portfolio value-weighted betas by regressing monthly portfolio returns on CRSP value-weighted index returns over the entire sample period.
  • Portfolio premium betas come from month-by-month regressions of portfolio return on the yield spread between BAA- and AAA-rated bonds.
    • Get bond yields from Table 1.35 in the Federal Reserve Bulletin.
  • Portfolio labor betas are found by regressing portfolio return on the growth in (the two-period moving average of) labor income, defined by
    • R_t^{labor} = \frac{L_{t-1} + L_{t-2}}{L_{t-2} + L_{t-3}}.
    • Personal income growth data come from Table 2.2 in the National Income and Product Account of the U.S.A., from the Bureau of Economic Analysis.
  • Portfolio size is the equal-weighted average of firms’ sizes, where a firm’s size is the natural log of (number of shares × share price) from CRSP.
  • Perform time-series regressions of portfolio returns on size and on the variables in the PL-model.
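
A minimal Python sketch of the pre-ranking beta step; the inputs (a DataFrame of monthly stock returns and the CRSP value-weighted return series) are hypothetical stand-ins:

```python
import numpy as np
import pandas as pd

def preranking_betas(returns: pd.DataFrame, mkt: pd.Series,
                     formation_date, lookback: int = 60,
                     min_obs: int = 24) -> pd.Series:
    """Slope of each stock's return on the market return over the `lookback`
    months before `formation_date`, requiring at least `min_obs` months."""
    window = returns.loc[:formation_date].tail(lookback)
    betas = {}
    for stock in window.columns:
        r = window[stock].dropna()
        if len(r) < min_obs:
            continue
        m = mkt.loc[r.index]
        c = np.cov(r, m)
        betas[stock] = c[0, 1] / c[1, 1]
    return pd.Series(betas, name="pre_beta")

# Stocks would then be sorted into size deciles and, within each, into
# sub-deciles on these betas; each of the 100 portfolios' equal-weighted
# returns is tracked for the 12 months after formation.
```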

The GMM Test:

  • It is possible to define a stochastic discount factor (SDF) d_t(\delta) = \delta_0 + \delta_{vw} R_t^{vw} + \delta_{prem} R_{t-1}^{prem} + \delta_{labor} R_t^{labor} such that E(R_{it} d_t) = 1.
  • E[w_t(\delta)] is the vector of pricing errors in the model.
  • Choose \delta to minimize the quadratic form E[w_t(\delta)]'[A]E[w_t(\delta)].
    • E[w_t(\delta)] = D_T \delta - 1_N.
    • D_T = \frac{1}{T} \sum_{t=1}^T R_t Y_t'.
    • R_t is the N×1 vector of portfolio returns in month t.
    • Y_t is the (up to) 4×1 vector of SDF factors (1, R_t^{vw}, R_{t-1}^{prem}, R_t^{labor}).
    • The Hansen-Singleton (1982) “optimal” weighting matrix A = [\text{var}(w_t(\delta))]^{-1} is model-specific, so it cannot be used to compare models.
    • Instead, choose the N×N weighting matrix G_T^{-1} = \left[ \frac{1}{T} \sum_{t=1}^T R_t R_t' \right]^{-1}, which does not vary across models.
  • \delta_T = (D_T' G_T^{-1} D_T)^{-1} D_T' G_T^{-1} 1_N
  • The square root of the minimized quadratic form, the Hansen-Jagannathan (HJ) distance, is the distance between the model’s SDF and the set of SDFs that correctly price the sample assets.  It can also be interpreted as the largest pricing error among the (normalized) portfolios of the assets being tested.  Both \delta_T and the HJ distance are sketched below.
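
The estimator and the HJ distance translate almost line-for-line into code.  A minimal sketch, where R is the T×N matrix of test-asset returns and Y the T×K matrix of SDF factors (a column of ones, R^{vw}, lagged R^{prem}, and R^{labor}):

```python
import numpy as np

def gmm_pl(R: np.ndarray, Y: np.ndarray):
    """Closed-form GMM estimate of the SDF coefficients and the HJ distance.
    R: T x N test-asset returns; Y: T x K SDF factors (first column of ones)."""
    T, N = R.shape
    D = R.T @ Y / T                     # D_T = (1/T) sum_t R_t Y_t'
    G_inv = np.linalg.inv(R.T @ R / T)  # weighting matrix G_T^{-1}
    ones = np.ones(N)
    # delta_T = (D' G^{-1} D)^{-1} D' G^{-1} 1_N
    delta = np.linalg.solve(D.T @ G_inv @ D, D.T @ G_inv @ ones)
    w = D @ delta - ones                # vector of pricing errors
    hj = np.sqrt(w @ G_inv @ w)         # Hansen-Jagannathan distance
    return delta, hj
```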

Stock Market Valuations across U.S. States

Bekaert, Geert, Campbell R. Harvey, Christian T. Lundblad, and Stephan Siegel, “Stock Market Valuations across U.S. States,” American Finance Association, 75th Annual Meeting, Boston (2015).

Purpose:  To show that the state-specific regulatory environment affects valuations, and to estimate the marginal impact of regulation.

Findings:

  • After controlling for leverage and earnings growth volatility, PE ratios vary across states within the same industry (segmentation).
  • State-specific financial deregulation decreases segmentation.
  • Stricter labor laws increase segmentation.
  • Higher state-specific unemployment is linked with higher segmentation.
  • Higher population density is linked with lower segmentation.
  • Segmentation has been decreasing since the mid-1970s.
  • Distance between a given state’s capital and New York’s capital is a statistically significant, but economically small, determinant of segmentation.

Methods:

  • Calculate the absolute difference in P/E ratios between an industry in a given state and the same industry in New York (the financial center of the U.S.)
    • Price data come from CRSP; earnings data come from Compustat.
    • Noise biases the measure upward, and the bias is larger for years or states with smaller firms.
  • A state’s level of segmentation is given by the value-weighted sum of the measure over all industries in the state (see the sketch after this list).
  • Regress the segmentation measure on variables
    • difference in leverage between industries in the given state and the same industries in New York
    • difference in earnings growth
    • difference in return volatility
    • number of firms in the state
    • time
  • Classify a number of regulation changes made during the sample period, and conduct difference-in-differences tests.
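
A minimal Python sketch of the segmentation measure for a single year, assuming a hypothetical DataFrame indexed by (state, industry) with each cell’s P/E ratio and market value:

```python
import pandas as pd

def state_segmentation(pe: pd.DataFrame, benchmark: str = "NY") -> pd.Series:
    """Value-weighted sum, over each state's industries, of the absolute P/E
    difference from the same industry in the benchmark state.
    `pe` has a (state, industry) MultiIndex and columns 'pe' and 'mktval'."""
    bench_pe = pe.xs(benchmark, level="state")["pe"]

    def seg(group: pd.DataFrame) -> float:
        g = group.droplevel("state")
        diff = (g["pe"] - bench_pe.reindex(g.index)).abs()
        weights = g["mktval"] / g["mktval"].sum()
        return (diff * weights).sum()

    return pe.drop(benchmark, level="state").groupby(level="state").apply(seg)
```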

Asset Prices and Business Cycles with Financial Shocks

Nezafat, Mahdi, and Ctirad Slavik, “Asset Prices and Business Cycles with Financial Shocks,” American Finance Association, 75th Annual Meeting, Boston (2015).

Purpose:  This paper introduces a DSGE asset pricing model in which shocks to firm productivity and to firm financial constraints lead to asset price volatility.

Model:

  • Setup
    • two consumers (entrepreneurs and laborers)
    • two goods (a consumption good and a capital good)
    • infinite, discrete time, with two subperiods in each period
      • in subperiod 1, all entrepreneurs hire labor and produce using the same technology.
      • in subperiod 2, a fraction of entrepreneurs are randomly presented with investment opportunities (i.e. the ability to transform the consumption good one-to-one into capital, without adjustment costs).
        • Firms not investing in new projects can purchase equity in other firms.
    • Equity is the only asset traded in the market (incomplete markets).
    • Firms’ financial constraint (the financial friction) is that there is a limit on how much of each new project can be sold as equity.
      • This limit changes over time, which is a theoretical contribution of this model.
    • Entrepreneurs and laborers maximize the present value of their consumption subject to budget constraints; wages and the return on equity are determined competitively; and markets clear.
  • Productivity and financial shocks:
    • Productivity shocks affect the wealth of all firms, changing how much they can spend on equity.
    • Financial shocks affect the funding of firms with investment projects, and determine how much equity is available to the market.
    • These two shocks directly influence the amount of equity traded and investors’ budget constraints, and so directly contribute to fluctuations in asset prices.

Results:

  • After calibrating the model to the U.S. economy, productivity shocks alone explain little of the observed volatility.
  • With both types of shocks, modeled asset price volatility is about 80% of the observed volatility of the stock market.
  • The model explains 70% of the observed equity premium.
  • This model also generates the volatility in investment that is observed in the data.
  • Unlike in previous models, the equilibrium here is not Pareto optimal.  The government could increase all agents’ welfare by extending loans to entrepreneurs with investment projects, thereby relaxing their financial constraints.

The Cross-Section of Volatility and Expected Returns

Ang, Andrew, Robert J. Hodrick, Yuhang Xing, and Xiaoyan Zhang, “The Cross-Section of Volatility and Expected Returns,” The Journal of Finance, Vol. 61, No. 1 (2006), 259-299.

Purpose:  To show that stocks with high volatility have low average returns.

Findings:

  • Stocks that are sensitive to aggregate volatility earn low average returns.
  • Stocks with high idiosyncratic volatility also earn low average returns.
    • This effect cannot be explained by exposure to aggregate volatility risk, size, book-to-market, momentum, or liquidity.

Methods/Data:  The first part of the paper looks at stocks’ sensitivity to aggregate volatility risk.  The second and more interesting part concerns idiosyncratic volatility.  Data are NYSE stocks for the period 1963-2000.

  • Aggregate Volatility
    • Create 5 portfolios, and measure their “beta_vix” as the sensitivity of their returns to changes in the VXO (the paper calls it “VIX,” after the newer volatility index that replaced the VXO in 2003).
    • The VIX is very highly autocorrelated (0.94 at the daily frequency), so the authors’ assumption that daily changes in the VIX proxy for shocks to volatility is probably justified.
    • Use beta_vix from month t-1 to predict returns in month t.
  • Idiosyncratic Volatility
    • Measure idiosyncratic volatility as the standard deviation of the residuals from a Fama-French three-factor model (see the sketch after this list).
    • Compare returns of volatility- and size-ranked portfolios.
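
The idiosyncratic volatility measure is easy to state in code.  A minimal sketch using statsmodels, assuming one month of daily observations per the paper’s formation window:

```python
import numpy as np
import statsmodels.api as sm

def idio_vol(excess_ret, mkt_rf, smb, hml):
    """Standard deviation of residuals from a Fama-French three-factor
    regression, run on one month of daily data (equal-length 1-D arrays)."""
    X = sm.add_constant(np.column_stack([mkt_rf, smb, hml]))
    resid = sm.OLS(excess_ret, X).fit().resid
    return np.std(resid, ddof=1)

# Stocks are ranked on month t-1 idio_vol, and the ranking is related to
# returns in month t and at longer horizons.
```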

Results:

  • High sensitivity to aggregate volatility is associated with lower average returns, because such a stock acts as a hedge against market volatility: it pays off when volatility rises and the broader market is stressed, so investors accept lower expected returns to hold it.
  • The aggregate volatility results are robust to controlling for liquidity, volume, and momentum, but not to time period.  The effect disappears if volatility from month t-2 is used to predict month t returns, or if month t-1 volatility is used to predict t+1 returns.
  • High idiosyncratic volatility means lower returns.  This result is robust to controls for size, book-to-market, leverage, liquidity risk, volume, share turnover, bid-ask spread, coskewness, dispersion of analyst forecasts, momentum, and aggregate volatility risk, and (unlike the aggregate volatility effect) to different time periods.
    • Volatility in month t-1 explains returns in month t+1.
    • Volatility in month t-1 explains returns in months 2-12.
    • Volatility over months t-12 to t-1 explains returns in month t+1.
    • Volatility over months t-12 to t-1 explains returns in months 2-12.
    • The effect is present in every decade of the sample period, and is stronger in the more recent half of the full period.
    • The effect is also significant both in periods of high aggregate volatility and in stable periods, in periods of recession and expansion, and in bull and bear markets.
  • The authors cannot rule out the Peso problem.
    • The term “Peso problem” comes from a study testing the efficient markets hypothesis in the Mexican stock market.  The data rejected market efficiency, the authors believed, due to investors’ expectation of a coming devaluation of the Peso.  The data ended in June without any devaluation observed, and the Peso was devalued two months later in August.  The Peso problem can be stated as the latent influence (leading or lagged) of something just outside the data window that affects statistical inference.