Communicate with your References

René Stulz gave the following advice to the Ph.D. students last week at a class meeting:

“When you publish a paper on SSRN, also email all the [important] references and say, ‘Here is a new paper on SSRN that cites your work.  I want to be sure I cited you correctly, and also to give you a chance to make any comments.’”

René said he has contacted references (and potential future referees) for every one of his papers since he began his career in the early 1980s.

He also suggested waiting to post a paper on SSRN until it is very polished.  Often, referees will base their judgment not on the most recent draft they’re looking at, but on the original SSRN draft that they remember.


Job Market Interview Questions

René did mock interviews yesterday with three of our current job market candidates.  Here is a list of a few of the questions he asked:

  • Tell me a little about yourself.
  • Tell me about your job market paper.
  • What do you want to teach?
  • What do you want to work on after you graduate?
  • What is the most important paper [in your specific field]?

Some general advice:

  • Often, the next question will be based on your previous answer.  Try to carefully steer the conversation away from things you don’t want to talk about.
  • Don’t sound canned, but have short 1-2 minute answers prepared for as many potential questions as you can.

The Mystery of Zero-Leverage Firms

Strebulaev and Yang (working paper, 2013)

A large percentage of publicly-owned U.S. firms (14% in 2000) have zero or almost-zero leverage, and this phenomenon is not confined to just a few years.  Zero leverage as a corporate policy appears to be persistent, and is not explained either by industry or by firm size.

In addition, many zero-leverage firms pay dividends, so it is not the case that this is driven by growth firms choosing zero leverage to avoid paying out earnings.

Compared to similar firms matched on size and industry, zero-leverage firms that pay dividends:

  • pay higher dividends
  • have higher cash balances

One potential explanation is an agency story where the manager prefers zero leverage, even if the shareholders may not.  This story finds support in the empirical findings that zero leverage is more likely in

  • firms with higher CEO ownership
  • firms with less independent or more CEO-friendly boards
  • family-owned firms.


Corporate Resilience to Banking Crises: The Roles of Trust and Trade Credit

Ross Levine, Chen Lin, and Wensi Xie (2016 NBER working paper)


  • Regress outcome on trust, crisis, and trust*crisis (difference-in-differences)
    • outcome ∈ {use of trade credit, profitability, employment}
    • trust is the extent to which people in a country trust each other.
      • Following La Porta, Lopez-de-Silanes, Shleifer, and Vishny (1997a) and Guiso, Sapienza, and Zingales (2008), they use responses to the World Value Survey, in which participants are asked whether they trust other people.
    • crisis equals 1 for the year in which a banking crisis starts and for each of the two following years (follow Laeven and Valencia (2012) in dating crises).
    • Data covers 3,600 manufacturing firms in 34 countries from 1990-2011, and comes from Worldscope.
  • Examine whether results are stronger in industries that rely more on short-term liquidity.
    • Define industry short-term liquidity needs as the ratio of inventory to sales, using U.S. firms only (data from Compustat).
      • the ratio of inventory to sales is supposed to capture the proportion of working capital that is financed by ongoing sales.
    • control for severity of crisis, maturity of country financial markets, macro-economic conditions, quality of government institutions, legal shareholder and creditor protections, etc.
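The difference-in-differences design above can be sketched on simulated data (a minimal illustration with made-up variable names and effect sizes, not the authors’ code or estimates):

```python
import numpy as np

# Sketch of: outcome ~ b0 + b1*trust + b2*crisis + b3*(trust*crisis),
# where b3 is the diff-in-diff coefficient of interest.
rng = np.random.default_rng(0)
n = 5000
trust = rng.uniform(0.1, 0.7, n)               # country-level trust (WVS-style share)
crisis = rng.integers(0, 2, n).astype(float)   # 1 in crisis-start year and the two following years

# Simulated data-generating process: crises hurt outcomes, but less so
# where trust is high (positive interaction) -- illustrative numbers only.
outcome = 1.0 + 0.5 * trust - 0.8 * crisis + 1.2 * trust * crisis + rng.normal(0, 0.1, n)

# OLS via least squares on [const, trust, crisis, trust*crisis]
X = np.column_stack([np.ones(n), trust, crisis, trust * crisis])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
b_interaction = beta[3]  # the trust*crisis coefficient
```

In the actual paper the regressions also carry the battery of controls listed above (crisis severity, financial-market maturity, institutions, etc.); this sketch keeps only the three terms that define the design.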


  • Firms’ outcomes are better (firms are more resilient) in high-trust countries, since they can rely on informal trade-credit financing when the banking channel is in crisis.  In other words, firms that have good relationships with their suppliers (and maybe customers) can rely on them in bad times.
  • The effect is stronger for firms whose operations imply greater short-term liquidity needs.

Production Chains

David K. Levine (2012 Review of Economic Dynamics)


  • In economies with greater specialization of agents, production chains are necessarily longer.
  • If shocks to (failures of) agents are randomly distributed, the longer chains have a greater probability of failure.
  • If shocks are correlated, the existence of chains where no agents fail is more likely, and so chains will be longer; however, these longer chains are more sensitive to changes in the probability of the failure of any single agent.
  • Shocks that are concentrated within production chains can be less costly than shocks that hit multiple chains, even if their first-order impacts are identical.
    • Consider an economy with two production chains, composed of equal sized firms. Let a shock be an instance where a firm fails, causing its production chain to shut down.  A shock that hits two firms in production chain 1 (and shuts it down) is less costly than a shock that hits one firm in each chain (shutting down both chains).
    • In an economy with 3 chains, a shock that hits two chains can be transferred so that it affects only one chain if the chains’ inputs are substitutes (see Table 1).
      • There are three auto manufacturing chains, using three specialized firms each that produce tires, pistons, and other parts.
      • Shock A hits all three firms in the production chain for Jaguars.  This chain shuts down, but the chains producing BMWs and Toyotas still operate.
      • Shock B hits the tire producer for Jaguar, the piston producer for BMW, and the other parts producer for Toyota.  However, if these parts can be substituted across firms, then Jaguar’s piston and other parts producers can be reassigned to the BMW and Toyota chains so that, as with shock A, only one chain shuts down.
      • Shock C hits all three tire producers.  All three chains shut down.
  • The author models the correlation of shocks as the probability r≥0 that a firm is in a chain where all firms are failures.  Given that a firm is not in such a chain, it fails with independent probability p.
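The two-parameter failure model above can be checked with a toy Monte Carlo (my own parameterization for illustration, not the paper’s code): with probability r the firm’s whole chain fails; otherwise each of the chain’s n firms fails independently with probability p, so a chain operates only if no firm fails, giving survival probability (1 − r)(1 − p)^n.

```python
import numpy as np

def chain_survival_mc(n_firms, r, p, trials=200_000, seed=0):
    """Monte Carlo estimate of the probability that a production chain operates."""
    rng = np.random.default_rng(seed)
    whole_chain_fails = rng.random(trials) < r                      # correlated shock
    any_firm_fails = (rng.random((trials, n_firms)) < p).any(axis=1)  # idiosyncratic shocks
    survives = ~whole_chain_fails & ~any_firm_fails
    return survives.mean()

n, r, p = 5, 0.05, 0.02
analytic = (1 - r) * (1 - p) ** n   # survival falls geometrically in chain length
simulated = chain_survival_mc(n, r, p)
```

The (1 − p)^n term makes the point in the second bullet concrete: longer chains, which specialization requires, are mechanically more exposed to idiosyncratic failures.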


  1. With specialization, correlation of shocks in production chains leads to higher expected output, higher welfare, longer production chains, and greater sensitivity to shocks.
    1. This greater sensitivity is the “price we pay” for the higher productivity.
  2. Correlation of shocks within production chains is less costly than correlation of shocks across chains, especially when inputs in one process can be substitutes for inputs in another.

Words of Wisdom – Rene Stulz

René Stulz holds a seminar for Ph.D. students in their third or higher year of studies, in which students present their research to one another and give/receive feedback.  In our first meeting of the 2016-17 year, he gave the following counsel:

  • By November of year 5, you should have several papers ready to share, with one of them polished to a very high level.  But never write bad papers just to increase your count.
  • When you go on the job market, people want to see:
    • skills
    • enthusiasm for your paper and for the profession – show that your interest goes beyond your job market paper
    • at least two solo-authored papers
    • at least one co-authored paper
  • Counsel for third-year students:
    • You don’t need a perfect idea to start a paper, otherwise you’d never do anything.
    • Start with an idea, and improve the idea as you work.
    • That being said, read a lot. “The worst thing you can do is to go and start writing a paper tomorrow.” You need to know how your idea fits into the literature and makes a meaningful contribution.
    • Stay up to speed on material from your previous classes, especially the finance classes.

Words of Wisdom – Shai Bernstein

Shai Bernstein from Stanford visited OSU last semester, and offered some advice in his meeting with the Ph.D. students.

  • Carefully document the questions you are asked during your presentations, and the success of your answers.
  • People don’t want to hear a lot of details and motivation in your job interview – they want to hear a simple, clear overview of 2-3 points that you want them to remember.  Your career will succeed along similar lines. The papers that are remembered are clear, powerful, and memorable.  Other papers float around and eventually fall through the cracks.

How Stable Are Corporate Capital Structures?

Harry DeAngelo and Richard Roll (2015 JF)


  • Lemmon, Roberts, and Zender (2008) and others make the argument that the cross-section of corporate capital structure is quite stable over long horizons.  LRZ show that, among a selection of determinants that are believed to be linked to capital structure, firm fixed effects are by far the most powerful predictor.  They also sort firms into leverage quartiles, and show that the high- (low-) leverage portfolios at time t also have the highest (lowest) leverage as much as 20 years into the future.
  • One interpretation, bluntly stated, is that 50 years of capital structure research has been barking up the wrong tree.  Researchers should go “back to the beginning” and rethink their approach.
  • Another interpretation of these findings is that the cross-section is stable over time, and therefore not interesting.  Researchers should confine themselves to explaining within-time variation in capital structure.
  • DeAngelo and Roll (this paper) present compelling arguments that (1) LRZ’s analysis masks variation of the cross-section over time, and so is somewhat biased toward finding stability and (2) capital structure at the firm level is wildly unstable, and the evolution of the cross-section is at least as interesting a research topic as the snapshot at each point in time.

Methods and Findings

  • The sample consists of 15,096 CRSP/Compustat firms over 1950-2008.
  • The authors test for firm-level stability by measuring the length of each firm’s “stable leverage regimes” – i.e. a period of time where the firm’s leverage stays within a narrow band of 0.05, 0.1, or 0.2.
    • Among firms that are in the sample for at least 20 years, only 20% have stable (using the bandwidth definition of 0.05) leverage for a ten-year period, only 4% have stable leverage for a 20-year period, and the median length of firms’ “longest stable regime” is only 6 years.
    • Among firms that are in the sample for the entire 59-year period, 52% have stable leverage (again, bandwidth of 0.05) for some 10-year period, 0% have stable leverage for a 40-year period, and the median length of “longest stable regime” is only 10 years.
  • They then show that, during firms’ stable regimes, leverage tends to be quite low, often less than 0.1.
  • A significant portion of the paper is dedicated to explaining why the results in Lemmon, Roberts, and Zender (2008) and MacKay and Phillips (2005) are misleading.
    • They use a creative specification that allows firm-time fixed effects with firm-time observations.  A textbook application of this would result in one fixed effect for each observation and tell the researcher absolutely nothing.
      • Following Scheffé (1959), they get around this by imposing some additional structure. They assume that firm-time interaction effects are stable over longer periods – they arbitrarily choose 10 years – so that they run regressions with firm-decade fixed effects.
        • Using firm-decade interactions significantly improves R-squared over a specification with firm fixed effects only.
        • ANOVA reveals that
    • DeAngelo and Roll (this paper) use a longer sample than the papers they criticize.  This is important, because if firm leverage changes slowly (this can be represented by adjustment costs to leverage), then the power of firm fixed effects will be overstated in short samples.  The problem is similar if many firms only appear in the sample for a few years.
      • This is also important because, as these authors argue, the LRZ sample begins in 1970 – after economy-wide increases in leverage as firms took advantage of post-war investment opportunities.  Thus, the later sample misses crucial time-series variation in leverage.
  • In another set of analyses, DeAngelo and Roll show that the cross-section varies meaningfully over time.
    • They measure the correlation between several pairs of cross-sections – the pairs {t,t+1}, {t,t+2},…,{t,t+40}.
      • The correlation falls quickly from 0.8 for {t,t+1} and approaches zero.
    • They show that firms in one leverage quartile at time t move across quartiles a lot in the ensuing years.  LRZ miss this because they look at average leverage of quartile groups.
  • The final part of the paper tests various theories of capital structure: (1) Miller’s (1977) random variation, (2) speed-of-adjustment (SOA) models, (3) flexible target ratio models, and (4) time-varying target (TVT) models.
    • They simulate a model that nests all these as special cases, and see which model(s) appear to best fit the data.
    • The TVT and flexible-target models seem to be the best, while Miller’s random leverage model is not supported.
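The “stable regime” measurement described above can be sketched in a few lines (my reading of the definition – leverage stays within a band of the given width, i.e., max minus min over the window is at most the bandwidth – not the authors’ code):

```python
def longest_stable_regime(leverage, bandwidth=0.05):
    """Length (in years) of the longest run of consecutive observations whose
    leverage ratios all fit inside a band of width `bandwidth`."""
    best = 0
    n = len(leverage)
    for start in range(n):
        lo = hi = leverage[start]
        for end in range(start, n):
            lo = min(lo, leverage[end])
            hi = max(hi, leverage[end])
            if hi - lo > bandwidth:
                break  # band violated; no longer window starts at `start`
            best = max(best, end - start + 1)
    return best

# Example: a firm that drifts, holds a flat stretch, then re-levers.
series = [0.10, 0.12, 0.30, 0.31, 0.33, 0.32, 0.34, 0.50, 0.10]
longest = longest_stable_regime(series)  # the five-year flat stretch
```

Computing this per firm and tabulating the distribution of `longest` across firms reproduces the kind of statistics quoted above (share of firms with a 10- or 20-year stable regime, median longest regime).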

The authors’ primary conclusion is that the cross-section is far from stable, and that within-firm variation of leverage over the time series is probably a response to the firm’s investment needs.


  • I liked this paper.  It documents new facts about the variability of firm-level debt ratios over time, and uses creative analysis.
  • The paper also leaves many questions unanswered.  It doesn’t explain why firms have such low leverage during stable regimes (though DeAngelo, Stulz, and Gonçalves currently have a working paper that tries to answer this question).  It doesn’t explain why, if macro factors (post-war environment) are so important, the cross section still changes so much from year to year.
  • Sample-selection bias is an accusation commonly leveled at papers whose data start after the post-war period.  Davis, Fama, and French (2000) made a similar criticism of Daniel and Titman’s (1997) argument that firm characteristics matter and risk-factor loadings don’t.  But it is not clear that the post-war decades are relevant for all analyses.
    • Fama and French’s three-factor asset pricing model seeks to explain only cross-sectional differences in returns, and is not concerned with how the cross-section changes over time.  High-return stocks in any period have different risk loadings, but the risk loadings are allowed to change over time in an unspecified manner.  In this case, extending the sample backwards should be safe.
    • But this paper is concerned with how the cross-section varies over time.  Firms’ “wholesale abandonment of conservative capital structure” in the 1950s and 1960s, as they took advantage of (potentially very rare) investment opportunities may not tell us much about how they have managed their debt ratios in the last 40 years.

Why Does Capital No Longer Flow to the Industries with the Best Growth Opportunities?

Dong Lee, Han Shin, and René Stulz, 2016 working paper

Industries with the highest average Tobin’s q get more net funding from investors (both debt and equity investors), consistent with properly-functioning capital markets, until the mid-1990s.  Since that time, the industries with the highest q receive less net funding.  There does not appear to have been any breakdown in the efficiency of corporate debt markets.  The findings are driven by high-q industries, which have reduced investment and increased share buybacks.

Methods and Findings:

  • Use the Fama-French 48 industries
  • drop financial and utility firms, as well as regulated industries (per Barclay and Smith (1995))
  • calculate Tobin’s q as the ratio of the market value of industry assets to the book value of industry assets
    • numerator: AT-CEQ-TXDITC + (PRC*SHROUT)
      • assets minus common equity minus deferred taxes and investment tax credit, plus market cap
    • denominator: AT (total assets)
  • Measure industry funding rate as the sum of net debt issuance (long-term debt) and net equity issuance
  • Industries in the top-funding quintile should have higher q (they don’t)
    • double check that high-funding industries are high-investment industries
  • Measure the cross-sectional correlation between funding rate and q
    • the correlation is mostly positive before 1995, and mostly negative after
  • Examine whether the q-differential between industries assigned to the lowest- and highest-funding quintiles at time t=0 disappears in subsequent years, consistent with limited investment opportunities and efficient markets
    • The q of industries in the low-funding quintile converges to the q of industries assigned to the highest-funding quintile over the next 1-5 years.
    • The q of high-funding industries, however, does not fall
  • Compare the high- and low-funding industries along three measures of growth
    • investment (capital expenditures) – high-funding industries also have higher investment, but this difference falls over the five years following assignment to funding quintiles
    • change in number of firms – high-funding industries see greater growth in the number of firms
    • growth in assets – high-funding industries see greater asset growth over the five years following assignment to funding quintiles
      • However, it is not the case that the low-funding industries are simply financially constrained, since they have higher dividend payout rates at the time of quintile assignment.
  • Regress funding rate on q and cash flow
    • the coefficient on q is significant for the sample period ending in 1996, but insignificant both for the post-1996 sample and for the whole 1971-2014 sample.
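The q measure used above can be written out explicitly (field names AT, CEQ, TXDITC, PRC, SHROUT are as given in the post; this is an illustrative sketch, not the authors’ code):

```python
def tobins_q(at, ceq, txditc, prc, shrout):
    """Standard corporate-finance Tobin's q:
    (book assets - common equity - deferred taxes & ITC + market cap) / book assets."""
    market_cap = prc * shrout
    return (at - ceq - txditc + market_cap) / at

# Example with round numbers: book assets 100, common equity 40,
# deferred taxes & ITC 5, price 8, 10 shares outstanding:
# q = (100 - 40 - 5 + 80) / 100 = 1.35
q = tobins_q(at=100.0, ceq=40.0, txditc=5.0, prc=8.0, shrout=10.0)
```

Note that the denominator is book assets, which is exactly the weak spot the discussion below pokes at: a firm whose main assets never hit the balance sheet gets an arbitrarily inflated q.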


  • This was an interesting read, and it addresses a fundamental economic question that perhaps not enough people are talking about:  do capital markets (still) work?
  • Since about the year 2000, firms have been returning funds to investors in the aggregate.  This matches an essay I read just this morning by Minneapolis Fed president Neel Kashkari, which cites lack of innovation as a possible explanation for the U.S. (and the world) economy’s anemic recovery from the last crisis.  If businesses no longer have anything important to work on, they shouldn’t invest.  Lee, Shin, and Stulz (this paper) find that it is high-q firms with high cash flows that are returning money through stock repurchases.
  • The paper is long on puzzles and short on solutions, but makes an effort to direct the path for future research.
  • This paper relies heavily on a measure of Tobin’s q, standard in corporate finance, which is, roughly, enterprise value divided by book assets.
    • AT-CEQ-TXDITC is supposed to measure the book value of the firm’s liabilities (debt and other non-equity claims).
    • This is not a good measure, and everybody knows it, and everybody still keeps using it!
      • What about a company with home-grown intellectual property?  Consider a firm with one asset – a patent on a new drug.  No buildings, no equipment, not even a stapler – just a patent.  The firm has a market value of $1 million.  The firm’s Tobin’s q, according to the standard measure, is infinity.  Should the firm keep investing?  Now suppose the patent cost $10 million in R&D expense to develop.  Was it worth it? Should the firm keep investing?
      • What about cases where (conservative) accounting depreciation and economic depreciation don’t line up? Consider a firm whose only assets are a plot of land purchased in 1900 for $10,000 and a fully depreciated warehouse built in 1975 (>40 years ago), for total book assets of $10,000.  Now consider another firm with an identical plot of land purchased in 2000 for $1 million, and a warehouse that serves the same purpose but was built in 2010, for total book assets of >$1 million.  Which firm has higher q, by the standard measure?  Which firm has the better investment opportunities?
  • Now, maybe the examples contrived above are too far from reality to be useful. Maybe conservative accounting valuation of assets is just as good today as ever.  But I’m not convinced.  I think today’s economy relies more on assets that are likely to have zero or low book value than in the past.
  • Even when some of these home-grown intangibles are sold and thereby acquire a book value, the most likely scenario is one where the company owning the assets is acquired, and the companies’ investment bankers use comparable transactions to try and assign a price tag to such assets as “trademark,” “customer loyalty,” “research database,” etc.  This is not a neat process, and intuitively should be even harder in a service-based economy than in the manufacturing economy that prevailed in prior decades.


  • Is the ratio of goodwill to book value higher now than in the past?  This would be consistent with conservative accountants systematically undervaluing acquired assets and with this undervaluing getting worse.  However, it would also be consistent with rising private benefits of control and, hence, with deteriorating corporate governance.  This is probably not the case, but would not be too difficult to analyze.
  • How do the “high-q” and “low-q” industries compare on R&D, on advertising expenses, on customer loyalty?
  • Finally, why do the analysis at the industry level?  Are all firms in industry 11 (Healthcare Services) or in industry 35 (Computers) supposed to have approximately similar, or even tightly correlated, investment opportunities?  Taking industry averages masks potentially large intra-industry heterogeneity.  It would be interesting to see if the high-q industries are pulled up by outliers, or vice-versa.

Takeaway: If you use the traditional measure of Tobin’s q, equity markets no longer allocate capital to the most efficient industries.  This measure of Tobin’s q is potentially problematic, and I see this paper as much as an indictment of the measure of q as of the efficiency of the equity markets.