The Loss Function Has Been Mislaid

McCloskey, Donald N., “The Loss Function Has Been Mislaid,” The American Economic Review Vol 75, No 2 (1985), 201-205.

  • Statistical significance is not the same as economic significance (most authors, most professors, and their audiences confuse this point).
  • A significance test tells us only how (un)likely it is that the estimated coefficient is close to its true value, given sampling error.
  • It is the scientist who must decide whether that estimated value is large enough to have intuitive/economic meaning.
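
A quick illustration of the distinction, with simulated data (not from the paper): in a large enough sample, an economically trivial coefficient can still be highly statistically significant.

```python
# Simulated illustration: statistical vs. economic significance.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                      # very large sample
x = rng.normal(size=n)
true_beta = 0.01                   # tiny effect: x explains ~0.01% of the variance of y
y = true_beta * x + rng.normal(size=n)

# OLS slope, conventional standard error, and t-statistic
beta_hat = np.cov(x, y, bias=True)[0, 1] / x.var()
resid = y - (y.mean() - beta_hat * x.mean()) - beta_hat * x
se = np.sqrt(resid.var(ddof=2) / (n * x.var()))
t_stat = beta_hat / se

print(f"beta_hat = {beta_hat:.4f}, t = {t_stat:.1f}")
# The t-statistic comes out around 10 ("highly significant"), yet whether an
# effect of 0.01 matters is an economic judgment the test itself cannot make.
```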

Data mining: a reconsideration

Mayer, Thomas, “Data mining: a reconsideration,” Journal of Economic Methodology 7:2 (2000), 183-194.

  • Data Mining
    • In the good sense, “data mining” means fitting multiple econometric specifications (in the simple case, multiple OLS regressions) to the data.  This is both reasonable and scientific.
    • In the bad sense, many economists implicitly equate data mining with running many regressions and then only reporting the one(s) that “work.”
    • It is important to report any results that are contrary to the hypotheses, even if they seem very unlikely.
  • Unbiased data mining means that results go unreported only for the following reasons:
    • The results fail statistical diagnostic tests
    • Their statistical test results are inferior to those of the reported results
    • They support the reported results
    • They are obviously wrong (such as a significantly negative coefficient that collective experience says should be positive)
  • The only case where biased data mining (purposefully omitting all contrary results) is acceptable is when the author is trying to show that a hypothesis might be correct.  In this case, the author’s intent should be clearly stated.
    • Contrary evidence can usually be found to all hypotheses and theories, so sometimes all we can do is show that we might be right.
  • Even unbiased data mining may be unacceptable if the researcher chooses diagnostic tests and/or significance cutoff levels with which his readers may not agree.
  • One possibility is for researchers to simply report more specifications, and for readers and referees to require them.
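
A small sketch of why this matters, using made-up data (not from the paper): regress pure noise on twenty candidate variables; reporting only the best-looking specification manufactures a “finding,” while reporting all of them reveals it as chance.

```python
# Simulated illustration: selective reporting vs. reporting every specification.
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 20
X = rng.normal(size=(n, k))        # 20 candidate regressors
y = rng.normal(size=n)             # y is pure noise, unrelated to every regressor

t_stats = []
for j in range(k):                 # one simple regression per candidate variable
    x = X[:, j]
    b = np.cov(x, y, bias=True)[0, 1] / x.var()
    resid = y - (y.mean() - b * x.mean()) - b * x
    se = np.sqrt(resid.var(ddof=2) / (n * x.var()))
    t_stats.append(b / se)
t_stats = np.abs(np.array(t_stats))

print("largest |t| across 20 specifications:", t_stats.max().round(2))
print("number 'significant' at the 5% level:", int((t_stats > 1.96).sum()))
# Reporting only the best specification suggests a real effect; reporting all
# twenty shows roughly the false-positive count chance alone would produce.
```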

Let’s Take the Con Out of Econometrics

Leamer, Edward E., “Let’s Take the Con Out of Econometrics,” The American Economic Review, Vol 73, No 1 (1983), 31-43.

  • statistical inference is not a precise laboratory-style science (parable of farmers, birds, and shade)
    • econometricians can interpret data, but cannot usually perform controlled experiments
    • even with randomly selected samples, the bias of the estimators can be presumed small, but it cannot safely be assumed to be zero
    • the uncertainty surrounding sample selection falls as the sample size increases
    • the uncertainty surrounding model misspecification does not fall with increased sample size, and cannot be inferred from the data
      • One way to decrease this uncertainty might be to collect data from two separate [non]experiments whose biases are independently distributed.  This results in a bias equal to the average of the two individual biases, and roughly half the misspecification uncertainty.
  • Only a model with infinite variables and infinite data is beyond all scrutiny
    • For any data set, there is an infinite number of polynomial equations that can fit the data points equally well.
    • For any experiment or nonexperiment, an infinite number of variables could plausibly affect the observed outcome (generating substantial degrees of freedom problems).
    • For a model with unlimited parameters, a finite data set can suggest infinite parameter values, each fitting the data to a different degree and appearing more or less believable.
  • Prior assumptions are the key
    • All inferences rely on assumptions formed before looking at the data.
    • It is best to use assumptions that are generally accepted, that are convenient, and that generate roughly the same results as the other assumptions in their (broad) class.
  • The Horizon Problem
    • Starting with a model and then adjusting the horizon until the model fits is a problem.
    • Starting with the data and then inferring a model is a problem because the same data cannot then be used to validate the data-inspired model.
    • Start with a model, determine beforehand the horizon that will be sufficient to validate it, and limit (but do not rule out) adding variables ex post.
  • Conclusions
    • Accept that all inferences rely on assumptions about which variables to include, how to collect the data, etc.
    • Make assumptions beforehand, and then show that the results are not very sensitive to reasonable changes in those assumptions (sensitivity analysis).
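
In this spirit, a minimal sketch of a Leamer-style sensitivity analysis, with made-up data and variable names: estimate the coefficient on a focal variable under every combination of “doubtful” controls and report the range of estimates.

```python
# Sensitivity-analysis sketch: how fragile is the focal coefficient to the
# choice of control variables?  (All data here are simulated.)
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 500
focal = rng.normal(size=n)                       # variable of interest
z1 = 0.6 * focal + rng.normal(size=n)            # a control correlated with focal
controls = np.column_stack([z1, rng.normal(size=(n, 2))])
y = 0.5 * focal + 0.8 * z1 + rng.normal(size=n)

def focal_coef(y, focal, Z):
    """OLS of y on [1, focal, Z]; return the coefficient on focal."""
    cols = [np.ones(len(y)), focal] + ([Z] if Z is not None else [])
    beta = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)[0]
    return beta[1]

estimates = []
for r in range(controls.shape[1] + 1):           # every subset of the controls
    for subset in itertools.combinations(range(controls.shape[1]), r):
        Z = controls[:, list(subset)] if subset else None
        estimates.append(focal_coef(y, focal, Z))

print(f"focal coefficient ranges from {min(estimates):.2f} to {max(estimates):.2f}")
# A narrow range means the inference survives changes in the doubtful
# assumptions; a wide range (as here, roughly 0.5 to 1.0) means it is fragile.
```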

Using Daily Stock Returns: The Case of Event Studies

Brown, Stephen J., and Jerold B. Warner, “Using Daily Stock Returns: The Case of Event Studies,” Journal of Financial Economics 14 (1985), 3-31.

Purpose:  To observe the statistical properties of daily stock data, and to explain what effect these properties can have on firm-specific event studies.

Motivation:  Event studies are commonly undertaken using daily data.  This paper examines issues with daily data and whether anything ought to be done to address them.

Findings:  There are several potential issues with using daily data:

  • Daily returns are markedly non-normal (fat-tailed), even though the central limit theorem suggests that cross-sectional mean excess returns should approach normality as the number of securities grows (see the sketch after this list)
  • A security’s return and the return on a market index are not always measured over identical intervals
    • This non-synchronous trading makes the OLS estimate of β biased and inconsistent.
    • This means observations can be serially dependent, complicating estimates of variance
    • This can also affect the calculation of the mean excess return
  • Estimating variance is further complicated by cross-sectional dependence between observations, and by evidence that variance increases in the days surrounding certain events like earnings announcements
  • Autocorrelation between returns is small but statistically significant
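
On the first point above, a sketch with simulated (not actual) return data: compare the excess kurtosis of a single fat-tailed return series with that of the cross-sectional mean across securities.

```python
# Simulated sketch: fat-tailed individual returns vs. a near-normal
# cross-sectional mean (central limit theorem at work).
import numpy as np

rng = np.random.default_rng(3)

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0          # zero for a normal distribution

n_days, n_securities = 5_000, 50
returns = 0.01 * rng.standard_t(df=5, size=(n_days, n_securities))   # fat tails

print("excess kurtosis, one security:        ",
      round(excess_kurtosis(returns[:, 0]), 2))
print("excess kurtosis, cross-sectional mean:",
      round(excess_kurtosis(returns.mean(axis=1)), 2))
# The single series shows pronounced excess kurtosis (fat tails); the mean
# across 50 securities is far closer to zero, i.e., far closer to normal.
```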

Data/Methods:  Samples of varying size are randomly selected from a pool of securities, and “event” dates are randomly assigned to the securities.  Expected returns for each security are estimated from its returns over days -244 through -6 relative to the event date, and excess returns are calculated on day 0 (the event date).  All data and event dates are for the period 1962-1979.  Parameter distributions for different sample sizes are examined.

Abnormal returns are “imposed” by adding a constant (e.g. 0.02 for 2%) to actual security returns on their event dates.  The frequency with which the null hypothesis of no abnormal return is rejected varies with sample size and with whether, and how large, an abnormal return has been imposed.  This technique measures the power of the various estimation methods.
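
A much-simplified sketch of this simulation design, using entirely made-up return data (the market-model parameters, variances, and the crude time-series variance estimator below are illustrative assumptions, not the paper’s actual figures):

```python
# Simplified event-study power simulation with simulated returns: estimate a
# market model over days -244..-6, compute the day-0 excess return, optionally
# impose an abnormal return, and record how often the null is rejected.
import numpy as np

rng = np.random.default_rng(4)

def one_trial(n_securities=50, imposed_abnormal=0.0):
    est_days = 239                                    # days -244 through -6
    market = rng.normal(0.0005, 0.01, est_days + 1)   # market return; day 0 is last
    alphas = rng.normal(0.0, 0.0005, n_securities)
    betas = rng.normal(1.0, 0.3, n_securities)
    noise = rng.normal(0.0, 0.02, (est_days + 1, n_securities))
    returns = alphas + np.outer(market, betas) + noise
    returns[-1] += imposed_abnormal                   # add abnormal return on day 0

    day0_excess = np.empty(n_securities)
    est_excess = np.empty((est_days, n_securities))
    X = np.column_stack([np.ones(est_days), market[:-1]])
    for i in range(n_securities):                     # OLS market model per security
        a, b = np.linalg.lstsq(X, returns[:-1, i], rcond=None)[0]
        est_excess[:, i] = returns[:-1, i] - (a + b * market[:-1])
        day0_excess[i] = returns[-1, i] - (a + b * market[-1])

    # Test statistic: mean day-0 excess return scaled by the time-series
    # standard deviation of the portfolio mean excess return (estimation period).
    t = day0_excess.mean() / est_excess.mean(axis=1).std(ddof=1)
    return abs(t) > 1.96

for level in (0.0, 0.01, 0.02):
    reject_rate = np.mean([one_trial(imposed_abnormal=level) for _ in range(200)])
    print(f"imposed abnormal return {level:.0%}: rejection rate {reject_rate:.2f}")
# With no imposed abnormal return the rejection rate sits near the 5% test
# size; it rises toward 1 as the imposed return grows, which is the power
# comparison the paper runs across estimation methods.
```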

Conclusions:

  • Methods using the OLS market model are good enough in most cases
  • The potential issues with daily data, though, are sometimes worth confronting
    • When the variance increases around an event date
    • When autocorrelation is especially high
  • Daily excess returns are non-normal, but mean excess returns converge to normality as sample size increases
  • Non-OLS methods do not improve the frequency of detecting abnormal returns under non-synchronous trading
  • Tests that adjust for cross-sectional dependence are less powerful and no better specified