The standard error of the ratio of two regression coefficients

One question I find comes up repeatedly around the department is how to find confidence intervals for various transformations of coefficients from a regression. One of the most common (and most problematic) transformations asked about is the ratio of two regression coefficients. This is, in fact, a very old problem, and, as with many things in regression, exact solutions are impossible without resorting to either the bootstrap or assumptions of normality.

If one is willing to assume normality, then I believe the following logic should apply.
Consider the standard regression setup:

y_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \dots + \beta_k x_{ik} + \epsilon_i

Written more compactly as y = X\beta + \epsilon. If \epsilon_i \sim N(0,\sigma^2_{\epsilon}), then, following the standard regression derivations, we have

\hat\beta \sim N(\beta,\sigma^2_{\epsilon}(X'X)^{-1})

Let us suppose our ratio of interest is \gamma=\frac{\beta_1}{\beta_2}, and that we are interested in knowing the distribution of the plug-in estimator \hat\gamma=\frac{\hat\beta_1}{\hat\beta_2}.
All we are asking here, as it turns out, is "what is the distribution of the ratio of two jointly normal random variables with known means, variances, and correlation?" This question was solved in 1969 by David Hinkley, who would later go on to be an important contributor to the development of the bootstrap, in his appropriately named paper
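
Before resorting to simulation, it is worth noting that the usual first-order delta method gives a quick analytic approximation to the variance of the ratio. This is not Hinkley's exact result, just the standard Taylor expansion of \hat\beta_1/\hat\beta_2 around the true coefficients:

Var(\hat\gamma) \approx \hat\gamma^2 \left( \frac{Var(\hat\beta_1)}{\hat\beta_1^2} + \frac{Var(\hat\beta_2)}{\hat\beta_2^2} - \frac{2\,Cov(\hat\beta_1,\hat\beta_2)}{\hat\beta_1 \hat\beta_2} \right)

This works well when \hat\beta_2 is far from zero relative to its standard error, but can be badly misleading when the denominator is imprecisely estimated, which is one reason to prefer the exact or simulation-based approaches below.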

Unsurprisingly, perhaps, given the complexity of the distribution, I have had little fortune finding actual implementations of the algorithm, barring some old C code from a retired medical researcher who solved the same problem in a different way at around the same time. Modern statisticians no longer need mathematics in the same way our forebears did, and most practitioners today would opt for computing the confidence intervals via Monte Carlo simulation. Here is a simple R function which I find performs the estimation quite well.

It takes an object returned by lm() as input, as well as the names of the numerator and denominator variables of interest, as its primary arguments.


RatioCI <- function(lmoutput, numerator, denominator, replications = 1000, confidence.level = .95){
  # A simple Monte Carlo algorithm for the CI of the ratio of two regression coefficients.
    # lmoutput: an object returned from running an OLS regression with lm()
    # numerator: the name of the numerator variable (as a string; use "(Intercept)" for the intercept)
    # denominator: the name of the denominator variable (as a string; use "(Intercept)" for the intercept)
    # replications: the number of Monte Carlo replications used in estimating the confidence interval
    # confidence.level: the size of the confidence interval (the computed interval should capture the true ratio confidence.level percent of the time)
    # CIout: an estimate of the confidence interval (numeric vector, two elements)
  require(MASS)  # provides mvrnorm()
  variance1 <- vcov(lmoutput)[numerator, numerator]
  variance2 <- vcov(lmoutput)[denominator, denominator]
  covar <- vcov(lmoutput)[numerator, denominator]
  Sigma.plugin <- matrix(c(variance1, covar, covar, variance2), nrow = 2)
  distribution <- mvrnorm(replications,
                          c(lmoutput[["coefficients"]][numerator],
                            lmoutput[["coefficients"]][denominator]),
                          Sigma.plugin)
  ratio <- sort(distribution[, 1] / distribution[, 2])
  CIindex <- c(floor(replications * (1 - confidence.level) / 2) + 1,
               ceiling(replications * (1 + confidence.level) / 2))
  CIout <- ratio[CIindex]
  return(CIout)
}

As a quick consistency check, Monte Carlo simulations run on a set of correlated independent variables show a 94.5% coverage probability for a 95% confidence interval with 1000 replications.
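
A self-contained sketch of such a check is below. The data-generating process, variable names, and the use of quantile() in place of the function's index arithmetic are my own choices for illustration:

```r
# Illustrative check of the Monte Carlo ratio CI: the true ratio is 2/4 = 0.5.
library(MASS)  # for mvrnorm()

set.seed(1)
n  <- 200
x1 <- rnorm(n)
x2 <- 0.5 * x1 + rnorm(n)          # correlated regressors
y  <- 2 * x1 + 4 * x2 + rnorm(n)
fit <- lm(y ~ x1 + x2)

# Draw coefficient pairs from their estimated sampling distribution,
# then take empirical quantiles of the implied ratios.
b     <- coef(fit)[c("x1", "x2")]
S     <- vcov(fit)[c("x1", "x2"), c("x1", "x2")]
draws <- mvrnorm(1000, b, S)
ci    <- quantile(draws[, 1] / draws[, 2], c(0.025, 0.975))
ci  # a narrow interval around the true ratio of 0.5
```

Repeating this across many simulated datasets and counting how often the interval brackets 0.5 gives the coverage probability quoted above.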


Education and Mathematics

One of my favorite debates is over how to teach effectively. As a disclaimer, I tried my hand at teaching for a year and didn't find it to my taste. My comparative advantage does not lie in the classroom, but I also can't help but feel that economics is taught extremely badly in this country. What drove this home was a conversation I had with a senior undergraduate while I was a first-year Ph.D. student. He was doing well in a class I had been assigned to help with and wanted to pursue a PhD in his own right. I asked the first question almost any professor would ask when discussing graduate study with a student: "What is your mathematics background?" He had taken one semester of calculus at a nearby community college and received an A, but had not studied further.

The immediate implication of this was that he could not go directly to a PhD, but would need to pursue a master's degree first in order to develop his math skills sufficiently. This is hardly damning, but a master's degree is expensive and costs another two years of life. It struck me at the time as profoundly unfair that someone could study economics for four years and yet still be unable to engage with the ongoing academic debates. Like the naive graduate student I was, I vowed to start integrating more mathematics into my coursework: calculus, probability theory, formal proof, and the other bread-and-butter tools I used in my day-to-day work.

I was forced, rather immediately after I began teaching, to scale back my ambitions. Some of my students were taking differential equations; others seemed unsure how to multiply fractions. My institution is not selective, and I could not make my course mathematical without severely disadvantaging my less advanced students. I still teach more math than any other instructor, but is it justified? In a non-selective institution like mine, only a tiny minority of students will go on to pursue PhD study. What about the others: does a deeper mathematical education help them? Hurt them?

This is an empirical question, and a somewhat simple one at that. If forcing more high-school students to take math is to their benefit, then reinforcing those skills in my own classes is likely beneficial. We know that states have increased their compulsory mathematics requirements at different points in time, so the challenge is merely one of tracking students before and after mathematics requirements are raised and controlling statistically for other factors that may have changed over the same period. In a relatively recent working paper, a public policy scholar (and economics PhD) at Harvard's Kennedy School did just that:

Here’s the Working Paper

His results are quite striking. Increasing the amount of mathematics required for a high-school degree increases lifetime earnings for black students by 5% to 9% per additional year of mathematics required. Not so for white students! Why? The most immediate answer is that mathematics requirements only have an impact on those students who are not likely to take the classes without being forced to. For example, as shown in the paper, changing the mathematics requirements from two to three years has almost no effect on private school students, as almost all of those students were likely to take more than two years of mathematics anyway. Black students in public schools are the most likely to eschew mathematics unless forced to endure it, and so a law requiring them to study advanced mathematics changes their behavior the most.

My interpretation is this: forcing kids to struggle through mathematics does make them, on average, more successful and productive later in life. I, however, was probably biased going into my reading of the study, and I invite others to examine the results for themselves.


(As an addendum, a professor at my school published a short paper in the American Economic Review on whether mathematics review makes students more successful in economics classes: the answer seems to be yes, but only in the short run. Eventually the students without the review re-learn the required skill set.)