Does a player’s “course history” predict performance?

A much-debated topic among golf fans is the relevance of so-called “course history” to a player’s performance in a given week. That is, do specific players tend to play well on specific courses?

Of course, there are intuitive reasons we can come up with to explain why this should, logically, be true. First, the characteristics of certain courses (e.g. length, fairway width, rough length, etc.) should favor players with certain characteristics (e.g. power, accuracy, etc.). Second, golf has a mental component to it; if a player develops a certain level of comfort (or, discomfort) with a given course layout, it makes sense that this higher (or, lower) comfort level will impact their performance at that course.

But, talk is cheap. I can also give you intuitive reasons as to why some players play better on Bermuda greens, or why some players play better when wearing white belts than when wearing black belts. These are theories, and for theories to gain credibility, you need to provide some empirical evidence that corroborates their predictions.

The mere existence of certain players who have a string of good performances at the same course is not, necessarily, strong evidence for the existence of course-player effects. It is true that Luke Donald has played unusually well (compared to his typical performance level) at Harbour Town. This is simply a fact, and can’t be disputed. However, did you know that Henrik Stenson had a great course history at Bay Hill, but played awfully there in 2017? It is easy to focus on the former point, and overlook the latter. The reason why Luke Donald playing well at Harbour Town doesn’t provide indisputable evidence for the course history hypothesis is that it is not based on a large enough sample of rounds (and yes, 25 rounds is still a small sample, especially in golf). Suppose there really are no course-player performance effects; unless everyone plays a very large number of rounds at each course, it would be astonishing if we didn’t find evidence of some golfers playing better, or worse, than usual at specific courses. The logic here is the same as if we had 300 people flip a coin 10 times; some people will get 8-10 Heads, or 8-10 Tails, simply due to the statistical variation inherent to finite samples.
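To put a number on that coin-flip intuition, here is a quick simulation sketch (Python; the probabilities follow directly from the binomial distribution, so the simulation is just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_flips, n_sims = 300, 10, 10_000

# Each of 300 people flips a fair coin 10 times; repeat the whole
# exercise many times and count the "streaky" people in each repetition.
heads = rng.binomial(n_flips, 0.5, size=(n_sims, n_people))
streaky = (heads >= 8) | (heads <= 2)   # 8+ heads or 8+ tails

print(streaky.sum(axis=1).mean())
# Roughly 33 of the 300 people look "streaky" on average, even though
# every coin is fair -- chance alone produces these patterns.
```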

More generally, finding differences among golfers with respect to some statistic can be thought of as a necessary first step to finding a meaningful metric for predicting golf scores. It’s true that if course history, or performance on Bermuda greens, is going to be a successful predictor of player performance, we first need to confirm that there exists substantial variation in the statistic (i.e. if we don’t find any variation in a player’s course-specific scoring averages, then clearly it can’t have any predictive power). But the next, critical, step is to show that this statistic actually predicts scores to some degree. People analyzing sports data love to do this first step (because it’s easy), but the second step isn’t done very often. So next time you see a list of players ranked by a statistic, the first question should be “Is there any evidence that this helps to predict scores?”.

In this article we are going to examine how well a player’s course history predicts their performance. Along the way, we’ll explore how to best predict golf scores, in general. The hope is that the evidence here can be taken as free of any personal bias from us (full disclosure: as anybody who follows us on Twitter knows, we have been on the “course-history is irrelevant” side of this debate).

Let’s get started. First, we want readers who haven’t analyzed golf data before to appreciate how much *random* variation exists in golf scores on the PGA Tour. Below, we’ve plotted the adjusted strokes-gained of two players on Tour from 2012-present. These scores are adjusted for course difficulty, so any remaining differences reflect only differences in golfer performance; that is, an adjusted score from the U.S. Open can be directly compared to an adjusted score from the Sony Open. (Also, from here on, when I use the phrase “raw scores”, or “strokes-gained”, I am referring to this adjusted measure of scores as just defined. See footnote 1 for a primer on how this adjustment works.)

The players in the plot are Dustin Johnson and another player who we’ll keep unnamed for a moment; take a guess at the (average) world rank of this other player during this period.

Notes: Plotted here are event-level averages; round-level data would show even greater variation. Data is from 2012-present. Positive values indicate better performances.

The unnamed player’s scores plotted here belong to Kevin Na; he has been solid in this period, having an average world rank of around 50th-60th. However, when you think of Dustin Johnson and Kevin Na, you likely imagine a wide gap between them with respect to their ability levels. But, with only a quick glance at the graph, it’s not immediately obvious who the better player even is! This is an attempt to highlight the fact that the scores of any individual golfer vary a lot.

Next, we add in our best estimates of Dustin Johnson’s and Kevin Na’s “ability” before each tournament (i.e. the score we expect them to shoot at each point in time – estimated from our model) throughout the time period:

Notes: Data points represent event-level average score. “Ability” is defined here, loosely speaking, as a weighted average of various historical scoring averages (2-year, 2-month, last event). Data is from 2012-present.

When you see the plots of their respective predicted abilities, it does become clear that Dustin Johnson has been the better player. Near the end of the sample period, DJ’s ability is estimated to be about 1 stroke per round better than Na; this is actually quite a big difference (relative to the typical differences in ability between PGA Tour players). However, when you see it plotted alongside their raw scores, this difference looks like small peanuts compared to the weekly (*random*) variation in an individual player’s scores. This is probably a good time to mention that we are only able to explain (or, successfully predict) about 15% of the variation in golf scores; the rest is unaccounted for! (If instead we were trying to predict round-level scores, this number drops to about 7-8%.)

Moving forward, let’s do one more quick exercise before we get to the analysis of course history. In the graph below we plot a few different scoring averages calculated over different historical time horizons. The goal here is to evaluate different ways of predicting a player’s scores. Graphically, we’ll just focus on Dustin Johnson’s data so things aren’t too crowded:

Notes: “2Y prediction” is plotting DJ’s strokes-gained average over the previous 2 years (from the date of each event), “2M prediction” is plotting his strokes-gained average over the previous 2 months, “Last event prediction” is his strokes-gained average in his most recent event, and finally, “Weighted prediction” is a weighted average of 2-year S.G., 2-month S.G., and last event S.G.; the *weights* are just the coefficients from a linear regression (using all the data, not just Johnson’s).
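As a rough illustration of how those weights might be estimated (this is a sketch, not our exact implementation; the DataFrame and column names are hypothetical):

```python
import statsmodels.formula.api as smf

# df: one row per player-event, with the realized event-level adjusted
# strokes-gained ('sg_event') and the three historical averages available
# before that event (hypothetical column names, assumed to exist).
fit = smf.ols("sg_event ~ sg_2y + sg_2m + sg_last_event", data=df).fit()
print(fit.params)                      # these coefficients are the "weights"

df["pred_weighted"] = fit.predict(df)  # the weighted prediction for each event
```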

So, what’s predicting best? Let’s calculate the average absolute deviation of our various predictions from the realized scores. To do this, we take the absolute value of the difference between every score and every prediction, and then average these. Here’s how the predictions did (this is for the entire sample, not just Johnson’s data):

What method predicts best? Average prediction errors:

  • 2Y prediction: 1.41 strokes
  • 2M prediction: 1.52 strokes
  • Last Event prediction: 1.86 strokes
  • Weighted prediction: 1.39 strokes

(Again, recall that this is all done with event-level averages.) The two main takeaways here are: 1) All the predictions do pretty poorly; the best we can do is miss a player’s average score at an event by 1.4 strokes (that is, this is our average prediction error); and 2) The 2-year strokes-gained prediction does almost as well as the optimal (i.e. “Weighted”) prediction method!
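For concreteness, the error comparison described above is just a mean absolute error calculation; a minimal sketch using the same hypothetical DataFrame as before (the prediction columns are likewise hypothetical names):

```python
# df: one row per player-event, containing the realized event-level adjusted
# strokes-gained ('sg_event') and each method's pre-event prediction.
methods = ["pred_2y", "pred_2m", "pred_last_event", "pred_weighted"]

for m in methods:
    mae = (df["sg_event"] - df[m]).abs().mean()
    print(f"{m}: average prediction error = {mae:.2f} strokes")
```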

Now, finally to the discussion of the relevance of course history. Up to this point, we have been predicting scores without using any course-player specific variables. The goal is to see whether adding in a player’s course history helps to predict their performance in a given week. So, what should we use as our measure of course history? Evidently, a course history variable defined as the average of a player’s raw scores at a course would be problematic, as this will be correlated with the general ability of the player. That is, at Augusta National, Dustin Johnson will likely have a better historical scoring average than Kevin Na, but that may be simply due to the fact that Johnson is typically better than Na at all courses, and not due to unusually good performances on the part of DJ at Augusta. Therefore, we first need to adjust scores for the ability level of the player at each point in time; we’ll call this the residual score. The residual score is how much better, or worse, a player played in each round compared to their ability level at the time. (See footnote 1 to see how we estimate each player’s ability; if you don’t want to read it, you can basically think of the “Weighted prediction” above as the player’s ability at any point in time. Then, the residual score is equal to the raw score minus this prediction.) Our course history variable is going to be defined as a player’s historical average residual score at the relevant course.

In words, we are asking: “Does the fact that Luke Donald has typically played better than expected at Harbour Town from 2010-2015 mean he will play better than expected at Harbour Town in 2016?” 

This is quite a nice approach, because even though Donald’s ability level has dropped off in recent years, we are only looking to see whether he plays better than what we’ve estimated his current form to be. So, Donald may, in terms of raw scores, play worse than he has in the past at Harbour Town in 2016, but if this is still above his current ability level then that would be evidence in favor of the course history hypothesis. (*Only for those who are interested* – for a discussion of why this approach is slightly different than controlling for current ability in a multi-variable regression, see Footnote 2).

Some final details: the estimating data is PGA Tour rounds from 2010-2017. We include all players who played at least 70 rounds in this time period (otherwise we are just bringing in a lot of unnecessary noise with players who’ve only played a few rounds). We predict event-level (or, event*course-level at events with multiple courses) performances using the years 2015-2017. The reason for that is we need to have enough historical years to construct meaningful course-specific scoring averages. To be clear, we predict 2015 scores using 2010-2014 course-specific averages, 2016 scores using 2010-2015 course-specific averages, etc.

The following simple regression is run:

\( Residual.score_{i} = \beta_{0} + \beta_{1} \cdot Historical.avg.residual.score_{i} + u_{i} \)

where the regressor is the player’s historical average residual score at the relevant course, and the dependent variable is the player’s average residual score in the current week (or his average at each course in the current week if it’s a multi-course event).
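For readers who like to see the pipeline in code, here is a rough sketch of the whole exercise (residual scores, course histories built only from prior years, the 15-round restriction used below, and the regression above); the round-level DataFrame and its column names are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# rounds: round-level DataFrame with (hypothetical) columns 'player',
# 'course', 'event', 'year', 'adj_sg' (course-difficulty-adjusted
# strokes-gained), and 'ability' (estimated ability at the time).
rounds["residual"] = rounds["adj_sg"] - rounds["ability"]

pieces = []
for year in [2015, 2016, 2017]:
    history = rounds[rounds["year"] < year]          # only prior years
    current = rounds[rounds["year"] == year]

    # Course history: average residual score per player-course pair.
    ch = (history.groupby(["player", "course"])["residual"]
                 .agg(["mean", "count"])
                 .rename(columns={"mean": "course_hist", "count": "n_rounds"})
                 .reset_index())

    # Current performance: average residual per player-event-course.
    cur = (current.groupby(["player", "event", "course"])["residual"]
                  .mean().rename("residual_now").reset_index())

    pieces.append(cur.merge(ch, on=["player", "course"]))

data = pd.concat(pieces)
data = data[data["n_rounds"] >= 15]                  # the 15-round restriction

fit = smf.ols("residual_now ~ course_hist", data=data).fit()
print(fit.params["course_hist"], fit.bse["course_hist"])  # slope and std. error
```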

Here is the main result:

Notes: Historical course-specific averages are calculated from 2010 up to year of interest. Dependent variable is current week’s average score. All scores have been adjusted for a player’s current form (i.e. they reflect how much better or worse a player performed than expected). Regression is using data from 2015-2017; sample is restricted to those with at least 15 rounds in their course history.

The slope of the regression line is 0.12 – this means that for every 1 stroke increase in a player’s course history (i.e. his course-specific historical average of residual scores) his expected score increases by 0.12 strokes. Importantly, this graph is constructed only using players with course histories comprised of at least 15 rounds (this leaves ~ 2000 observations). As can be seen from the plot, course history is providing a very noisy signal; there are plenty of players who had good course histories (i.e. further right on the x-axis) but play very poorly in the current week, and vice versa. Of course, on the whole, having a better course history correlates slightly with better performance that week (as evidenced by the upward sloping regression line). For those interested, the estimated slope has a standard error of about 0.05 – so, pretty noisy.

In the full sample (i.e. no restriction on minimum number of rounds played at the course, other than it being greater than zero), course history has basically no impact on expected score: a 1 stroke increase in course history increases the predicted score by about 0.02 strokes. However, with the full sample, there are many observations in which a player only has 2-4 rounds to construct a course history; this adds a lot of statistical noise. Perhaps unsurprisingly, the estimate of the course history effect gets larger as the round cutoff is made more strict, culminating with the result shown in the plot above (a 1 stroke increase in course history average is associated with a 0.12 stroke increase in expected performance). We could keep making the minimum round cutoff stricter, but eventually the sample becomes too small for reliable inference. For a reference point, the coefficient on short-term form (say, the previous 2-3 months) from a similar regression would be about 0.15, and the coefficient on long-term form (2 years) would be about 0.75 – 0.80.

In terms of predictive power (i.e. the “R-squared” of a regression), course history has very little. Recall that before we were able to predict about 15% of the variation in scores at the event-level (i.e. R-squared equals 0.15). The R-squared of the course history regressions ranges from 0.02% (!!) (full sample) to 0.2% (restricting to course histories with at least 15 rounds). The R-squared is only a function of two things: 1) the size of the coefficient, and 2) the variance of the course history variable (relative to the variance of scores). There is a decent amount of variation in course histories across players, so the reason the R-squared is so small is mainly just due to the small coefficient size. (See footnote 3 for a short discussion on this.)

To conclude, in this article we’ve shown that long-term form is king when it comes to predicting golf scores. However, short-term form does provide a slight improvement in predictive power. Course history, defined here as how much better than expected a player has historically played at a course, is found to impact performance to some degree: we estimate that increasing the course history measure by 1 stroke increases our predicted score by at least 0.02 strokes, and by at most 0.12 strokes (the former using all course history data, the latter obtained only using course histories calculated from at least 15 rounds). But, despite the somewhat meaningful impact course history has on predictions (0.12 strokes is meaningful, in our opinion), it adds virtually no predictive power (as evidenced by an extremely low R-squared). Moving forward, we will keep course history in mind when modelling golf scores, but it trails far behind long-term form, and to a lesser degree short-term form, in its relevance to predicting golfer performance.

Footnotes:

1. We use a slightly different (and, better) method to properly adjust for course difficulty and to estimate player ability than we have in previous work. We roughly follow the method used in Connolly and Rendleman (2008). The naive way to adjust any given round for course difficulty is to subtract the mean score for the field that day. This can lead to erroneous conclusions about course difficulty, however, because not all fields are the same in terms of average skill level. Subtracting off the mean will tend to overvalue rounds played against weaker fields, and undervalue rounds played against stronger fields. To account for field strength, we have in the past estimated a fixed effects regression of the following form:

\( Score_{ij} = \mu_{i} + \delta_{j} + \epsilon_{ij} \)

where \( \mu_{i} \) represents a fixed player skill level for player i, and \( \delta_{j} \) represents the course difficulty for a given round j. We augment this specification by allowing \( \mu_{i} \) to vary over “golf time” (this is the chronological sequence of rounds the golfer plays). Consider the following:

\( Score_{ij} = \mu_{i}(t) + \delta_{j} + \epsilon_{ij} \)

where \( \mu_{i}(t) \) is now a time-varying measure of player ability (where time is specific to each player, and represents their sequence of rounds). We estimate this in a cool way, using an iterative process; the basic idea is outlined in the Rendleman article linked above. The bottom line is that we allow each player’s ability to vary over time (whereas before, it was forced to be fixed over time). This is especially important because our estimating sample spans 9 years (with just a year or two of data, the fixed ability assumption is probably not unreasonable). Recall that in other parts of this article, player ability was defined as the weighted average of 2-year, 2-month, and last event scoring averages. The ability measure here is preferable because it uses data both before and after each point in time to estimate player ability (whereas the other method clearly just uses historical data – which is obviously all you have when you are doing a prediction exercise!).

From this, our adjusted score variable is defined as \( Score_{ij} - \delta_{j} \), and the residual score variable is defined as \( \epsilon_{ij} \).
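For those who want to see the mechanics, here is a heavily simplified sketch of this kind of iterative (backfitting) procedure: alternate between estimating round-level course difficulty given the current ability estimates, and re-estimating each player’s time-varying ability by smoothing their adjusted scores over their own round sequence. We use a centered rolling mean as a stand-in for the smoother, and hypothetical column names; the actual implementation follows Connolly and Rendleman more closely.

```python
import pandas as pd

# rounds: round-level DataFrame (hypothetical column names) with 'player',
# 'round_id' (identifies a course-day), 'score', and 'seq' (the chronological
# round number within each player's career).
rounds = rounds.sort_values(["player", "seq"]).copy()
rounds["ability"] = 0.0          # initial guess: everyone is average

for _ in range(20):              # a fixed number of passes, for simplicity
    # Step 1: course difficulty = mean score in that round, net of the
    # current estimates of the players' abilities.
    rounds["delta"] = (rounds["score"] - rounds["ability"]
                       ).groupby(rounds["round_id"]).transform("mean")

    # Step 2: time-varying ability = smoothed adjusted scores over each
    # player's own round sequence (rolling mean as the smoother).
    adj = rounds["score"] - rounds["delta"]
    rounds["ability"] = (adj.groupby(rounds["player"])
                            .transform(lambda s: s.rolling(50, min_periods=1,
                                                           center=True).mean()))

# Final quantities, as defined in the footnote:
rounds["adj_score"] = rounds["score"] - rounds["delta"]
rounds["residual"] = rounds["adj_score"] - rounds["ability"]
```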

2. An obvious way to approach this problem would have been to run a regression where you control for a player’s current form using various historical averages (e.g. 2-year S.G., 2-month S.G., etc.) and then include the raw course history average in the regression as well:

\( adj.score_{i} = \beta_{0} + \beta_{1} \cdot adj.score.2Y_{i} + \beta_{2} \cdot adj.score.2M_{i} + \beta_{3} \cdot adj.score.ch_{i} + \epsilon_{i} \)

The dependent variable is the adjusted score, and the regressor of interest is \( adj.score.ch_{i} \). This is not quite the same as what we are doing in the body of this article. The difference is very subtle; the interpretation of \( \beta_{3} \) is the effect of a player’s historical course-specific scoring average on this week’s performance after controlling for the player’s current form (as defined here by 2-year S.G. and 2-month S.G.). Conversely, in the body of the article, the method we are using can be thought of as controlling for the form of a player at the time they played the course. To clarify with an example: the former method asks: “Does the fact that Luke Donald played better at Harbour Town in the past than what his current form indicates mean he will play better this week?”, while the latter method asks: “Does the fact that Luke Donald has typically played better than his form at the time at Harbour Town in the past mean he will play better than expected at Harbour Town this week?” If my intuition is right (and it may not be, I’m still grappling with this a bit) these two methods would seem to be the same if a player’s form hasn’t changed much in the time period we are considering. Anyways, for what it’s worth, doing it with the regression controlling for current form gives almost identical results to those reported in the body of the article.
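In code terms, the two specifications are simply two OLS calls on different variables; a minimal sketch (hypothetical column names again, with the course-history regressors built as described above):

```python
import statsmodels.formula.api as smf

# Former method: control for current form inside the regression itself.
former = smf.ols("adj_score ~ sg_2y + sg_2m + ch_adj_score", data=df).fit()

# Latter method (used in the body): residualize against ability first,
# then regress on the historical average residual at the course.
latter = smf.ols("residual_now ~ ch_residual", data=df).fit()

print(former.params["ch_adj_score"], latter.params["ch_residual"])
```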

3. Intuitively, the R-squared of a regression is the proportion of the variance in the dependent variable that is *accounted for* by the included regressors. In the simple case of a single independent variable (e.g. X), the R-squared is equal to:

\( R^{2} =  \beta_{1}^{2} \cdot Var(X) / Var(Y) \)

where in our context, X is the course history variable, Y is the current week’s average score, and \( \beta_{1} \) is the regression slope coefficient. Evidently, this measure can only be small if the coefficient is very small, or the variance of X is small (relative to the variance of Y). In the full data, the variance of X is 1.48, while the variance of Y is 3.03; therefore, it’s the small size of the coefficient that is driving our very small R-squared (~0.0002, or 0.02% in the full data) result.
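Plugging in the full-sample numbers makes the point concrete:

\( R^{2} \approx (0.02)^{2} \cdot 1.48 / 3.03 \approx 0.0002 \)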