It’s been a very forecast-y week. Between (finally) reading Phil Tetlock’s excellent Expert Political Judgment, going to a half dozen panels at ISA on forecasting and event data, and today’s NPR story about the Good Judgment Project, I’ve been thinking a lot about how to make political forecasts and how we know when they’re good. I wanted to share one of the tools I’ve been using for forecasting and calculating subjective probabilities.

The Good Judgment Project is a study of political forecasting, run by Phil Tetlock and others and funded through IARPA’s ACE program (the NPR story linked to above gives a good background). The project aggregates forecasts by participants, studying whether some people can consistently out-perform chance and whether those people can be identified and trained. Since becoming a participant this summer, I’ve thought a lot about subjective probability and how to update my beliefs in light of new evidence. Bayes’ rule is perfect for this task, and I’ve tried to use it to anchor my forecasts to the base rate or to my prior belief about the event occurring. Because the Good Judgment Project (in my experimental condition, at least) simply asks for a point estimate for the probability of an event occurring, the basic version of Bayes’ rule with point estimates works well for this.

If you can establish p(H), p(E|H), and p(E|~H), you can calculate your posterior trivially, either by hand or in a simple program. While this approach works for simple anchoring to the base rate/prior, it omits one of the great strengths of Bayesian analysis: the information about uncertainty that’s contained in the distributions, rather than point estimates, for the prior and posterior.
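The point-estimate update really is trivial. Here’s a minimal sketch (in Python rather than R, and not the calculator itself; the function name and example numbers are my own) of Bayes’ rule with the three inputs above:

```python
# Point-estimate Bayes' rule: update p(H) after observing evidence E,
# given the likelihood of E under H and under ~H.

def bayes_update(p_h, p_e_given_h, p_e_given_not_h):
    """Return p(H|E) from a prior p(H) and the two conditional likelihoods."""
    # Total probability of the evidence under both hypotheses:
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# E.g., a 30% prior, with the evidence twice as likely if H is true:
posterior = bayes_update(p_h=0.3, p_e_given_h=0.8, p_e_given_not_h=0.4)
# posterior = 0.24 / 0.52, roughly 0.46
```

Because the evidence is more likely under H than under ~H, the posterior moves above the 30% prior, but the anchoring to the prior keeps it well short of certainty.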

Real, full Bayesian analysis (see, e.g., Gelman et al.’s excellent Bayesian Data Analysis) treats all inputs and outputs as probability distributions, and calculates the likelihood from the probability of observing a series of data points given a parameter. The posterior is then approximated computationally. This full analysis seems excessive and doesn’t necessarily make sense for inputs that are completely subjectively determined.

What I wanted, instead, was a way of melding a completely subjective prior distribution with a simple subjective estimate of the likelihood (i.e., how likely is this event assuming the hypothesis is true, and how likely is it assuming the hypothesis is false).

This calculator is a simple way of eliciting a prior distribution, asking about the likelihood of an event under the prior distribution, and then spitting out an updated probability distribution based on that information. I wrote it in R using ggplot2 and Shiny (with plans to update it for ggvis now that it’s out). It’s hosted by shinyapps.io on an Amazon EC2 Micro instance, so give it some time to update after you change your parameters.
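The underlying update can be sketched with a simple grid approximation. This is my own illustrative version in Python, not the R/Shiny code: I assume the elicited prior over the event probability theta is a Beta(a, b) shape on a grid, and I model the subjective likelihood of the evidence as an interpolation between the two point likelihoods, p(E|theta) = theta·p(E|H) + (1−theta)·p(E|~H):

```python
import numpy as np

def grid_update(a, b, p_e_given_h, p_e_given_not_h, n=1001):
    """Discretize a Beta(a, b) prior over theta = p(H) on a grid, then
    reweight each grid point by p(E|theta) and renormalize."""
    theta = np.linspace(0.0, 1.0, n)
    prior = theta**(a - 1) * (1 - theta)**(b - 1)  # unnormalized Beta pdf
    prior /= prior.sum()
    # Subjective likelihood of the evidence at each theta (assumption:
    # linear interpolation between the two point likelihoods):
    likelihood = theta * p_e_given_h + (1 - theta) * p_e_given_not_h
    posterior = prior * likelihood
    return theta, posterior / posterior.sum()

# A prior concentrated below 0.5, then evidence that favors H:
theta, post = grid_update(a=2, b=5, p_e_given_h=0.8, p_e_given_not_h=0.4)
```

Because the likelihood is increasing in theta here, the posterior mass shifts toward higher values of theta relative to the prior, and you keep a full distribution (so you can read off intervals, not just a point estimate).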

Any feedback on the calculator is welcome, and I hope you find it useful.

Andrew – Great post. I’m the CEO of Inkling Markets (http://inklingmarkets.com) and we are working with both the Good Judgment Project (you may or may not use our prediction market as part of the study) and also with another IARPA-sponsored project called SciCast, which is focused on science and technology forecasting.

I think you would be quite interested in SciCast (https://scicast.org) as it uses a Bayes net to support the forecasting activity. In fact, SciCast is a “combinatorial” prediction market, which means the probabilities of outcomes can be made to affect each other. Think: if the probability of snow is high on Thursday, the probability of traffic delays on the beltway should also be high.

Anyways, feel free to reach out if you want to talk more at @scicasters or @inklinghq or @amsiegel 🙂