Forecasting
Show Notes
Forecasting resources
- Superforecasting: The Art and Science of Prediction (book) by Philip E. Tetlock and Dan Gardner (Amazon, Goodreads)
- The Signal and the Noise: Why So Many Predictions Fail - But Some Don't (book) by Nate Silver (Amazon, Goodreads)
- How to Measure Anything: Finding the Value of Intangibles in Business (book) by Douglas Hubbard (Amazon, Goodreads)
- Planning Fallacy (article) by Eliezer Yudkowsky. Discusses Inside/Outside view as well.
Self calibration
- Prediction Book
- AirTable calibration template by Ulysse Pence
- Notion.so calibration template by Ulysse Pence
- Credence Calibration Game by Alexei Andreev, Zachary Alethia, and Andrew Critch
- Decision Journal (more qualitative predictions)
Prediction Markets
- Augur
- PredictIt
- InTrade (inactive)
- Prediction Markets (article) by Gwern
Music
- Intro music: Vlog Music Cafe Type Hip-Hop Instrumental Chill Lo-Fi Beat by Oliwia Orłowska
- Outro music: Game Over [Super Mario World Lofi/hiphop remix] by Neighborhood Vandal
Transcript
Ulysse: Hey, this is the Growth Podcast with your hosts, Ben and Brendon.
Brendon: Today, we'll be talking about forecasting: the art and science of predicting the future.
Brendon: So Ben and I decided to talk about forecasting today because forecasting, as it's currently practiced, has a few major issues. People do make a lot of forecasts, but the forecasts being made aren't very reliable and don't let us predict the future as well as we'd like. And that matters because seeing what's going to happen in the future can have a big influence on the decisions people make in the present. I would say there are three major issues with the way most forecasts currently work.
Brendon: Number one would be that the forecasts aren't probabilistic. If you say, I think it's somewhat likely that XYZ will happen, what does somewhat likely mean? Is that 25%? Is it 40%? Is it 60%? Does it mean an 85% probability? It could mean almost anything. The words we use to express predictions have so little precise meaning that if we only use words to describe the likelihood of things happening, we never get a good grasp of how likely those things actually are. And obviously there's a really big difference between a 1 in 10 chance of something happening and a 9 in 10 chance of something happening. Somewhat likely just isn't going to cut it.
Ulysse: Right. If you're not specific with your words, you always sort of have this plausible deniability.
Brendon: Exactly.
Ulysse: You can kind of deny any kind of claim that you made.
Brendon: Yes. And that relates to the second issue I see with most forecasts, which is that they're also not very specific. That comes from the incentive you just mentioned: plausible deniability about whether your forecast was correct. If you word your forecast in a vague way, or in a way that's difficult to judge after the fact, then you can make your record appear like you're always right. So the unclear, subjective probabilities, along with the unclear and subjective specificity of the forecasts, definitely contribute to that issue.
Brendon: And then number three, sort of a more minor problem: a lot of the forecasts being made don't come with built-in time horizons. If you just say something's going to happen without saying when it's going to happen, that's obviously not very helpful for decision making, and it also makes it fairly easy to appear correct. For example, say you're forecasting the success of a certain product. The product might initially perform poorly at launch but then perform well a few years later. If you predicted the product would succeed, then at various points in time your forecast would appear accurate or inaccurate. And so that's why the time horizon is really important.
Ulysse: Right. Or you could look at the common example of predicting the apocalypse. It seems like if you don't give a date, you can just always say the apocalypse is coming and then everyone should do something.
Brendon: Yeah, exactly. I think there were forecasts made all the way in the past, and some people say, hey, it looks like this is now coming true. Like, hey look, maybe 9/11 pattern matches to this forecast that was made hundreds or thousands of years ago. Well, again, it's not specific and there's no timeframe. So yeah, it's unfortunately pretty easy to make it look like a forecast was at least somewhat right, depending on when you evaluate it.
Ulysse: Yeah. And it's nice for the person making the forecast to be right more of the time by using this tactic, but at the same time, all the people that listen to them are basically deceived. Whereas if you actually put your reputation on the line and specifically say when and what you think is going to happen, then you have the opportunity to really gain people's trust. Unfortunately, not a lot of people do that, either because they don't know how or because, like you said, if you don't put down concrete details, then you're always right.
Brendon: Yeah. And unfortunately, a lot of the people that are currently making forecasts might actually not be that good at forecasting. So there's a strong incentive to maintain the status quo of vague forecasts that don't really mean all that much, because it allows people who are thought leaders, or in actual positions of power, to make these forecasting statements while protecting their reputation for being able to make good decisions and accurately predict what's going to come next.
Ulysse: Right. So some people might be wondering Brendon, how do I make forecasts? How do I get better at forecasting?
Brendon: One of the first things you need to do to improve at something is to measure how well you're doing at it; that applies to improving anything in general. And with forecasting, it's a little bit difficult to see if your forecast is actually correct. A good forecast has a few components: first of all, a probability estimate, where you give a precise number, like, I think there's a 70% chance this will happen. You've got to make it specific, and you've got to attach a timeframe to it. And one of the issues is that even if you do all of that, let's say you think something will happen with a 70% chance and it doesn't happen. Well, that doesn't mean you were wrong, because maybe you're just in one of the unlucky worlds.
Ulysse: Right.
Brendon: Let's say you're in one of those 30 out of 100 worlds where the thing you predicted doesn't end up happening. And so the issue there is that on an individual basis, you cannot actually judge whether a forecast was right or not. And obviously that's a problem for measuring your skill at forecasting. But fortunately there is a way to overcome this, and that is by assessing your calibration.
Ulysse: Right.
Brendon: And that's basically a term that refers to this measure: out of all the times you say something will happen with a certain probability, let's say 60%, what percent of those forecasts actually come true?
Ulysse: Right.
Brendon: So if you say there are 100 things that will each happen with 60% probability, and exactly 60 of those 100 predictions actually happen, then you have perfect calibration. When you say something will happen with a 60% probability, on average it will actually happen 60% of the time. That's the best we can do without the superpower of being able to simulate worlds. And so if you make a lot of forecasts for yourself, then you can get these accuracy numbers.
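A minimal sketch of what that calibration check could look like in code; the forecast log, field names, and numbers below are made up purely for illustration:

```python
from collections import defaultdict

# Hypothetical forecast log: the probability you assigned and whether it came true.
forecasts = [
    {"p": 0.6, "happened": True},
    {"p": 0.6, "happened": False},
    {"p": 0.6, "happened": True},
    {"p": 0.9, "happened": True},
    {"p": 0.9, "happened": True},
]

# Group forecasts by the stated probability, then compare the stated
# probability with the fraction of those forecasts that actually came true.
buckets = defaultdict(list)
for f in forecasts:
    buckets[f["p"]].append(f["happened"])

for p, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%} -> observed {observed:.0%} over {len(outcomes)} forecasts")
```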
Ulysse: Right. When you're making predictions, you're sort of relying on what you know, and what you feel in your body in order to come up with some number like this. And so that's what self-calibration is all about, is sort of like you're trying to translate how you feel and what you know into some quantified value. And this is pretty useful because you can sort of aggregate together different probabilities to come up with more complex probabilities.
Brendon: Right. So, yeah, I think you're describing one of the ways that people can make better forecasts. And that is to decompose one forecast into like all of the possible things that could lead to that forecast coming true and then separately forecasting those individual items.
Ulysse: Right.
Brendon: Because the more you can break something down and make those more granular forecasts, the more that might increase the accuracy, compared to if you just thought, hey, what's the probability I'm going to die in a car crash tomorrow? One of the things you could do is just say, I don't think that's going to happen very much, and you just sort of ...
Ulysse: 5%.
Brendon: You could just come up with some number. Yeah, exactly, like 5%, or maybe 0.1%, or 1 in 1,000,000, or 1 in 10,000. Obviously, just pulling those numbers out of nowhere, just using the first number that comes to mind, isn't going to result in a good forecast. So both breaking the problem down and researching the different aspects of the problem can be really helpful when making accurate forecasts. If you're forecasting something that has some research available on it, it makes sense to at least try to get a rough idea of the information around that particular forecast. So for example, if there is a statistic about the average likelihood of dying in a car crash on any given day, you might want to take that number from the research and then adjust it based on your own personal circumstances.
Brendon: For example, if you're driving a really safe car, you might want to reduce your estimate. If every time you go outside you're wearing a helmet, a neck brace, and full body armor, and your car has a mechanism that restricts you to a certain speed depending on the road you're on, then you definitely might want to further reduce your probability based on your own individual circumstances.
Brendon: Breaking the problem down can also really help. For example, let's say that number wasn't available. You might instead use the average probability of getting into a car crash, and then combine that with the average probability of dying, given that you're in a car crash.
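As a rough sketch of that decomposition, with placeholder numbers rather than real accident statistics:

```python
# Decompose the forecast into pieces you can estimate or look up separately.
# These numbers are placeholders, not real accident statistics.
p_crash_on_a_given_day = 1 / 20_000   # chance of being in a crash tomorrow
p_die_given_crash = 1 / 300           # chance of dying, given you're in a crash

# Combined estimate: P(die in a crash tomorrow) = P(crash) * P(die | crash)
p_die_tomorrow = p_crash_on_a_given_day * p_die_given_crash

# Then adjust for your personal circumstances (safe car, careful habits, ...)
personal_adjustment = 0.5             # a rough multiplier you choose yourself
print(p_die_tomorrow * personal_adjustment)
```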
Ulysse: Happy thoughts. Yeah. This sort of reminds me of something we've talked about before, which is the inside and outside view. The outside view is basically the information you have that doesn't pertain to the specific situation, things like what you said: how often do people die in car crashes in general? And then there's the inside view, which is about taking what you know about the given situation. Am I a good driver? Do I live in a country, or a state, or a place where people drive erratically or drive safely? Is my car safe? Do I drink before I drive? Those kinds of things count as the inside view: stuff you know that might help you make the prediction. And oftentimes people will rely exclusively on the inside view, which I think is a failure mode for forecasting, where, like you said, it can help to look at the general stats for things, because chances are you're not doing something for the first time.
Brendon: Yeah, exactly. And that concept relates really closely to the Planning Fallacy, which is a cognitive bias where, when making predictions or making plans, people tend to just imagine the best possible thing that could happen.
Ulysse: Right.
Brendon: They tend to imagine everything going according to plan, and then they use what they imagine to generate an estimate for, let's say, how long it's going to take for a project to be completed. Or, as a more personal example, how long it will take someone to finish writing a book. The problem there is that's exclusively inside view. A much better way to do this sort of planning and estimation is to take the outside view, though that can be challenging to find in some circumstances. In the case of how long it'll take you to finish writing a book, the outside view would probably be the average amount of time it takes for someone to write a book.
Brendon: And of course, if you can get the numbers for finishing a nonfiction book, or finishing a book of a certain length, that might be even better. Clearly there's a difference between a children's book and an academic work. You can take whatever outside view information you're able to find and then customize it. Let's say you are writing a simpler book that will take a shorter time; you might take the average time for writing a book and then shorten it to some degree that you feel is correct. I would say that's generally a very good methodology for making forecasts, both in general with worldly events and also for personal forecasts.
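A tiny sketch of that outside-view-plus-adjustment recipe; the reference-class numbers and the adjustment factor are invented for illustration:

```python
# Outside view: completion times from a reference class of similar projects
# (made-up numbers), summarized here by the median.
reference_class_months = [10, 14, 18, 24, 36]
base_estimate = sorted(reference_class_months)[len(reference_class_months) // 2]

# Inside view: adjust the base rate for what you know about this project.
# The adjustment factor is a judgment call, not something the data gives you.
inside_view_adjustment = 0.8   # e.g. a shorter, simpler book

print(f"estimate: {base_estimate * inside_view_adjustment:.1f} months")
```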
Ulysse: Right. I would say there's sort of a science to reconciling the outside and inside views here, and I feel like that's where a lot of the skill in forecasting comes from.
Brendon: So, a few more things related to using the inside and the outside view for forecasting. Good forecasters typically have a variety of attributes; many of these attributes, and also the general best practices in forecasting, are covered in the book Superforecasting, by the way, if any of our listeners are interested in doing further research into this topic. A few attributes I can remember off the top of my head: it's generally better to be someone who views the world in a probabilistic manner, rather than thinking there's, say, a strong component of destiny.
Ulysse: Right.
Brendon: With regard to what events actually end up happening. And that's sort of related to not necessarily being super mathematically proficient, but being able to view forecasting in a more numerical way: understanding how 95% is actually a pretty different forecast than 85%. Those small changes in the numbers can make a big difference in the odds, especially as you get closer and closer to either 0 or 100%.
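As a quick worked example of why those small changes matter, converting probabilities to odds:

```python
# Converting probabilities to odds shows why 85% and 95% are very different
# forecasts, and why the gap blows up near 0% and 100%.
def odds(p: float) -> float:
    return p / (1 - p)

print(odds(0.85))   # about 5.7 : 1 in favor
print(odds(0.95))   # 19.0 : 1 in favor, roughly three times stronger
print(odds(0.99))   # 99.0 : 1 in favor
```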
Ulysse: Like people who are methodical, I think tend to be good forecasters.
Brendon: Yeah, exactly. And also related to this, being emotionally distant or unattached from what is being forecasted is better. A lot of forecasts are made, for example, in politics, like: will North Korea fire off missiles or detonate a nuclear weapon this year? Some people might have strong views on that, or on something related to the Israel and Palestine conflict. So it's always good for people to separate themselves emotionally, if possible, when making forecasts.
Ulysse: Right. Or if some part of your identity is at stake. If one of the outcomes would mean that you have to rethink some part of your core values or core beliefs, then you're probably going to be biased one way or the other.
Brendon: Yeah, exactly. And I think this relates in general to the conflict between, say, optimism and realism. In a lot of personal or business situations, someone might be making forecasts where they have a personal stake. And if you can't have separate people, like one person motivating the team and another person making the forecasts and the decisions about whether or not to continue, then the best you can do as one person is to separate out the part of yourself that's giving you motivation from the part of yourself that's making the forecast. For example: will I even write the book this year? You might look back on your past history and think you actually don't have a good shot at finishing the book, because you don't have a good track record of writing books in the past. Let's say you've tried a few times. Maybe you don't think the probability is high; maybe you think there's only a 15% chance you'll actually end up writing the book this year. And of course that can be demotivating.
Brendon: Whereas of course, if you just think it's definitely going to happen, that will be very motivating. So I think that conflict is difficult to resolve. What I do personally is try to make decisions and forecasts when I'm in a state of mind where I'm emotionally detached from the outcomes. When I enter, let's say, forecast-and-decision mode, that's when the forecast or decision gets generated. But the rest of the time, when I'm actually doing the work, I try to maintain a more optimistic framing. In that way, I try to have my cake and eat it too.
Ulysse: I like cake.
Brendon: I try to have that rational decision making and forecasting while also having the benefits of optimism, of thinking in a way that's a little less realistic or failure-focused, because that can be necessary for motivation.
Ulysse: Right.
Brendon: So we've touched on many different types of forecasting. Some types of forecasting are broadly applicable and can be done by the majority of the population, because there is publicly available information. For example, forecasting world events; or even within a company, there might be a large group of people able to make forecasts on whether a product launch will succeed, or when the product will actually be launched. I think those broader areas of forecasting are more studied, just because you can see how a large group of people make forecasts and see if there's any way to get a better forecast by looking at the wisdom of the crowd. Something that I think has been a little more under-researched, but might in many ways be more useful to people, is making personal forecasts.
Brendon: So if you can accurately forecast how likely you are to complete certain projects, pick up new skills, or get a new job, a lot of the elements of your life and of achieving the goals that are important to you, that can really help improve the decisions that you make. For example, you might decide not to embark on a project if you don't think there's a good chance you'll have the motivation to finish it. Or you might decide to draw down your runway if you think there's a good chance you can land another job in a relatively short period of time.
Ulysse: Right.
Brendon: One of the things that Ben and I have spent some time doing is logging a lot of forecasts about ourselves and seeing how calibrated we are for our own personal decisions. I think it's valuable for people to track a lot of the forecasts that they make, and also track the category or the title of those forecasts. Then you can see whether you're, let's say, more accurate at predicting economic things, or more accurate at predicting your own ability to finish projects. You can look at your calibration on a domain-specific level, and if you see that you're really good at one domain, say predicting economics or finance, maybe you should trade stocks more. If you're not as good at predicting whether or not you'll finish a project, then maybe you need to do some more work on overcoming the Planning Fallacy on a personal basis. So I think tracking forecasts by domain can be incredibly valuable.
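One simple way to summarize per-domain accuracy is a Brier score per category; that metric isn't named in the episode, and the forecast log below is hypothetical:

```python
from collections import defaultdict

# Hypothetical forecast log, with a category attached to each forecast.
log = [
    {"category": "finance",  "p": 0.7, "happened": True},
    {"category": "finance",  "p": 0.6, "happened": True},
    {"category": "personal", "p": 0.8, "happened": False},
    {"category": "personal", "p": 0.7, "happened": True},
    {"category": "personal", "p": 0.9, "happened": False},
]

# Brier score per domain: mean squared gap between the forecast and the outcome.
# Lower is better; always guessing 50% would score 0.25.
by_domain = defaultdict(list)
for f in log:
    by_domain[f["category"]].append((f["p"] - float(f["happened"])) ** 2)

for domain, errors in sorted(by_domain.items()):
    print(domain, round(sum(errors) / len(errors), 3))
```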
Ulysse: Yeah. One thing you mentioned was that you can try to predict whether a product will be successful when it launches, or something like that. You could make the case that the stock market, for publicly traded companies, is sort of the place you can go and actually make money on predicting well and being very accurate. However, the stock market is tied only to corporations, and there are generally infinitely more things that you can't really bet on. So there's this concept called a prediction market, which basically allows you to make bets on what's going to happen in the future. And if there's some conventional wisdom about how things work, or about what's going to happen in the future, that you think is totally off base, you stand to make a lot of money in this kind of forum. Unfortunately, there aren't really a lot of places where you can do this. Do you know why?
Brendon: In the United States, the most popular legal prediction markets are limited to forecasts and betting around political events and circumstances, either in the United States or internationally. I think there may be a change in the coming years with the rise of decentralized prediction markets like Augur, and also regulatory changes that may open the door for a lot of different types of prediction markets to pop up. So I think this is where the future is headed, but of course we'll just have to wait and hope that there are more opportunities for people to exercise their forecasting skill in a way that lets them personally benefit by making money and also helps society.
Ulysse: Right.
Brendon: By letting everyone look at the result of the prediction market and say, hey, the crowd thinks there's an 85% chance that COVID-19-associated problems will continue throughout 2020, or something like that.
Ulysse: Right.
Brendon: Or like, yeah, the crowd thinks there's a 75% chance a COVID-19 vaccine will not be developed and deployed by the end of 2020. And of course having all this information publicly available can really help everyone make better decisions.
Ulysse: Right. Just like in the stock market, you would assume that without specific domain knowledge, you probably won't be able to make much profit by doing quick buys and sells, because most of what's out there has already been factored into the price of the stock. Similarly, in prediction markets, the probabilities on each bet are generally going to reflect what's out there. And so, like I said, if you think you know better, you stand to make money, and then the price will shift, and slowly but surely the price, slash probability, of these bets will hopefully converge on what people actually think.
Brendon: Yeah. Research has indicated that forecasting aggregators that don't involve money, and also prediction markets that do involve financial transactions, tend to be pretty well calibrated and make decent forecasts. But one thing that I would like to push back on: it's certainly very helpful to have domain knowledge when forecasting, but in a lot of the research, domain knowledge hasn't been the most significant correlate of calibration and forecasting accuracy. The stronger correlate has been following the best practices, like taking the outside view into account, remaining emotionally separated from the outcome, and thinking probabilistically. Using those best practices has been shown to dramatically improve forecasting accuracy, more so than something like domain knowledge.
Ulysse: So you're saying the meta methodology, how you actually make forecasts, is far more important than being closer to the information?
Brendon: Yeah, exactly. Definitely. The meta-skill of making good forecasts in general is probably much more important than having historically learned a lot about, or having already made a bunch of forecasts in, any particular area. I think that's what the academic research currently suggests. And that's really great news for our listeners, because it means you can be a really great forecaster, and use that forecasting to make really great life decisions and business decisions, all without needing to be an expert in all the different domains and fields of knowledge that might be relevant to you personally.
Ulysse: Thanks for listening. In the show notes, we've included links to all the resources we spoke about, as well as some software solutions you can use to start calibrating yourself and start making your own forecasts. The beautiful intro music for this episode was created by Oliwia Orłowska, and the relaxing outro melody is by Neighborhood Vandal. Links to both of these songs can be found in the show notes.
To hear more, subscribe to my newsletter or follow me on Twitter.