One of the major difficulties founders face as their businesses start to scale is modelling future sales and expenses. Modelling (also known as forecasting) allows executives to better manage business performance, and most (post-seed-stage) investors will also want to see at least a rudimentary financial model justifying the requested valuation.
Unfortunately, modelling is incredibly difficult, even for founders who are financially literate. That’s because the hardest part isn’t building the model itself (there are plenty of templates you can use), but developing the set of key assumptions (like sales growth, cost of customer acquisition and staff efficiency) needed to populate it.
Get any of the key inputs even slightly wrong and the model will likely turn out materially off. For most startups, there is little historical information (what we call unit economics) to base the model on, so the degree of difficulty is even higher.
Another group of professionals who rely on models are epidemiologists and infectious disease experts. In the case of a virus like COVID-19, epidemiologists model the spread of a virus with a number of uncertain variables. Early in the pandemic, key variables underlying their models were the ‘R’ number (the rate at which a virus spreads) and the infection fatality rate (how many people who catch the virus will ultimately die). To add more complexity, these assumptions change based on non-pharmaceutical interventions like social distancing, closing businesses, wearing masks and isolation. The complexity is further increased with vaccines, which reduce fatalities and also curb transmission.
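To see why the R number dominates these models, here is a toy sketch (not any published epidemiological model) of generation-by-generation spread; the seed of 100 cases and the R values are illustrative assumptions.

```python
# Toy illustration of why small changes in R swamp everything else:
# cumulative infections grow geometrically in R across viral generations.
# Purely illustrative assumptions, not a real epidemiological model.

def cumulative_infections(r, generations, seed=100):
    """Total infections after `generations` rounds of spread from `seed` cases."""
    total, current = seed, seed
    for _ in range(generations):
        current *= r          # each case infects R others on average
        total += current
    return round(total)

# Ten generations from 100 seed cases: a modestly higher R changes the
# answer by roughly three orders of magnitude.
low = cumulative_infections(1.1, 10)    # under two thousand infections
high = cumulative_infections(2.5, 10)   # well over a million infections
```

The same sensitivity is what makes a founder's growth-rate assumption so dangerous: compounded over enough periods, a small input error dominates the output.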
Epidemiologists therefore need to make some very big assumptions to feed their models. This can (understandably) turn out really badly when there isn’t much data, and even epidemiologists, like all of us, are influenced by personal biases. Perhaps the best example of this was Imperial College London’s Neil Ferguson’s early modelling, which suggested 500,000 British deaths from COVID-19 (even with one of the world’s highest relative death rates, there have been 127,000), or Raina MacIntyre’s claims that hundreds of thousands of Australians could die of the virus.
Ferguson and MacIntyre (whose views are still regularly sought by media) can perhaps be forgiven for their modelling disasters: like early-stage founders, they had very little data to input into their models. Choosing a slightly higher R number meant the models were completely wrong; in MacIntyre’s case, by orders of magnitude.
Yet with the pandemic now 18 months old, epidemiologists have a lot of real-life data to feed their models. It’s a bit like a startup raising a Series B round of funding — they understand the business’s product-market fit and have enough unit economics (such as an understanding of the cost of acquiring customers and their churn rate) to be able to develop a more reliable model.
But like a founder, if you’re creating a model for possible future deaths from COVID, you need to use assumptions based on the best real-life data you can find. Enter one of Australia’s most respected medical research bodies — the Burnet Institute, which has a proud history of research on infectious diseases like HIV and malaria.
Burnet’s key staff have been high-profile advocates of significant interventions to curb the spread of COVID and these biases, like an ambitious founder’s, appear to have influenced their modelling. This reached a peak last week, when Burnet released modelling showing several deeply troubling scenarios, including a finding that 4,885 Victorians could die this year from COVID-19 without a major public health response.
The problem with Burnet’s modelling (which doesn’t appear to have been submitted for peer review) is that it adopted several major assumptions that contradict actual real-world data. That’s like a founder claiming their startup’s cost of customer acquisition is $5, when its last six months of actual data showed it was costing $10 to get a new customer via Google search.
So let’s take a closer look at Burnet’s key assumptions.
First, vaccine efficacy. In both of its scenarios, Burnet claimed vaccine efficacy against infection would be 50%. But a recent study considered a data set of 5.4 million people in Scotland and found that “four weeks after the first doses of the Pfizer BioNTech and Oxford AstraZeneca vaccines were administered the risk of hospitalisation from COVID-19 fell by up to 85% (95% confidence interval 76 to 91) and 94% (95% CI 73 to 99), respectively.” A similar dataset from Public Health England indicated 89% efficacy for AZ and 90% for Pfizer.
But it wasn’t just the UK. The CDC in the United States reported near-identical efficacy for the Pfizer vaccine (AZ isn’t yet used in the US). Burnet’s assumption of 50% vaccine efficacy was wrong for the Pfizer and AZ vaccines, although it may be correct for the less effective Chinese vaccines that are being used in countries like Chile.
More curious was Burnet’s assumption that vaccines would have a 94% efficacy at preventing deaths. AstraZeneca’s clinical trials involving tens of thousands of people indicated that the vaccine prevented death with 100% efficacy. If you’re suspicious of potentially self-serving clinical trials, then the UK technical data released last week provides useful real-life evidence. It’s difficult to determine the efficacy against death as the data set isn’t complete, but out of 33,206 confirmed cases, there were 12 people who died with COVID among those who were vaccinated. Singapore yesterday released a small sample of data noting that there have been zero deaths among vaccinated cases since late April.
Finally, Burnet’s model assumed that only 250,000 Victorians would be vaccinated each week. This is critical, as it determines the time it takes to vaccinate the population. However, last week 335,000 Victorians were vaccinated, and that number has been increasing week-on-week. That means Victoria will reach UK levels of vaccination for those over 70 (the key risk group) within a month or two.
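The effect of the weekly-rate assumption is simple division. The 3 million remaining-dose target below is a placeholder assumption for illustration, not a figure from Burnet or the article.

```python
# Rough time-to-vaccinate arithmetic: weeks = remaining doses / weekly rate.
# The 3 million dose target is an illustrative assumption only.

def weeks_to_target(remaining_doses, doses_per_week):
    """Weeks needed to administer the remaining doses at a constant weekly pace."""
    return remaining_doses / doses_per_week

slow = weeks_to_target(3_000_000, 250_000)   # 12 weeks at Burnet's assumed pace
fast = weeks_to_target(3_000_000, 335_000)   # roughly 9 weeks at the actual pace
```

Understating the weekly rate stretches out the window in which the population is unprotected, which mechanically inflates the modelled death toll.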
Note to founders: investors aren’t stupid. When you choose a clearly incorrect assumption that skews the output, you lose credibility.
Investors of course rarely have the subject matter expertise of a founder, but experienced investors will use pattern recognition to essentially ‘sense check’ the model. So let’s sense check Burnet’s upper prediction of 4,885 deaths with real-world data.
In the 18 months since the pandemic began, 0.04% of people worldwide have died; in the hardest-hit countries, that number rises to 0.20%. But that was over 18 months, so let’s reduce the fatality rate slightly, to 0.15%, to cover only one year. Taking the highest relative fatalities and minimal vaccinations, that would equal around 9,000 deaths for Victoria.
However, as at 6 June, 58% of the at-risk 70+ group had already been vaccinated. This number will likely hit 80% within a month, which means that in reality, even using one of the world’s highest fatality rates, that’s likely around 2,000 deaths. Using the global average, the number is closer to 400 deaths in the next year: a fraction of Burnet’s upper estimate and a third of its more optimistic scenario.
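The back-of-envelope arithmetic above fits in a few lines. The 6 million population figure and the roughly 78% death reduction from vaccinating the 70+ cohort are assumptions chosen to reproduce the article's round numbers, not inputs from Burnet's model.

```python
# Sense-check sketch of the article's arithmetic. The population figure and
# the vaccination-driven reduction are illustrative assumptions only.

VIC_POPULATION = 6_000_000  # rough figure implied by the article's ~9,000 result

def expected_deaths(population, annual_fatality_rate, reduction_from_vaccination=0.0):
    """Crude expected deaths = population x fatality rate x (1 - reduction)."""
    return population * annual_fatality_rate * (1 - reduction_from_vaccination)

# Worst case: hardest-hit countries' rate scaled to one year (~0.15%), no vaccination.
worst = expected_deaths(VIC_POPULATION, 0.0015)  # ~9,000 deaths

# With ~80% of the at-risk 70+ group vaccinated, assume deaths fall by ~78%.
vaccinated = expected_deaths(VIC_POPULATION, 0.0015, reduction_from_vaccination=0.78)  # ~2,000

# Global-average rate scaled to one year (~0.03%), with the same reduction.
global_avg = expected_deaths(VIC_POPULATION, 0.0003, reduction_from_vaccination=0.78)  # ~400
```

The point of a sense check like this isn't precision; it's that no reasonable combination of observed fatality rates and vaccination coverage gets anywhere near the model's upper scenario.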
Of course, a model showing that COVID will kill far fewer people this winter than regular influenza does is a lot less exciting than one which predicts 5,000 deaths. That’s a bit like a startup model not going ‘up and to the right’.
The key lesson for founders: models are never right; it’s just a question of how wrong they are. And there’s no point looking at a model’s output (the curve) without critically analysing the underlying assumptions.
Adam Schwab is a founder and angel investor, author of the best-selling Pigs at the Trough: Lessons from Australia’s Decade of Corporate Greed, and host of the From Zero podcast, which features conversations with Australia’s most successful founders.