COVID19 update, June 15, 2020: Ivermectin redux; “modelers have failed”

(1) The Jerusalem Post interviews Prof. Eli Schwartz, the head of the tropical medicine department at Tel HaShomer hospital in Ramat Gan, just east of Tel Aviv (one of the “Big Four” research and teaching hospitals in Israel, together with Sourasky/Ichilov in central Tel Aviv, Hadassah in suburban Jerusalem, and Rambam/Maimonides in Haifa), about a drug repurposing study involving ivermectin (an anthelmintic/anti-worm drug familiar to veterinarians and travelers to tropical countries, but not to most physicians in Western countries).

The discoverers of this drug shared the 2015 Nobel Prize in Physiology or Medicine with the discoverer of the next-generation antimalarial artemisinin. An Australian study, part of an effort to find repurposable already-approved drugs, found a few months ago that ivermectin liquidates the virus in vitro (i.e., in a test tube), which prompted several clinical trials:

https://doi.org/10.1016/j.antiviral.2020.104787

Here is a preprint about a retrospective, open-label study in several Dade County, FL hospitals (i.e., the Miami area):

https://www.medrxiv.org/content/10.1101/2020.06.06.20124461v2

280 patients with confirmed SARS-CoV-2 infection (mean age 59.6 years [standard deviation 17.9], 45.4% female) were reviewed, of whom 173 were treated with ivermectin and 107 were [given] usual care. 27 identified patients were not reviewed due to multiple admissions, lack of confirmed COVID results during hospitalization, age less than 18, pregnancy, or incarceration.

Univariate analysis showed lower mortality in the ivermectin group (15.0% versus 25.2%, OR 0.52, 95% CI 0.29-0.96, P=.03). Mortality was also lower among 75 patients with severe pulmonary disease treated with ivermectin (38.8% vs 80.7%, OR 0.15, CI 0.05-0.47, P=.001), but there was no significant difference in successful extubation rates (36.1% vs 15.4%, OR 3.11 (0.88-11.00), p=.07). After adjustment for between-group differences and mortality risks, the mortality difference remained significant for the entire cohort (OR 0.27, CI 0.09-0.85, p=.03; HR 0.37, CI 0.19-0.71, p=.03).

In plain English, p=0.03 means that if the drug made no real difference, a gap this large or larger would show up by pure chance only 3% of the time; p=0.001 means just one chance in a thousand that you’d see this by coincidence.
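As a quick sanity check of the headline numbers, one can reconstruct the 2x2 mortality table from the quoted percentages and rerun the test. To be clear, the counts below are my own back-calculation (15.0% of 173 is about 26 deaths, 25.2% of 107 about 27), not figures taken from the preprint, and the preprint’s authors may have used a different test than Fisher’s exact:

```python
# Reconstructed 2x2 mortality table; counts are back-calculated from the
# quoted percentages (26/173 = 15.0%, 27/107 = 25.2%), not from the preprint.
from scipy.stats import fisher_exact

#                     died  survived
ivermectin_group = [26, 173 - 26]
usual_care_group = [27, 107 - 27]

odds_ratio, p_value = fisher_exact([ivermectin_group, usual_care_group])
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
# OR comes out to ~0.52, matching the quoted univariate odds ratio; the exact
# p-value need not match the paper's, which may have used a different test.
```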

Considering this is a cheap and widely available drug, this sounds like great news.

(2) In a blog post at the IIF (International Institute of Forecasters), Prof. John Ioannidis of Stanford and two colleagues from Northwestern U. and U. of Sydney say bluntly “Forecasting for COVID-19 has failed”. They go on to analyze the failures in detail and to conjecture reasons for them, which go further and deeper than “fog of war”. Read the whole thing; I can’t do it justice with selective quoting. Just a taste:

Failure in epidemic forecasting is an old problem. In fact, it is surprising that epidemic forecasting has retained much credibility among decision-makers, given its dubious track record. Modeling for swine flu predicted 3,100-65,000 deaths in the UK [11]. Eventually only 457 deaths occurred [12]. The prediction for foot-and-mouth disease expected up to 150,000 deaths in the UK [13] and led to slaughtering millions of animals. However, the lower bound of the prediction was as low as only 50 deaths [13], a figure close to the eventual fatalities. Predictions may work in “ideal”, isolated communities with homogeneous populations, not the complex current global world.[…]

Let’s be clear: even if millions of deaths did not happen this season, they may happen in the next wave, next season, or with some new virus in the future. A doomsday forecast may come handy to protect civilization, when and if calamity hits. However, even then, we have little evidence that aggressive measures which focus only on few dimensions of impact actually reduce death toll and do more good than harm. We need models which incorporate multicriteria objective functions. Isolating infectious impact, from all other health, economy and social impacts is dangerously narrow-minded. More importantly, with epidemics becoming easier to detect, opportunities for declaring global emergencies will escalate. Erroneous models can become powerful, recurrent disruptors of life on this planet. Civilization is threatened from epidemic incidentalomas.

(3) In this vein, here again is my April 10 post on epidemic models, reproduced in full:

COVID19 update, April 10, 2020: all models are wrong, but some are useful.

“All models are wrong, but some are useful.” Thus spake one of the leading lights of statistics in the 20th Century, George E. P. Box FRS https://en.wikipedia.org/wiki/George_E._P._Box

Models can be useful, however, if you remember that a map is not the territory, a representation is not the object, and a model is not reality. Sadly, the distinction between a theory and a model is lost on most people who are not scientists themselves (and on some people who call themselves scientists).

We hear a lot in the media about how pessimistic predictions of some modelers later had to be revised downward by nearly two orders of magnitude. Lots of snickering, for sure, but understand the incentive structure here. If you ask a modeler, “just how bad can this get?” and she gives you her worst-case estimate, and the incoming data later force a drastic downward revision, you will normally be grateful. If she comes instead with a best-case estimate, and it later turns out to be much worse, you are likely to blame the modeler: “if only you’d warned me, I’d have pushed for much harder measures”…

That said, some of the “models” now being referred to aren’t really models in the usual sense at all, but rather nonlinear regression fits to actual data, with uncertainty bands provided. I’m sure that whatever function the IHME people use for fitting COVID19 statistics in various countries is a bit more sophisticated than sigmoid functions, but the “total deaths” graphs look quite similar to a sigmoid to this mad scientist’s eye.

https://covid19.healthdata.org/united-states-of-america

The nice thing about such “phenomenological models” [*] is that they are trivially adjusted to new data as they come in: add the data point, refit, get your new uncertainty band, and presto!
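To make the “add a point, refit, presto” workflow concrete, here is a minimal sketch using a plain logistic (sigmoid) curve for a cumulative-deaths series. The data are invented for illustration, and IHME’s actual fitting function is, as noted above, surely more sophisticated than this:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, K, r, t0):
    """Logistic curve: K = plateau level, r = growth rate, t0 = inflection day."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Invented cumulative-death series for illustration (not real IHME data)
rng = np.random.default_rng(1)
days = np.arange(30.0)
deaths = sigmoid(days, 10_000, 0.25, 20.0) * rng.normal(1.0, 0.03, days.size)

popt, pcov = curve_fit(sigmoid, days, deaths, p0=(2 * deaths.max(), 0.2, days.mean()))
perr = np.sqrt(np.diag(pcov))  # 1-sigma parameter uncertainties from the fit
print(f"plateau = {popt[0]:.0f} +/- {perr[0]:.0f} deaths")

# Tomorrow's data point arrives? Append it and refit; the uncertainty band updates:
# days = np.append(days, 30.0); deaths = np.append(deaths, new_value)
```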

In this morning’s DIE WELT, I read an interview with a mathematics professor at the University of Bielefeld named Moritz Kaßmann, who got interested in this subject early on when one of his students returned from Wuhan and gave him the heads-up: “this [expletive] is going to hit in Germany as well”.

Anyway, he had a good look at the German COVID19 statistics, and noted that they were surprisingly well fitted by the following very simple (at least by the standards of my day job) function:

f(t) = A exp(Bt - Ct^2) = A exp(Bt) exp(-Ct^2)

where A corresponds to the number of cases at t=0, the factor exp(Bt) corresponds to the exponential growth phase, and the Gaussian factor exp(-Ct^2) corresponds to the damping phase, which is stronger as C grows larger, and absent if C=0 (since exp(0)=1).

Now if you take the natural logarithm ln f(t) of the data, the fit simplifies to a quadratic regression:

ln f(t) = ln(A) + B t - C t^2

At low t, this function will show exponential growth, but at longer t, the Gaussian damping will become more prominent, and eventually a turnover will occur (at t = B/(2C), where the slope of the log-curve vanishes). Now let’s apply this to the active cases in Germany, for example (data taken from the Johns Hopkins website):

[Figure: active cases in Germany over time; data points in blue, regression curve in orange]

Active cases are defined as “diagnosed – cured – deceased”.
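Here is a minimal sketch of such a fit, to show how little machinery it takes. This is my own reconstruction, not Prof. Kaßmann’s code, and the input series below is a synthetic placeholder standing in for the Johns Hopkins data (A=100, B=0.30, C=0.004 are invented values):

```python
import numpy as np

# Synthetic stand-in for the Johns Hopkins active-case series:
# A=100, B=0.30, C=0.004 are invented, plus 2% multiplicative noise.
rng = np.random.default_rng(42)
t = np.arange(45.0)
active = 100 * np.exp(0.30 * t - 0.004 * t**2) * rng.normal(1.0, 0.02, t.size)

# Quadratic regression on the log of the data: ln f(t) = ln(A) + B t - C t^2
c2, c1, c0 = np.polyfit(t, np.log(active), 2)  # coefficients, highest power first
A, B, C = np.exp(c0), c1, -c2

# Coefficient of determination, computed in log space
resid = np.log(active) - np.polyval([c2, c1, c0], t)
r2 = 1.0 - resid.var() / np.log(active).var()

print(f"A = {A:.0f}, B = {B:.3f}, C = {C:.4f}, R^2 = {r2:.4f}")
print(f"turnover (peak) at t = {B / (2 * C):.1f} days")
```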

Well, if such a simple and “parsimonious” (in terms of only having 3 parameters) model has such a high “coefficient of determination” — R^2 = 0.9977 means that 99.77% of the variance in the data is reproduced by the fitted curve — there has to be something to it. You don’t find such high R^2 values under a horse’s tail, pardon my Dutch.

Extrapolating the fitted function gets more uncertain as you leave the actual data range, to be sure, but the curve clearly puts us just days away from the plateau phase, somewhere between April 14 and 19.

Here Prof. Kaßmann discusses his method (in German) on YouTube: https://www.youtube.com/watch?v=9ODf9GKEuXQ.

He notes that a lot of discussion in Germany centers on the “doubling time” (how much time it takes for the number of cases to double), but that the methods for evaluating the doubling times are kind-of slapdash: extracted from the daily growth rate, or from a 5-day moving average thereof. With a fit function like this, it can be evaluated analytically as simply

t2 = ln(2)/(B - 2 C t)

where B - 2 C t is the first derivative of ln(A) + B t - C t^2 with respect to t; if you like, the slope of the tangent to the log-curve at point t.

Note that for large enough t, t2 becomes negative as the curve turns over: at that point -t2 becomes the halving time. According to “Kaßmann’s slope trick”, the doubling time for active cases in Germany is already in the once-a-month region.
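Continuing the sketch above, the formula is a one-liner once you have the fitted B and C (the values below are the same invented placeholders, not the German fit):

```python
import numpy as np

def doubling_time(t, B, C):
    """ln(2) / (B - 2 C t): positive while cases still grow;
    negative past the turnover, where -t2 is the halving time."""
    return np.log(2) / (B - 2 * C * t)

B, C = 0.30, 0.004                 # placeholder values matching the sketch above
print(doubling_time(0, B, C))      # ~2.3 days early in the exponential phase
print(doubling_time(35, B, C))     # ~34.7 days: the once-a-month region
```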

I made similar graphs for Belgium and for Israel: I get basically the same shape, shifted 8 and 4 days to the right, respectively. The coefficients of determination for these primitive fits are 0.9974 and 0.9976, respectively.

We are not out of the woods yet — this is not the time to get cocky. But the light at the end of the tunnel is becoming visible. And thus, as a sustained shutdown will wreak increasing havoc on the economy, this is the time to get serious, creative, and agile about “back to normal” measures. In particular the food supply chain cannot be left untended. Society can live without rock concerts, soccer games, or discos. It can definitely live with telecommuting for IT professions. But it cannot live without agriculture and food processing, or (G-d forbid) the price we pay in lives might well exceed the toll from the virus.

To my Jewish readers, mo`adim le-simcha. To my Christian readers, have a meaningful Good Friday, and soon a happy Easter.


[*] The term “phenomenological” refers to a fit that aims to reproduce data (the numbers as they are) but does not make any physical, chemical,… model assumptions about the form of the equation.