My last post looked at data in FL & OH in the pre-Covid and post-vaccine eras and used other health indicators to show that differences in outcomes could just as easily be correlated with factors other than vaccination rates or (more pathetically) political affiliation.
@Bart Miller - regarding your comment in the previous post. It may be that there truly are differences in outcomes across the age groups, or, as I discuss in the 2nd section above, they might just reflect different inaccuracies & biases in their forecasting model. Honestly, this paper needs to line a bird cage.
This sounds eerily similar to the Canadian study that said the unvaxxed were more likely to be in car accidents because of “Distrust of the government, a belief in freedom, misconceptions of daily risks, faith in natural protection, misinformation and personal beliefs”.
It's not even about good science, it's about having a link to post on Twitter.
I don't think there's anything wrong with their methodology in the way you suggest. It is common to backtest your model and present the results in order to demonstrate how good it is. In fact, if their 2018/19 data had been flat 0, it would indicate that their model fit that period perfectly. But therein lies the rub: having a perfectly calibrated model for the training phase does not necessarily mean it will accurately predict the future. Recently, I have had to make this point more times than I would like. Unless the model makes some sort of allowance for trend and mean reversion, it isn't as useful as it might seem. In the case of deaths this is pertinent, because it manifests as the "dry tinder" or "pull-forward" effect, depending on which way you look at it: either deaths increase after a period of lower-than-usual deaths, or the opposite. Two years is unlikely to capture those dynamics. If anything, given how many hundreds of mortality series I've looked at over the last few years, I would say that the better their model fits just two years of data, the worse it is likely to represent future outcomes fairly. Otherwise, I do agree with you about then using that model to predict granular outcomes too far into the future. And even then, the disparities do not look large enough to me to warrant any particular conclusion, even in the absence of confidence intervals, which I imagine would easily cover the differences if they had shared them with us.
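To make the trend and mean-reversion point concrete, here is a toy sketch with entirely invented numbers (not their data or their model, just the shape of the problem): a baseline anchored to 2018-19 alone sits at a different level than one that sees the longer trend, and that shifts the "excess" it implies for every later year.

```python
# Toy sketch (all numbers invented): a baseline "trained" on 2018-19 matches
# those two years almost exactly, but a 2-year window cannot see trend or
# mean reversion, so the implied "excess" for later years differs from a
# baseline fit to a longer run of data.
import numpy as np

years  = np.arange(2015, 2023)
deaths = np.array([200, 202, 204, 206, 201, 213, 210, 212], dtype=float)  # thousands, made up
# 2019 is a light year, 2020 partly "catches up" -- the pull-forward / dry-tinder shape.

two_yr = deaths[(years == 2018) | (years == 2019)].mean()   # 2018-19-only baseline
trend  = np.polyval(np.polyfit(years[years <= 2019], deaths[years <= 2019], 1), years)

for y, d, t in zip(years, deaths, trend):
    print(f"{y}: excess vs 2018-19 mean = {d - two_yr:+5.1f}k,  vs 2015-19 trend = {d - t:+5.1f}k")
```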
Joel, I agree there is nothing wrong with backtesting; in fact it is a requirement. However, the chart does not present it as a backtest & a presentation of the errors of the model. It presents it as "Excess Death", which gives the FALSE impression that 2018-19 was some remarkably stable period (no significant deviations in excess deaths up or down). I believe this false description of what they are actually presenting deceives the reader by making what happens in 2020 & beyond look so dramatic.
Let me put it this way: had 2018-19 experienced 2x the deaths of 2016-17, their chart would not have changed... they would have shown basically 0 excess death in 18-19, because they were using 18-19 as the base.
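To make that concrete with made-up numbers (nothing from their paper):

```python
# Made-up numbers: "excess deaths" measured against a baseline fit to 2018-19
# come out ~0 in 2018-19 by construction, no matter how 2018-19 compared to earlier years.
normal  = {"2016": 100, "2017": 100, "2018": 101, "2019": 99}
doubled = {"2016": 100, "2017": 100, "2018": 201, "2019": 199}   # 2x the 2016-17 level

for label, series in [("normal 2018-19", normal), ("doubled 2018-19", doubled)]:
    baseline = (series["2018"] + series["2019"]) / 2             # "model" trained on 2018-19 only
    print(label, "-> excess 2018 =", series["2018"] - baseline,
          ", excess 2019 =", series["2019"] - baseline)
```

Both scenarios show essentially zero "excess" in 2018-19, even though the underlying levels differ by a factor of two.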
Additionally, other than one sentence in the paper (with no quantitative information), there is no indication they even used a Test set of data.
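For what it's worth, the kind of held-out check I mean would only take a few lines to report. Here is a hypothetical sketch (invented numbers, with a plain linear baseline standing in for whatever model they actually used):

```python
# Hypothetical sketch of a held-out (test set) check on a mortality baseline.
# All numbers are invented; a real check would use their weekly data and their model.
import numpy as np

years  = np.arange(2015, 2020)
deaths = np.array([200_000, 202_500, 204_000, 207_000, 208_500], dtype=float)

train = years <= 2017                               # fit on 2015-17 only
coef  = np.polyfit(years[train], deaths[train], 1)  # simple linear baseline as a stand-in
pred  = np.polyval(coef, years[~train])             # predict the held-out 2018-19

errors = deaths[~train] - pred
print("held-out errors:", errors,
      "| MAPE:", round(float(np.mean(np.abs(errors) / deaths[~train])), 4))
```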