Tuesday, October 13, 2020

PNSQC 2020 Live Blog: Testing COVID-19 Models: Getting Important Work Done In A Hurry with Christian Wiswell

Now, this is interesting and timely :). I had wondered if there would be sessions specific to the biggest health and public safety challenge we have faced in decades. Looks like I was not to be disappointed.

COVID-19 has disrupted a large number of things for a lot of people, and the odds are high it will continue to do so for a long time. But how do we make these determinations? How do we decide how to respond, and where to intervene? To do this, we need to use and refine computational models. I confess I have always wondered how these models are created and, more to the point, how we would test them.

To be frank, the idea of testing this is hard for me to get my head around. How do we determine if the model we created maps to reality? It seems to me that comparing model output against actual results would be a constant chase for data, and Christian points out that point-for-point confirmation is impossible. So how does this work?

The key is that the progression from exposure, to infected state, to treatment time, to recovered state is logged and tracked. Based on these values, statistical tests are created and run. These tests are run multiple times and tracked to see how they behave. In short, there is no genuinely "true" test, but there is a way to confirm that the results are predictable and consistent. It takes a lot of time and testing. Over time, we get used to how these tests run and we develop a general sense of how a disease behaves. COVID-19, however, acted in ways these models were not fully prepared for. While we had warnings in November 2019, the modeling scenarios built for other diseases were inadequate for the novel coronavirus. It behaved differently enough that the existing software models were not up to the task, and on top of that, the team found itself working in lockdown because of the spread of the disease (wow, that's a pressure I've never had to deal with).
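To make "predictable and consistent" a little more concrete, here is a rough sketch of how I imagine that kind of check. The model, parameter names, and thresholds below are my own toy stand-ins rather than Covasim's actual code: instead of asserting one "true" answer, the test runs the simulation across many seeds and checks that independent batches of runs land in the same statistical neighborhood.

```python
import random
import statistics

def run_epidemic_sim(seed, n_days=60, pop_size=10_000,
                     p_transmit=0.05, n_seed_infections=50):
    """Toy stand-in for a stochastic epidemic model; returns cumulative infections."""
    rng = random.Random(seed)
    infected = n_seed_infections
    susceptible = pop_size - infected
    for _ in range(n_days):
        # Each infectious person exposes someone with a probability scaled by
        # how much of the population is still susceptible.
        new_cases = sum(1 for _ in range(infected)
                        if rng.random() < p_transmit * susceptible / pop_size)
        new_cases = min(new_cases, susceptible)
        infected += new_cases
        susceptible -= new_cases
    return pop_size - susceptible

def test_results_are_consistent_across_runs():
    # There is no single "true" answer, so instead we check that two independent
    # batches of seeded runs produce statistically similar outcomes.
    batch_a = [run_epidemic_sim(seed) for seed in range(0, 30)]
    batch_b = [run_epidemic_sim(seed) for seed in range(100, 130)]
    mean_a, mean_b = statistics.mean(batch_a), statistics.mean(batch_b)
    assert statistics.stdev(batch_a) > 0                       # runs genuinely vary...
    assert abs(mean_a - mean_b) < 0.15 * max(mean_a, mean_b)   # ...but the behavior is stable
```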

Covasim is what it sounds like: a COVID simulator, built on custom code that Christian had not worked with before. There were unknown dependencies, and learning the properties of the code was going to take time. To start, there were a variety of parameters that could extend to support classes. One challenge was to change the vocabulary so that code tokens were more easily spotted (such as mapping the variable n to "NumberofInitialParticipants"). The first test was a way to get data about the random seed: confirming whether any random numbers were being pulled before the seed was populated helped determine if the tests could be repeatable. One of the most important things to look at was the "test exposure to infectiousness delay deviation scaling". In short, how long did it take from the time of exposure to the time of being infectious, whether or not symptoms were showing? Over time, with more data and more tests, the general mean of the experiments and the tracking data became more consistent. In general, it meant the model could better predict when people would become infected and infectious.
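Here is how I picture that first seed test and the exposure-to-infectiousness delay check, as a hedged sketch. The function name, the lognormal distribution, and the 4.6-day mean are illustrative placeholders of mine, not the actual Covasim code or parameters Christian described.

```python
import math
import random
import statistics

def sample_exposure_to_infectious_delays(seed, n_people=5_000,
                                         mean_delay=4.6, delay_std=4.8):
    """Hypothetical stand-in: per-person delay (days) from exposure to becoming
    infectious, drawn from a lognormal distribution the way many agent-based
    models do. The 4.6/4.8-day defaults are illustrative, not real parameters."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1 + (delay_std / mean_delay) ** 2))
    mu = math.log(mean_delay) - sigma ** 2 / 2
    return [rng.lognormvariate(mu, sigma) for _ in range(n_people)]

def test_same_seed_is_repeatable():
    # If any random numbers were drawn before the seed took effect, these two
    # runs would diverge; identical output is the repeatability guarantee.
    assert sample_exposure_to_infectious_delays(seed=1) == \
           sample_exposure_to_infectious_delays(seed=1)

def test_exposure_to_infectious_delay_tracks_configuration():
    # The sampled delays should track the configured mean within a tolerance,
    # while still showing the spread the deviation-scaling parameter implies.
    delays = sample_exposure_to_infectious_delays(seed=1)
    assert abs(statistics.mean(delays) - 4.6) < 0.5
    assert statistics.stdev(delays) > 0
```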

One of the key things to realize is that you are never really testing whether the software is "correct" or behaves correctly, but whether the configuration is predictable and the model matches real-world behavior over time. It's also important to configure deliberately unrealistic scenarios, to see whether the outputs stay in line with the values provided, or whether the outliers are so extreme that the exceptions effectively prove the rule.
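Continuing the toy run_epidemic_sim sketch from the first code block (again, my stand-in rather than the real model), an "unrealistic scenario" check might look something like this: turn transmission off entirely and expect no spread, then crank it up to a certainty and expect the population to saturate.

```python
def test_zero_transmission_means_no_spread():
    # With transmission switched off, the only infections should be the seeded
    # ones; anything more would point to a bookkeeping bug in the model.
    for seed in range(10):
        assert run_epidemic_sim(seed, p_transmit=0.0, n_seed_infections=50) == 50

def test_certain_transmission_saturates_the_population():
    # At the other extreme, guaranteed transmission should sweep through
    # (nearly) the whole simulated population well within the time window.
    assert run_epidemic_sim(seed=0, p_transmit=1.0, pop_size=10_000) > 9_000
```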

I confess freely that this level of testing is outside my wheelhouse, but I greatly appreciated this talk. I've always wondered how these tests were determined, and I now feel I have a better understanding of that, if only a little ;).
