EPIDEMIOLOGY aims to identify the causes of disease in animal and human populations, and to identify methods for their control and prevention. This article gives a brief introduction to epidemic models and how they are built.
The key word here is populations: it is not always easy to predict how a whole population will behave by studying a single individual. Laboratory study results may not extend to field conditions, and the scope of field trials may be limited by costs and logistics. And Donald Rumsfeld was right to worry about "known unknowns" and "unknown unknowns": How do we know a new health-management plan will work under farm conditions we haven't tested, or even envisaged? Often, we can't, but with modelling, we can at least make predictions.
In epidemic modelling, we build a mathematical description of the disease process, reflecting how we believe the system works. A simple example SIR model is shown in Fig. 1. Here, fish are divided into Susceptible, Infected, and Recovered: infected fish gradually recover and cannot be re-infected (or they die; the maths is the same), but meanwhile, the more infected fish are present, the faster the infection rate for those not yet infected. This simple model has two parameters, values we must estimate from real data: the rate at which infected fish infect susceptible ones, and the rate at which fish cease to be infectious.
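The two-parameter SIR model just described can be sketched in a few lines of code. The sketch below uses simple Euler integration; the infection rate (beta), recovery rate (gamma), and population sizes are illustrative values, not estimates from real data.

```python
# Minimal SIR sketch: Euler integration of the two-parameter model.
# beta (infection rate) and gamma (recovery rate) are illustrative only.

def simulate_sir(beta, gamma, s0=990.0, i0=10.0, r0=0.0, days=100, dt=0.1):
    """Return a list of (S, I, R) values over time for a closed population."""
    n = s0 + i0 + r0
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / n * dt  # faster when more fish are infected
        new_recoveries = gamma * i * dt         # recovered fish cannot be re-infected
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir(beta=0.5, gamma=0.1)
peak_infected = max(i for _, i, _ in history)
```

Because fish only move between the three compartments, the total population stays constant; the epidemic's shape is governed entirely by the two parameters.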
Once built, a model needs testing before it can be used. When validating a model, we test that it fits the data from which it was built. However, more importantly, we must test it against other data. Failure to do so can leave us with a model that describes one data set well, but predicts other data sets poorly. This distinction between describing and predicting is often overlooked.
George Box is frequently quoted as saying that "all models are wrong", but "some are useful". In sensitivity analysis, we alter the model and investigate how its results change. If the results vary greatly with modest changes in a parameter, that parameter will benefit from further study and experimental data. Likewise, if the model is insensitive to a parameter, we can be more confident that our conclusions do not depend critically on its "correct" value. In the example SIR model, increasing the infection and recovery rates both alter the time-course and severity of the modelled epidemic (Fig. 2).
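A one-at-a-time sensitivity analysis of the SIR model can be sketched as follows. A compact version of the SIR integration is repeated here so the sketch stands alone; the baseline parameter values and the 10% perturbation are arbitrary choices for illustration.

```python
# One-at-a-time sensitivity sketch: perturb each SIR parameter by +10%
# and see how much the epidemic's peak prevalence changes.
# All parameter values are illustrative, not fitted to data.

def peak_infected(beta, gamma, s0=990.0, i0=10.0, days=100, dt=0.1):
    """Euler-integrate a closed SIR model; return the peak number infected."""
    n = s0 + i0
    s, i = s0, i0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        peak = max(peak, i)
    return peak

base = {"beta": 0.5, "gamma": 0.1}
baseline = peak_infected(**base)
for name in base:
    bumped = dict(base, **{name: base[name] * 1.1})
    change = (peak_infected(**bumped) - baseline) / baseline
    print(f"+10% {name}: peak prevalence changes by {change:+.1%}")
```

A parameter whose 10% bump produces a large swing in peak prevalence is a good candidate for further experimental study.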
Investigating control strategies is often a similar process to sensitivity analysis: we amend some aspect of the model (e.g. reduce the infection rate), rerun it, and evaluate whether the predicted outcome improves.
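This rerun-and-compare process can be sketched directly: below, the infection rate is halved (standing in for some hypothetical control measure such as improved biosecurity) and the final epidemic sizes are compared. The SIR integration is repeated so the sketch stands alone, and all values are illustrative.

```python
# Control-strategy sketch: rerun the SIR model with the infection rate
# halved (a hypothetical control measure) and compare the total number
# of fish ever infected. Parameter values are illustrative only.

def final_size(beta, gamma=0.1, s0=990.0, i0=10.0, days=365, dt=0.1):
    """Euler-integrate a closed SIR model; return total ever infected (I + R)."""
    n = s0 + i0
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return i + r

no_control = final_size(beta=0.5)
with_control = final_size(beta=0.25)
print(f"Total infected: {no_control:.0f} without control, "
      f"{with_control:.0f} with control")
```

The difference between the two runs is the model's prediction of the benefit of the control measure, which can then be weighed against its cost.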
This simple SIR model cannot describe every aspect of disease spread in detail. We seek a balance between such simplistic models, used strategically, and more complex mechanistic models designed for particular diseases. The latter are useful where much is already known about the disease, and may offer better predictive power. For example, Fig. 3 shows output from a model currently under development for ich (white spot) in rainbow trout (DM Green, AP Shinn and NGH Taylor, ongoing work). Here, both host and parasite populations, as well as disease control methods, are modelled in a more detailed fashion.
In summary, epidemic modelling is a useful tool for studying disease control in conjunction with field and lab studies, and can both make predictions and pose hypotheses for further study.