
Combining computer models enhances their accuracy

Computer models are frequently used to aid landscape decisions, but the complexity of the world around us makes it impossible to accurately account for all of the patterns and processes that might affect outcomes. Drawing on new research, Prof Simon Willcock, Prof James M. Bullock and Dr Danny A. P. Hooftman explain how combinations – or ‘ensembles’ – of models can produce more accurate and reliable information.

In an effort to make predictions and avoid negative impacts (e.g. from climate change or Covid-19), scientists and decision-makers often turn to computer models. However, the real world is so complex that it is impossible to represent all patterns and processes accurately within a model. So, models are always simplified versions of the systems they are intended to represent. How well do these simplified models represent reality? Can we use them to support real-world decisions?

Scientific models are generally produced by subject specialists who have dedicated large portions of their careers to a better understanding of their system of interest. As such, one might expect models to draw together state-of-the-art knowledge to fully describe a system and how that system may respond to shocks.

With perfect data availability, this might be achievable. In practice, however, models are most useful precisely when our knowledge is imperfect – e.g. to extrapolate into a new region (such as estimating how new Covid-19 strains might impact the UK, based on their impacts in their country of origin) or to a plausible, but unknown, future (such as pre-empting the impacts of global temperatures exceeding values not seen on this planet for millions of years).

As such, scientific models might better be viewed as a (very well) educated guess than as fact. In this sense, each model can be thought of as representing the expert opinion of one or more scientists. Indeed, even when representing the same thing, it is rare for models to agree completely. Rather, as when eliciting opinions, some models are in closer agreement than others. In part, this may arise because the models are very similar (akin to the similar opinions you might get when asking members of the same family). Thus, agreement between independent models tends to carry more weight. Similarly, just as you might be more likely to trust an individual whose advice has proven true in the past, models that more closely match the data we do have – a comparison known as validation – should be trusted more.
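A minimal Python sketch may help make this idea of validation-earned trust concrete. The observations, predictions and weighting scheme below are invented purely for illustration, not taken from any of the studies mentioned here:

```python
import numpy as np

# Invented example: three models predict an ecosystem service
# (say, stored carbon in tonnes per hectare) at four sites where
# we also hold ground-truth field measurements for validation.
observed = np.array([120.0, 95.0, 150.0, 80.0])
predictions = np.array([
    [110.0, 100.0, 140.0, 90.0],  # model A: close to the data
    [125.0,  90.0, 155.0, 78.0],  # model B: close to the data
    [ 60.0, 140.0, 200.0, 40.0],  # model C: a poor fit
])

# Validation step: score each model by its root-mean-square error
# against the observations; a lower error earns more trust.
rmse = np.sqrt(((predictions - observed) ** 2).mean(axis=1))

# Turn the scores into weights that sum to one.
weights = (1.0 / rmse) / (1.0 / rmse).sum()
for name, w in zip("ABC", weights):
    print(f"model {name}: weight {w:.2f}")
# The well-validated models A and B receive most of the weight,
# while the poorly validated model C contributes little.
```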

Models are often used to calculate and map the many benefits that people derive from nature (ranging from food and water to slowing climate change by absorbing and storing carbon). For example, we may need to know the areas that provide the greatest benefits (and thus warrant the most protection). The above figure shows different model outputs for Derbyshire, UK. The leftmost and central boxes show two individual models, both highlighting areas where ecosystem conservation might be prioritised. However, these models disagree, with minimal overlap in the areas recommended for protection. Due to limited conservation budgets, and the opportunity costs that would result from protecting large areas, it is impractical to follow the precautionary principle and conserve all areas recommended by each model.
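To get a feel for how little two such maps can share, here is a toy Python calculation. The two 'priority maps' are randomly generated, not the real Derbyshire model outputs:

```python
import numpy as np

# Toy illustration: two models each flag grid cells as
# conservation priorities (True) on the same 50 x 50 grid.
rng = np.random.default_rng(seed=1)
model_a = rng.random((50, 50)) > 0.8  # ~20% of cells flagged by model A
model_b = rng.random((50, 50)) > 0.8  # ~20% of cells flagged by model B

# Jaccard index: shared priority cells as a fraction of all cells
# flagged by either model.
overlap = np.logical_and(model_a, model_b).sum()
union = np.logical_or(model_a, model_b).sum()
print(f"Jaccard overlap: {overlap / union:.2f}")
# For two unrelated maps this lands around 0.1, echoing the minimal
# overlap between the two individual models described above.
```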

Just as we might consult additional friends when two of them disagree, we can turn to more models in an effort to achieve something close to a consensus. The rightmost image shows the result of combining the ‘opinions’ of a number of individual models – collectively called an ensemble of models. Indeed, a recent paper has shown that using ensembles of models in this way is more accurate (i.e. more likely to correctly predict the areas where nature’s contributions to people – termed ecosystem services – are greatest).
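One simple way to combine such maps is a 'one model, one vote' scheme. The sketch below uses randomly generated maps again, with a majority threshold chosen purely for illustration, to show how an ensemble narrows many conflicting recommendations down to the cells where models agree:

```python
import numpy as np

# Hypothetical ensemble: a real study would stack validated model
# outputs aligned to a shared spatial grid.
rng = np.random.default_rng(seed=2)
n_models = 8
# Each model's binary map of recommended priority cells.
maps = rng.random((n_models, 50, 50)) > 0.8

# Agreement surface: the fraction of models flagging each cell.
agreement = maps.mean(axis=0)

# Ensemble recommendation: cells flagged by at least half the models.
consensus = agreement >= 0.5
print(f"cells flagged by at least one model: {(agreement > 0).sum()}")
print(f"cells flagged by the ensemble:       {consensus.sum()}")
# The ensemble concentrates effort on the few cells where independent
# models agree, rather than the impractically large union of every
# individual recommendation.
```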

Because of this, ensembles of models are widely used (e.g. by the Intergovernmental Panel on Climate Change) to provide the best possible decision support. However, the use of ensembles is not universal. For example, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services sometimes relies on individual models to support decisions, and these should perhaps be viewed with more caution. As in the old fable, individual sticks are easily broken but a bundle of sticks is not; the same is true of models, with groups of models combined in an ensemble likely to be stronger, more accurate and more reliable.

♣♣♣

Prof Simon Willcock is a Professor of Sustainability at Bangor University, and Principal Research Scientist at Rothamsted Research.

Prof James M. Bullock is an Ecologist at the UK Centre for Ecology and Hydrology.

Dr Danny A. P. Hooftman is an Applied Ecologist and owner of Lactuca: Environmental Data Analyses and Modelling.