How Is Water Modeling Like a Bird Striking a Window? Both are flying into a deadly future while seeing it as not so different from the recent past.
I have this photo by my door, where I see it whenever I leave. It is a photo of a trace left when a bird struck a window. The collision leaves oils from the feathers, and over time, fine dust collects and reveals the imprint of a crash.
These bird strikes happen because the outdoors is bright, the window reflects the outdoor scene, and the indoors is darker and hard to see. From the bird's point of view, what lies ahead looks very much like what lies behind. It assumes the future will be very much like the recent past, so it keeps flying.
Computational models carry the same hazard as that bird. The basic art of modeling is to look at a set of data representing interacting variables. By examining how these variables influence each other and how much variability is in the system, the modeler makes predictions. These models are built from decades of data and encode the events and trends we experienced in that period. They richly describe knowledge of the past and project it into the future.
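To make that concrete, here is a minimal sketch of the pattern, using invented snowpack numbers (the data, the trend, and the noise level are all hypothetical, chosen only for illustration): fit a trend to a historical record, then extrapolate it forward. The projection is, by construction, the past extended.

```python
import numpy as np

# Hypothetical record: 30 years of April snowpack water content (inches).
# Every number here is illustrative, not a real measurement.
rng = np.random.default_rng(0)
years = np.arange(1990, 2020)
snowpack = 30 - 0.1 * (years - 1990) + rng.normal(0, 2, size=years.size)

# "Learn" the past: fit a linear trend to the historical record.
slope, intercept = np.polyfit(years, snowpack, deg=1)

# Project the past into the future: extrapolate the same trend forward.
future = np.arange(2020, 2031)
forecast = slope * future + intercept

# The forecast assumes the future behaves like the past. A regime the
# model was never shown (a steeper decline, a new extreme) is invisible
# to it -- the reflection in the window.
for y, f in zip(future, forecast):
    print(f"{y}: predicted snowpack ~ {f:.1f} in")
```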
Quantitative models are central to managing water resources. They take weather patterns, water usage, and geology and forecast how much water will be available at various points in the system. If they are wrong, we are in serious trouble: regions may run out of water for agriculture, for people, and for water-dependent ecosystems. Failure to predict means failure to deliver, which harms everything and everyone dependent on clean fresh water. (Photo: Examples of different models.)
These models are complex, sophisticated, and computationally demanding, and operating them requires people trained in computer science, geology, and hydrology.
Yet, we are experiencing an increasing number of failures of these models.
There are models of Sierra snowpack water content. This matters because the Sierra snowpack is a critical water reservoir, releasing fresh water through the dry summer season. Recent predictions have become so poor that both the model and the modelers have come under question[1]. Likewise, the water model for the Delta, which is critical for supplying fresh water to the California Water Project and for assuring the survival of threatened species, has been cited for poor-quality predictions[2].
The complex Napa sub-basin model forecasts a stable supply for decades without degradation of the groundwater. Yet that model failed to predict the recent past: a string of years of aquifer overdraft. Conditions are different from what the model understands; it could not predict even the obvious.
Each problematic model has its own issues and drivers, but one thing they all have in common is that they are built on knowledge of systems as they behaved in the past, and so they predict the past much better than the future. The past is richly detailed and well understood. The models lead their viewers to see the future as a simple extension of that past, rather than as something dimly understood and uncertain. Models fail because they project the past into the future, assuming it will be the same, with a few differences.
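The same toy setup shows why the failure is so easy to miss: judged against the record it was built from, the model looks excellent; judged against a shifted regime, it falls apart. Again, every number here is invented for illustration.

```python
import numpy as np

# Hypothetical historical record (same illustrative setup as above).
rng = np.random.default_rng(1)
years = np.arange(1990, 2020)
past = 30 - 0.1 * (years - 1990) + rng.normal(0, 2, size=years.size)
slope, intercept = np.polyfit(years, past, deg=1)

# In-sample: the model reproduces the record it was fitted to.
in_err = np.mean(np.abs(past - (slope * years + intercept)))

# Out-of-sample under a regime change: suppose the decline steepens
# after 2020 to 0.6 in/year. The model, trained on the old regime,
# keeps forecasting the gentle historical trend.
future = np.arange(2020, 2031)
actual = past[-1] - 0.6 * (future - 2019) + rng.normal(0, 2, size=future.size)
out_err = np.mean(np.abs(actual - (slope * future + intercept)))

print(f"mean error on the past:   {in_err:.1f} in")  # small
print(f"mean error on the future: {out_err:.1f} in") # larger, and growing each year
```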
Modeling enterprises will be loath to acknowledge that their methods, down to the core assumptions and computational techniques, will have to be rethought. They will have to bring new ways of thinking into the room and consider replacing the millions they have invested in software. Without rethinking how we model and forecast, expect the modeling equivalent of the bird strike; we can only hope the consequences will not be severe.
So be mindful when experts start to give long reports, full of graphs and charts, about model-based forecasts. They may be looking at a reflection rather than the future.