Which data are needed to justify complex, PDE-based flow models on the catchment scale?
PhD Researcher: Marvin Höge
Supervisors: Wolfgang Nowak (University of Stuttgart), Thomas Wöhling (TU Dresden), Walter Illman (University of Waterloo)
For modeling catchment-scale flow processes in the subsurface coupled with surface waters, many model schools exist. They differ vastly in complexity, ranging from conceptual hydrological models through distributed hydrological models up to full-fledged approaches based on partial differential equations (PDEs). It is a well-known problem that the more complex a model approach is, the more data are required to suitably constrain its increasing number of parameters and thus to make the model legitimate. The goal of this project is to answer the following driving questions:
- Can the data demand of highly resolved PDE-based models ever be satisfied with a reasonable extent of catchment investigation?
- Do we need to switch back to zonation-based or other concepts that use simple parameterizations for geological heterogeneity?
- Do we even need to revert to extremely simplified hydrological-style models?
- Can optimal collection of data help to meet the data demand at reasonable cost?
The approach is a clean uncertainty analysis of several competing models that differ vastly in complexity (e.g., distributed hydrological versus zonation/PDE-based versus geostatistics/PDE-based). The uncertainty analysis will reveal the major sources of uncertainty, covering both parametric uncertainty and model-conceptual uncertainty. Using Bayesian analysis tools in synthetic across-model numerical studies, we will identify the level of data availability that marks the transition point in model legitimacy between the competing models. While this analysis is based on realistic field investigation scenarios mimicked from real catchments, a second analysis will perform a formal optimization of data collection schemes. This will reveal whether different (or even optimal) data collection schemes can help to establish the operational legitimacy and adequacy of more complex models at lower data availability.
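To illustrate the kind of Bayesian analysis tool meant here, the following is a minimal sketch (not taken from the project) of posterior model weights computed from brute-force Monte Carlo estimates of the Bayesian model evidence. The two toy models, their priors, the synthetic data, and all numerical values are hypothetical stand-ins for competing model schools of different complexity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic "truth": noisy observations of a quadratic response.
x = np.linspace(0.0, 1.0, 20)
y_obs = 1.0 + 2.0 * x + 1.5 * x**2 + rng.normal(0.0, 0.1, x.size)
sigma = 0.1  # assumed measurement-error standard deviation

def log_likelihood(y_sim):
    """Gaussian log-likelihood of the observations given one model run."""
    r = y_obs - y_sim
    return -0.5 * np.sum((r / sigma) ** 2) - y_obs.size * np.log(sigma * np.sqrt(2.0 * np.pi))

def log_evidence(simulator, n_prior=5000):
    """Monte Carlo estimate of log p(y_obs | M) = log E_prior[p(y_obs | theta, M)],
    using a log-sum-exp trick for numerical stability."""
    logs = np.array([log_likelihood(simulator(rng)) for _ in range(n_prior)])
    m = logs.max()
    return m + np.log(np.mean(np.exp(logs - m)))

# Two competing toy "model schools" of different complexity:
def simple_model(rng):                   # linear response, 2 parameters
    a, b = rng.normal(0.0, 3.0, 2)
    return a + b * x

def complex_model(rng):                  # quadratic response, 3 parameters
    a, b, c = rng.normal(0.0, 3.0, 3)
    return a + b * x + c * x**2

log_ev = np.array([log_evidence(simple_model), log_evidence(complex_model)])
weights = np.exp(log_ev - log_ev.max())
weights /= weights.sum()                 # posterior model weights (uniform model prior)
print(weights)
```

In a study like the one described above, such weights would be tracked as a function of data availability to locate the transition point at which the more complex model becomes legitimate.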
The novelty of this approach is twofold:
- Bayesian model analysis has never before been applied to models of vastly different complexity and from different model schools in the field of hydro(geo)logy.
- The application of optimal design of experiments in this field is new as well.
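To make the optimal-design idea concrete, here is a hedged toy sketch of one classic formulation, greedy D-optimal design: measurement locations are chosen one at a time to maximize the log-determinant of the posterior precision of the parameters. The quadratic model, prior, noise level, and candidate locations are all hypothetical and serve only to illustrate the technique, not the project's actual design procedure:

```python
import numpy as np

candidates = np.linspace(0.0, 1.0, 21)   # candidate measurement locations
sigma = 0.1                              # assumed measurement-error std
prior_cov = np.eye(3) * 4.0              # Gaussian prior on theta = (a, b, c)

def design_matrix(xs):
    """Regressors of the toy quadratic model a + b*x + c*x^2 at locations xs."""
    xs = np.atleast_1d(np.asarray(xs, dtype=float))
    return np.column_stack([np.ones_like(xs), xs, xs**2])

def log_det_posterior_precision(xs):
    """D-optimality score: log det of the posterior precision matrix.
    A larger value means a more informative measurement design."""
    X = design_matrix(xs)
    precision = np.linalg.inv(prior_cov) + X.T @ X / sigma**2
    return np.linalg.slogdet(precision)[1]

# Greedily pick 3 measurement locations, one at a time.
chosen = []
for _ in range(3):
    scores = [log_det_posterior_precision(chosen + [x]) for x in candidates]
    chosen.append(float(candidates[int(np.argmax(scores))]))
print(chosen)
```

Formal optimization of data collection schemes, as proposed above, generalizes this idea from a toy regression to full catchment models and richer utility functions.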
This is fundamental, method-oriented research. To begin with, it is restricted to flow only (no transport) and to synthetic numerical studies.
This topic continues research from the previous round of the graduate school (Optimal Design of Monitoring in Coupled Hydrosystems). It addresses the legitimacy of catchment-scale models of vastly different complexity, i.e., from different modeling schools (such as conceptual-hydrologic, distributed-hydrologic, and physics-based with partial differential equations). There will be direct synergies and close collaboration with neighboring topics within the IRTG (pursued by Matthias Loschko and Reynold Chow).
- Höge, M., Wöhling, T., Nowak, W. (2018): A Primer for Model Selection: The Decisive Role of Model Complexity. Water Resources Research, 54(3): 1688-1715, doi: 10.1002/2017WR021902
- Höge, M., Guthke, A., Nowak, W. (2019): The Hydrologist's Guide to Bayesian Model Selection, Averaging and Combination. Journal of Hydrology, 572, 96-107, doi: 10.1016/j.jhydrol.2019.01.072