Project Details

Measuring the Unmeasurable. Trust, validation, and the social organization of simulation modeling

Subject Area Theoretical Philosophy
Term from 2019 to 2023
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 435350290
 
Final Report Year 2024

Final Report Abstract

Computer simulation has become a standard instrument in science and engineering. Simulation models help to fit theories into concrete contexts of application; they are prized for their adjustability and can accommodate large amounts of data. The most interesting cases are those in which simulations do not merely reproduce known data or phenomena but advance into the empirically unknown, where data for comparison are unavailable. In many cases, experimental evidence cannot be obtained at all, yet models can still be used for predictions; simulation is then used for “measuring the unmeasurable.” Starting from this observation, the main research questions were how simulation models are validated and how trust in them is created, maintained, and justified. It turned out that, even when researchers aim at simulating quantities they cannot measure, the match with measured data is the most important criterion for validation. Both a model’s ability to reproduce data with high precision and the precision of the data themselves are conducive to validation. Although in such cases data for comparison already exist at the time of simulation, matching these data counts as a proxy for predictive success. On this basis, researchers tend to trust that their models can simulate (extrapolate to) cases that are not measurable. However, such trust rests on both predictive success, i.e., matching existing data, and theoretical reasoning that makes extrapolation plausible. A third factor proved to be essential as well: software. Because many relevant applications require extensive code, the quality of that code is an important factor. Researchers trust highly tested and well-established software packages. Of course, researchers have to modify existing code; as a rule, the less modification, the more trust. This differs importantly from older accounts of validation, which prescribe how one should create valid code.
Today, simulationists rely on established code rather than creating it themselves – an effect of the division of labor. However, the picture is complicated by the adjustable parameters of a model. The main insight gained from the project concerns this point: adjustable parameters have both positive and negative effects on validation and trust, so researchers have to strike a delicate balance. When achieving a match with data, parameters create plasticity, which is a virtue. But there is also a downside, because the process of adjustment creates interdependencies between the various components of the model. As a consequence, theoretical reasoning about model behavior and analysis of the functional role of sub-modules become difficult, if not impossible. In short, usability and validation are conflicting goals. The project left the beaten path because it examined cases from the engineering sciences, where models have to perform in given (not idealized) contexts, and because it embedded philosophical research in science and engineering.
