The study investigates analytical robustness in the social and behavioral sciences by having multiple independent teams reanalyze the same datasets. Across reanalyses of 100 studies, the authors found that researchers' individual choices in data processing and statistical modeling substantially affect both numerical results and final scientific conclusions. While 74% of reanalyses supported the original claims, only 34% produced nearly identical statistical values, revealing a high degree of analytical variability. The study suggests that published findings are often contingent on a single analytical path, which can lead to overstating the certainty of an effect. To address this, the authors advocate transparent practices, such as multi-analyst projects and multiverse analyses, to better communicate scientific uncertainty. The report concludes that the field must move beyond "single-shot" analyses to ensure that empirical evidence is truly robust.
The paper (https://www.nature.com/articles/s41586-025-09844-9), co-authored by 457 researchers, is part of a package of papers describing how methods and analysts shape results in the behavioral and social sciences.
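The core idea behind the multiverse analyses the authors recommend is to run every defensible combination of analytic choices, rather than a single pipeline, and report the resulting spread of estimates. A minimal toy sketch of that idea, with entirely hypothetical data and choices (outlier exclusion and a log transform), might look like this:

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical dataset: a binary treatment indicator x and a noisy
# outcome y, plus one extreme value that some analysts would exclude.
data = [(x, 0.5 * x + random.gauss(0, 1)) for x in (0, 1) * 50]
data.append((1, 12.0))  # an outlier whose handling is an analytic choice

def effect(points):
    """Effect estimate: difference in mean outcome, treated minus control."""
    treated = [y for x, y in points if x == 1]
    control = [y for x, y in points if x == 0]
    return statistics.mean(treated) - statistics.mean(control)

# The "multiverse": every combination of two defensible choices.
estimates = {}
for drop_outliers in (False, True):
    for log_transform in (False, True):
        pts = data
        if drop_outliers:
            # Exclude outcomes more than 3 SDs from the mean.
            m = statistics.mean(y for _, y in pts)
            s = statistics.stdev(y for _, y in pts)
            pts = [(x, y) for x, y in pts if abs(y - m) <= 3 * s]
        if log_transform:
            # Shift so all outcomes are non-negative, then log-transform.
            shift = min(y for _, y in pts)
            pts = [(x, math.log1p(y - shift)) for x, y in pts]
        estimates[(drop_outliers, log_transform)] = effect(pts)

# Report the full distribution of estimates, not one "chosen" result.
for spec, est in sorted(estimates.items()):
    print(f"drop_outliers={spec[0]!s:5} log_transform={spec[1]!s:5} "
          f"effect={est:+.3f}")
```

Even in this toy case the four specifications yield different effect sizes, which is the kind of choice-driven variability the paper documents at scale across independent analysis teams.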