Social scientists and replication: Tell me what you really think!
At a time of rising publication rates and the growing differentiation of scientific disciplines, it is becoming harder to establish quality standards and to verify them systematically. The authors therefore argue, on several grounds, for more replication studies, for example ones based on the same data as the original research. This blog post is based on the academic article "Perceptions and Practices of Replication by Social and Behavioral Scientists: Making Replications a Mandatory Element of Curricula Would Be Useful".
In times of increasing publication rates and specialization of disciplines, it is particularly important for academia to reflect upon measures to safeguard the integrity of research beyond classical peer review. Empirical economics faces this challenge especially, both because of its responsibility towards society and because an increasing number of studies have called the reproducibility of findings into question (1-4). A prominent example is Reinhart and Rogoff's study "Growth in a Time of Debt" on the effectiveness of austerity-based fiscal policies for highly indebted economies (5). The study's results fed directly into policy, although they rested on fundamental miscalculations, as a replication study by Herndon et al. demonstrated (6).
Replication studies are important because they contribute to the self-correction abilities of the self-referential scientific ecosystem. Moreover, "low cost" replication studies that use the primary investigator's original dataset seem increasingly feasible, given the pressure from funding agencies and science policy makers to make research data available (7, 8). Nonetheless, replication studies are rarely conducted (9).
To better understand researchers' views towards replication, we surveyed the perceptions and replication practices of 300 social and behavioral scientists who use data from the German Socio-Economic Panel Study (SOEP), a widely analyzed multi-cohort study of the German population (10).
84% of the surveyed researchers agree that replications are necessary for improving scientific output, and 71% disagree with the statement that replications are not worthwhile because major mistakes will be found at some point anyway.
58% of our respondents had never attempted a replication, despite the fact that SOEP data is easily obtained, well documented, and frequently analyzed. Among those respondents who had conducted a replication study in the past, more than half did so during regular coursework, either while teaching a class (13% of all respondents) or as students (9%). 20% of the respondents used a replication of a SOEP article for their own research. Of those who never conducted a replication study, 76% never saw a need to do so, while the rest thought it would be too time-consuming (15%) or lacked the necessary information (9%), whether about the data, the software, or the way the results in the original article were produced (i.e., the scripts were not available).
As for those who did replicate a SOEP article, 84% were able to reproduce the results of the original article (although the results were not always exactly identical to those found by the original authors), while only 16% were not able to do so. When asked about the reason why the results could not be completely replicated, 69% of the respondents stated that the information about details of the analysis in the original article was insufficient.
The situation regarding replications can be regarded as a "tragedy of the commons": everybody knows that they are useful, but almost everybody counts on others to conduct them. A possible explanation is that conducting replication studies is not worthwhile within the academic reward system, since they are time-consuming and rarely published (9). Previous research has shown that impact considerations already steer replication efforts (11, 12): researchers tend to target high-impact studies. Nevertheless, the number of replication studies remains remarkably low. Against this background, we argue that more replications would be conducted if they received more formal recognition; for example, journals could adapt their policies and publish more replication studies (13). Our results furthermore show that most replication studies are conducted in the context of teaching. In our view, this is a promising detail: to increase the number of replication studies, it seems useful to make replications a mandatory part of curricula and an optional chapter of (cumulative) doctoral theses.
Benedikt Fecher is a doctoral student at the German Institute for Economic Research and the Alexander von Humboldt Institute for Internet and Society. Mathis Fräßdorf is Head of the Department for Scientific Information at the Wissenschaftszentrum Berlin für Sozialforschung. Gert Wagner is Professor of Economics at the Berlin University of Technology. Correspondence about this blog should be directed to Benedikt Fecher at firstname.lastname@example.org.
This blog post was first published on The Replication Network.
(1) R. G. Anderson, A. Kichkha, Replication versus Meta-Analysis in Economics: Where Do We Stand 30 Years After Dewald, Thursby and Anderson? (2017).
(2) C. F. Camerer et al., Evaluating replicability of laboratory experiments in economics. Science (2016), doi:10.1126/science.aaf0918.
(3) W. G. Dewald, J. G. Thursby, R. G. Anderson, Replication in Empirical Economics: The Journal of Money, Credit and Banking Project. The American Economic Review. 76, 587–603 (1986).
(4) M. Duvendack, R. Jones, R. Reed, What is Meant by “Replication” and Why Does It Encounter Such Resistance in Economics? (2017).
(5) C. Reinhart, K. Rogoff, “Growth in a Time of Debt” (w15639, National Bureau of Economic Research, Cambridge, MA, 2010).
(6) T. Herndon, M. Ash, R. Pollin, Does high public debt consistently stifle economic growth? A critique of Reinhart and Rogoff. Cambridge Journal of Economics. 38, 257–279 (2013).
(7) M. McNutt, Reproducibility. Science. 343, 229 (2014).
(8) B. Fecher, G. G. Wagner, A research symbiont. Science. 351, 1405–1406 (2016).
(9) C. L. Park, What is the value of replicating other studies? Research Evaluation. 13, 189–195 (2004).
(10) DIW Berlin, Übersicht über das SOEP (2015).
(11) D. Hamermesh, What is Replication? The Possibly Exemplary Example of Labor Economics (2017).
(12) S. Sukhtankar, Replications in Development (2017).
(13) J. H. Hoeffler, Replication and Economics Journal Policies (2017).