
Digital Society Blog

Making sense of our connected world


Crowd Science

07 December 2016

Online platforms offer a variety of opportunities for volunteers to engage in the process of knowledge generation. How are crowd science projects brought into being, and how do they work? An insight into how such projects take shape.

Setting up

Crowd science is scientific research that is conducted with the participation of volunteers who are not professional scientists. Crowd science can be seen as a method of scientific discovery, as a way of engaging volunteers in knowledge creation, or as a form of science communication. While the involvement of volunteers in scientific discovery is nothing new, the potentially large number of volunteers that can be mobilised to support data-rich or labour-intensive projects sets it apart from forms of volunteer engagement that pre-date the internet and online platforms.

In our open science team we wanted to gain a better understanding of how crowd science projects are set up. What are the objectives behind crowd science projects, and what ways are there of accessing the crowd? What tasks do volunteers perform? Are there any quality assurance and feedback mechanisms? We focused our analysis on crowd science in Germany and conducted in-depth case studies of twelve crowd science projects. As part of our case studies we interviewed project managers or other involved individuals about their experiences of setting up and maintaining crowd science projects. Here is a short summary of the findings.


There are crowd science projects where the predominant objective is the generation of knowledge. These projects are typically set up by scientists who either use crowd science as a means of answering a research question or pursue a data-driven approach to a research topic. For other crowd science projects, the general interest in a topic is the key concern. These projects are typically set up by individuals with a passion for a topic and an ability to motivate others to engage with it.


There are different ways of accessing the crowd. The crowd building strategy is concerned with recruiting volunteers around a specific topic; this strategy is used by most crowd science projects. The crowd harnessing strategy relies on tapping into an already existing community; this strategy requires aligning the project's objectives with the interests of that community. Another approach is to employ a large crowd of volunteers to try to reach a goal by trial and error; this effect-based approach is rare, but it has the potential to break the typical pattern of a few dedicated individuals producing a substantial amount of the work while others contribute relatively little. Instead, it is conceivable that a crowd of one-time volunteers produces results of comparable quality to the output generated by power-volunteers.


Typical tasks involved in crowd science projects are annotating, collecting, and producing. Annotating refers to adding a form of metadata to existing data (for example tagging images). Collecting means gathering data of some sort (for example catching a mosquito). Producing involves creating new content (for example writing a text). Moreover, tasks performed by volunteers vary in their degree of complexity. In general, tasks that involve some form of annotating or collecting tend to be of simple or medium complexity, while tasks that involve some form of producing can rather be classified as hard. Breaking down tasks into manageable units is an essential part of crowd science and the basis for scaling it up.


Assuring the quality of data generated by the crowd is a challenge. Projects that involve simple tasks can to some extent use automated quality assurance mechanisms (for instance 'double-keying', whereby a data point is only accepted once it has been entered identically at least twice, by independent volunteers). Projects that deal with data generated from performing more complex tasks still rely on humans to ensure data quality, though this might change in the future thanks to advances in machine learning techniques.
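The double-keying idea can be illustrated in a few lines of code. This is a minimal sketch, not taken from any of the studied projects: a helper (hypothetical name `double_key_validate`) accepts a crowd-sourced value only when a minimum number of independent submissions agree exactly.

```python
from collections import Counter

def double_key_validate(submissions, min_agreement=2):
    """Return the submitted value if at least `min_agreement` independent
    volunteers entered exactly the same thing; otherwise return None."""
    if not submissions:
        return None
    value, count = Counter(submissions).most_common(1)[0]
    return value if count >= min_agreement else None

# Two volunteers agree on a transcribed year, a third mistyped it:
double_key_validate(["1887", "1887", "1881"])  # accepted: "1887"
# Only two conflicting entries so far: validation is withheld:
double_key_validate(["1887", "1881"])          # no agreement: None
```

Real projects would add further checks (normalising whitespace and case before comparing, or routing disputed entries to a human reviewer), but the core mechanism is this simple majority-of-identical-entries rule.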


Providing volunteers with feedback is an important tool for motivating them and keeping them engaged. While crowd science projects that employ gamification approaches have inherent feedback mechanisms, other projects face the challenge of finding forms of communicating with volunteers that are commensurate with the value of their contributions.

Way to go

Crowd science opens up pathways for pursuing unconventional research ideas, blurs the boundaries between institutional science and civil society, provides opportunities for volunteer engagement in science, and enriches science communication. Crowd science also raises questions concerning data-driven approaches to scientific discovery as well as the development of mechanisms for automated quality assurance and feedback. While applying approaches such as gamification in crowd science projects seems promising, more strategies and best-practice examples are needed for making the most of the crowd's potential.

This blog post is based on the following paper:
Scheliga K, Friesike S, Puschmann C and Fecher B (2016) Setting up crowd science projects. Public Understanding of Science. DOI: 10.1177/0963662516678514

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact

Kaja Scheliga

Former Associated Researcher: Learning, knowledge, innovation
