It’s still early days for crowdsourced citizen science projects. For all of their achievements, there is still room for improvement. Recent research points the way towards developing more engaging projects.
The Institute of Physics’ Physics World podcast interviewed Zooniverse co-founder Chris Lintott. After reviewing Galaxy Zoo’s origins and discussing the opportunities citizen science creates for scientists, Lintott addressed some of the areas where crowdsourced science projects need to improve engagement and communication with their volunteers.
That need was reinforced by an independent study of Zooniverse’s volunteers by Georgia Tech business school professor Henry Sauermann, who looked at participation levels in seven Zooniverse projects over a six-month period. Sauermann reported that, while the projects’ volunteers contributed the equivalent of $1,500,000 in labor, most of the work was done by a small share of those volunteers. Sauermann believes in the power of crowdsourced science, but his results underscore the engagement gap Lintott described.
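The skew Sauermann measured is easy to quantify. Here is a minimal sketch, using hypothetical per-volunteer classification counts (not Sauermann’s data), of how one might compute the share of total work done by the most active volunteers:

```python
# A sketch with made-up numbers: what fraction of all classifications
# comes from the top 10% of volunteers?

def top_share(contributions, top_fraction=0.10):
    """Fraction of total work done by the most active `top_fraction` of volunteers."""
    ranked = sorted(contributions, reverse=True)
    top_n = max(1, int(len(ranked) * top_fraction))
    return sum(ranked[:top_n]) / sum(ranked)

# Hypothetical counts: a few heavy contributors, many volunteers
# who classify only a handful of objects before moving on.
counts = [900, 850, 400] + [5] * 97
print(f"Top 10% of volunteers did {top_share(counts):.0%} of the work")
```

Even this toy distribution reproduces the pattern: a tenth of the volunteers account for well over half the output.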
Caren Cooper reacted to Sauermann’s report on her PLOS CitizenSci blog, arguing that the strength of citizen science lies in its broad-but-shallow reach. Even the smallest contribution, she maintains, helps scientific research accomplish great things. In addition, the accessibility that makes it so easy for people to dip in and out of projects lets them find projects they can be passionate about. Sauermann’s paper lies behind the Proceedings of the National Academy of Sciences paywall (DOI: 10.1073/pnas.1408907112, $10 for two-day access), but the Georgia Tech press release has a detailed summary of his results.
Two Brazilian researchers have taken a first step towards improving the engagement of crowdsourcing volunteers. Lesandro Ponciano and Francisco Brasileiro of the Universidade Federal de Campina Grande used data from Galaxy Zoo and the Milky Way Project to classify volunteers’ behavior into five categories: hardworking, spasmodic, persistent, lasting, and moderate. Hardworking volunteers, for example, quickly make many contributions but soon leave the project; they represent ~20% of all volunteers but account for only ~9% of the time contributed to the projects. Persistent volunteers, on the other hand, contributed ~40% of the time but represent only ~13% of all volunteers. The authors believe that citizen science project designers can use these profiles to recruit and retain volunteers more effectively. You can read their results in Human Computation (open access) as well as in their arXiv preprint (1501.02134).
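To make the profiles concrete, here is a hedged sketch of profile-style classification. The paper derives its categories from volunteers’ engagement metrics; the metric names and thresholds below are illustrative assumptions for this sketch, not the authors’ actual definitions or values:

```python
# Toy rules loosely inspired by the hardworking/persistent distinction.
# The two metrics and all cutoffs are assumptions, not values from the paper.

from dataclasses import dataclass

@dataclass
class Volunteer:
    days_active: int         # days between first and last contribution
    active_day_ratio: float  # fraction of those days with any activity

def profile(v: Volunteer) -> str:
    if v.days_active <= 7 and v.active_day_ratio > 0.8:
        return "hardworking"   # intense burst of work, short stay
    if v.days_active > 90 and v.active_day_ratio > 0.5:
        return "persistent"    # steady activity over a long period
    if v.days_active > 90:
        return "lasting"       # long tenure, sparse activity
    if v.active_day_ratio < 0.2:
        return "spasmodic"     # irregular bursts
    return "moderate"

print(profile(Volunteer(days_active=5, active_day_ratio=0.9)))    # hardworking
print(profile(Volunteer(days_active=200, active_day_ratio=0.6)))  # persistent
```

A project could run rules like these over its activity logs to decide, say, which volunteers to nudge with a re-engagement email.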
A goal of some crowdsourced citizen science projects is to create more efficient automated software. By training the software to perform its task better, the volunteers let scientists scan even larger datasets. Lawrence Technological University undergraduate Evan Kuminski and computer science professor Lior Shamir described their use of Galaxy Zoo classifications to test their galaxy-classifying algorithm in a poster presentation at the AAS meeting. Their algorithm isn’t ready to replace citizen scientists yet: while it reached 95% accuracy in some areas, in others it matched the citizen science results only 36% of the time. Yet it is this kind of back and forth between software and volunteers that will let scientists analyze the enormous data sets future telescopes produce while preserving citizen scientists’ crucial role in astronomy research. Their paper lies behind the Publications of the Astronomical Society of the Pacific paywall (DOI: 10.1086/678977, $19), but you can read the arXiv preprint (1409.7935) for more information.
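The basic evaluation step in this kind of study, comparing an algorithm’s output against crowd classifications treated as ground truth, can be sketched as follows. The labels and data here are hypothetical stand-ins, not the study’s actual Galaxy Zoo categories or results:

```python
# Compare algorithm labels against crowd labels, reporting the agreement
# rate per crowd-assigned class (so strong and weak areas show separately).

from collections import defaultdict

def per_class_agreement(crowd, algo):
    """Map each crowd label to the fraction of those objects the algorithm matched."""
    hits, totals = defaultdict(int), defaultdict(int)
    for c, a in zip(crowd, algo):
        totals[c] += 1
        if c == a:
            hits[c] += 1
    return {label: hits[label] / totals[label] for label in totals}

# Hypothetical classifications of five galaxies.
crowd = ["spiral", "spiral", "elliptical", "spiral", "elliptical"]
algo  = ["spiral", "elliptical", "elliptical", "spiral", "spiral"]
print(per_class_agreement(crowd, algo))
```

Breaking agreement down per class is what reveals the uneven picture the poster reported: high agreement for some galaxy types alongside much lower agreement for others.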