Crowdsourcing

Crowdsourcing involves a large group of dispersed participants contributing or producing goods or services—including ideas, votes, micro-tasks, and finances—for payment or as volunteers. Contemporary crowdsourcing often involves digital platforms to attract and divide work between participants to achieve a cumulative result. Crowdsourcing is not limited to online activity, however, and there are various historical examples of crowdsourcing. The word crowdsourcing is a portmanteau of "crowd" and "outsourcing".[1][2][3] In contrast to outsourcing, crowdsourcing usually involves less specific and more public groups of participants.[4][5][6]
Advantages of using crowdsourcing include lowered costs, improved speed, improved quality, increased flexibility, and/or increased scalability of the work, as well as promoting diversity.[7][8] Crowdsourcing methods include competitions, virtual labor markets, open online collaboration, and data donation.[8][9][10][11] Some forms of crowdsourcing, such as "idea competitions" or "innovation contests", provide ways for organizations to learn beyond the "base of minds" provided by their employees (e.g. Lego Ideas).[12][13][promotion?] Commercial platforms, such as Amazon Mechanical Turk, match microtasks submitted by requesters to workers who perform them. Crowdsourcing is also used by nonprofit organizations to develop common goods, such as Wikipedia.[14]
Definitions
The term crowdsourcing was coined in 2006 by two editors at Wired, Jeff Howe and Mark Robinson, to describe how businesses were using the Internet to "outsource work to the crowd", which quickly led to the portmanteau "crowdsourcing".[15] The Oxford English Dictionary gives a first use: "OED's earliest evidence for crowdsourcing is from 2006, in the writing of J. Howe."[16] The online dictionary Merriam-Webster defines it as: "the practice of obtaining needed services, ideas, or content by soliciting contributions from a large group of people and especially from the online community rather than from traditional employees or suppliers."[17]
Daren C. Brabham defined crowdsourcing as an "online, distributed problem-solving and production model."[18] Kristen L. Guth and Brabham found that the performance of ideas offered in crowdsourcing platforms is affected not only by their quality, but also by the communication among users about the ideas and by their presentation on the platform itself.[19]
Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public, and an open call for contributions to help solve the problem.[original research?] Members of the public submit solutions that are then owned by the entity who originally broadcast the problem. In some cases, the contributor of the solution is compensated monetarily with prizes or public recognition. In other cases, the only rewards may be praise or intellectual satisfaction. Crowdsourcing may produce solutions from amateurs or volunteers working in their spare time, from experts, or from small businesses.[15]
Historical examples
While the term "crowdsourcing" was popularized online to describe Internet-based activities,[18] some earlier projects can, in retrospect, be described as crowdsourcing.
Timeline of crowdsourcing examples
- 618–907 – The Tang dynasty of China introduced the joint-stock company, the earliest form of crowdfunding. During a cold period of the dynasty, colder climates resulted in poor harvests and a lessening of agricultural taxes, culminating in the fragmentation of the agricultural sector.[20] This fragmentation meant that the government had to reform the tax system, relying more on the taxation of salt and, most importantly, of business, which led to the creation of the joint-stock company.[20]
- 1567 – King Philip II of Spain offered a cash prize for calculating the longitude of a vessel while at sea.[21]
- 1714 – The longitude rewards: When the British government was trying to find a way to measure a ship's longitudinal position, they offered the public a monetary prize to whoever came up with the best solution.[22][23]
- 1783 – King Louis XVI offered an award to the person who could "make the alkali" by decomposing sea salt by the "simplest and most economic method".[22]
- 1848 – Matthew Fontaine Maury distributed 5000 copies of his Wind and Current Charts free of charge on the condition that sailors returned a standardized log of their voyage to the U.S. Naval Observatory. By 1861, he had distributed 200,000 copies free of charge, on the same conditions.[24]
- 1849 – A network of some 150 volunteer weather observers all over the USA was set up as a part of the Smithsonian Institution's Meteorological Project started by the Smithsonian's first Secretary, Joseph Henry, who used the telegraph to gather volunteers' data and create a large weather map, making new information available to the public daily. For instance, volunteers tracked a tornado passing through Wisconsin and sent the findings via telegraph to the Smithsonian. Henry's project is considered the origin of what later became the National Weather Service. Within a decade, the project had more than 600 volunteer observers and had spread to Canada, Mexico, Latin America, and the Caribbean.[25]
- 1884 – Publication of the Oxford English Dictionary: 800 volunteers catalogued words to create the first fascicle of the OED.[22]
- 1916 – Planters Peanuts contest: The Mr. Peanut logo was designed by a 14-year-old boy who won the Planters Peanuts logo contest.[22]
- 1957 – Jørn Utzon was selected as winner of the design competition for the Sydney Opera House.[22]
- 1970 – French amateur photo contest C'était Paris en 1970 ("This Was Paris in 1970") was sponsored by the city of Paris, France-Inter radio, and the Fnac: 14,000 photographers produced 70,000 black-and-white prints and 30,000 color slides of the French capital to document the architectural changes of Paris. Photographs were donated to the Bibliothèque historique de la ville de Paris.[26]
- 1979 – Robert Axelrod invited academics on-line to submit FORTRAN algorithms to play the repeated Prisoner's Dilemma; a tit-for-tat algorithm ended up in first place.[27]
- 1981 – Jilly Cooper gathered stories about mongrels for her book Intelligent and Loyal by placing adverts in newspapers asking people to share stories about their pets.[28][29]
- 1983 – Richard Stallman began work on the GNU operating system. Programmers from around the world contribute to GNU. The Linux kernel is one of the kernels used with it, forming the GNU/Linux operating system, which many people simply call Linux.
- 1996 – The Hollywood Stock Exchange was founded: It allowed buying and selling of shares.[22]
- 1997 – British rock band Marillion raised $60,000 from their fans to help finance their U.S. tour.[22]
- 1999 – SETI@home was launched by the University of California, Berkeley. Volunteers can contribute to searching for signals that might come from extraterrestrial intelligence by installing a program that uses idle computer time for analyzing chunks of data recorded by radio telescopes involved in the SERENDIP program.[30]
- 1999 – The U.S. Geological Survey's (USGS) "Did You Feel It?" website was used in the US as a method whereby residents could report any tremors or shocks they felt from a recent earthquake and the approximate magnitude of the earthquake.[31]
- 2000 – JustGiving was established: This online platform allows the public to help raise money for charities.[22]
- 2000 – UNV Online Volunteering service launched: Connecting people who commit their time and skills over the Internet to help organizations address development challenges.[32]
- 2000 – iStockPhoto was founded: The free stock imagery website allows the public to contribute to and receive commission for their contributions.[33]
- 2001 – Launch of Wikipedia: "Free-access, free content Internet encyclopedia".[34]
- 2001 – Foundation of Topcoder – crowdsourcing software development company.[35][36]
- 2004 – OpenStreetMap, a collaborative project to create a free editable map of the world, was launched.[37][38]
- 2004 – Toyota's first "Dream car art" contest: Children were asked globally to draw their "dream car of the future".[39]
- 2005 – Kodak's "Go for the Gold" contest: Kodak asked anyone to submit a picture of a personal victory.[39]
- 2005 – Amazon Mechanical Turk (MTurk) was launched publicly on November 2, 2005. It enables businesses to hire remotely located "crowdworkers" to perform discrete on-demand tasks that computers are currently unable to do.[40]
- 2005 – Reddit was launched.[41] It is a social media platform and online community where users can submit, discuss, and vote on content, leading to diverse discussions and interactions.
- 2009 – Waze (then named FreeMap Israel), a community-oriented GPS app, was created.[42] It allows users to submit road information and route data based on location, such as reports of car accidents or traffic, and integrates that data into its routing algorithms for all users of the app.
- 2010 - Following the Deepwater Horizon oil spill, BP initiated a crowdsourcing effort called the "Deepwater Horizon Response," inviting external experts and the public to submit innovative ideas and technical solutions for containing and cleaning up the massive oil spill. This initiative aimed to leverage collective intelligence to address the unprecedented environmental disaster.[43]
- 2010 – The 1947 Partition Archive, an oral history project that asked community members around the world to document oral histories from aging witnesses of a significant but under-documented historical event, the 1947 Partition of India, was founded.[44]
- 2011 – Casting of Flavours (Do us a flavor in the USA) – a campaign launched by PepsiCo's Lay's in Spain. The campaign was to create a new flavor for the snack where the consumers were directly involved in its formation.[45]
- 2012 – Open Food Facts, a collaborative project to create a libre encyclopedia of food products around the world using smartphones, was launched; it was later extended to cosmetics, pet food, other products, and prices.
Early competitions
Crowdsourcing has often been used in the past as a competition to discover a solution. The French government proposed several of these competitions, often rewarded with Montyon Prizes.[46] These included the Leblanc process (the Alkali prize), in which a reward was offered for producing alkali from sea salt, and Fourneyron's turbine, awarded when the first commercial hydraulic turbine was developed.[47]
In response to a challenge from the French government, Nicolas Appert won a prize for inventing a new way of food preservation that involved sealing food in air-tight jars.[48] The British government provided a similar reward to find an easy way to determine a ship's longitude in the Longitude Prize. During the Great Depression, out-of-work clerks tabulated higher mathematical functions in the Mathematical Tables Project as an outreach project.[49][unreliable source?] One of the largest crowdsourcing campaigns was a public design contest in 2010 hosted by the Indian government's finance ministry to create a symbol for the Indian rupee. Thousands of people sent in entries before the government zeroed in on the final symbol based on the Devanagari script using the letter Ra.[50]
Applications
A number of motivations exist for businesses to use crowdsourcing to accomplish their tasks. These include the ability to offload peak demand, access cheap labor and information, generate better results, access a wider array of talent than what is present in one organization, and undertake problems that would have been too difficult to solve internally.[51] Crowdsourcing allows businesses to submit problems on which contributors can work—on topics such as science, manufacturing, biotech, and medicine—optionally with monetary rewards for successful solutions. Although crowdsourcing complicated tasks can be difficult, simple work tasks[specify] can be crowdsourced cheaply and effectively.[52]
Crowdsourcing also has the potential to be a problem-solving mechanism for government and nonprofit use.[53] Urban and transit planning are prime areas for crowdsourcing. For example, from 2008 to 2009, a crowdsourcing project for transit planning in Salt Lake City was created to test the public participation process.[54] Another notable application of crowdsourcing for government problem-solving is Peer-to-Patent, which was an initiative to improve patent quality in the United States through gathering public input in a structured, productive manner.[55]
Researchers have used crowdsourcing systems such as Amazon Mechanical Turk or CloudResearch to aid their research projects by crowdsourcing some aspects of the research process, such as data collection, parsing, and evaluation to the public. Notable examples include using the crowd to create speech and language databases,[56][57] to conduct user studies,[58] and to run behavioral science surveys and experiments.[59] Crowdsourcing systems provided researchers with the ability to gather large amounts of data, and helped researchers to collect data from populations and demographics they may not have access to locally.[60][failed verification]
Artists have also used crowdsourcing systems. In a project called the Sheep Market, Aaron Koblin used Mechanical Turk to collect 10,000 drawings of sheep from contributors around the world.[61] Artist Sam Brown leveraged the crowd by asking visitors of his website explodingdog to send him sentences to use as inspirations for his paintings.[62] Art curator Andrea Grover argues that individuals tend to be more open in crowdsourced projects because they are not being physically judged or scrutinized.[63] As with other types of uses, artists use crowdsourcing systems to generate and collect data. The crowd also can be used to provide inspiration and to collect financial support for an artist's work.[64]
In navigation systems, INRIX used crowdsourced data from 100 million drivers, collecting their driving times to provide better GPS routing and real-time traffic updates.[65]
In healthcare
The use of crowdsourcing in medical and health research is increasing systematically. The process involves outsourcing tasks to, or gathering input from, a large and diverse group of people, often through digital platforms, to contribute to medical research, diagnostics, data analysis, promotion, and various healthcare-related initiatives. This approach offers a useful community-based method of improving medical services.
From funding individual medical cases and innovative devices to supporting research, community health initiatives, and crisis responses, crowdsourcing proves its versatile impact in addressing diverse healthcare challenges.[66]
In 2011, UNAIDS initiated the participatory online policy project to better engage young people in decision-making processes related to AIDS.[67] The project acquired data from 3,497 participants across seventy-nine countries through online and offline forums. The outcomes generally emphasized the importance of youth perspectives in shaping strategies to effectively address AIDS, providing valuable insight for future community-empowerment initiatives.
Another approach is sourcing results of clinical algorithms from the collective input of participants.[68] Researchers from SPIE developed a crowdsourcing tool to train individuals, especially middle and high school students in South Korea, to diagnose malaria-infected red blood cells. Using a statistical framework, the platform combined expert diagnoses with those from minimally trained individuals, creating a gold-standard library. The objective was to swiftly teach people to achieve high diagnostic accuracy without any prior training.
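The label-fusion step described above can be illustrated with a minimal sketch. The Python example below (hypothetical image IDs, annotator names, and reliability weights; not the SPIE researchers' actual framework) shows one simple way to combine expert and minimally trained diagnoses by weighted majority vote.

```python
from collections import defaultdict

def fuse_labels(annotations, weights, default_weight=1.0):
    """Weighted majority vote over crowd labels for each cell image.

    annotations: list of (image_id, annotator_id, label) tuples.
    weights:     dict mapping annotator_id to a reliability weight
                 (e.g. experts weighted more heavily than students).
    Returns a dict mapping image_id to the winning label.
    """
    tallies = defaultdict(lambda: defaultdict(float))
    for image_id, annotator_id, label in annotations:
        tallies[image_id][label] += weights.get(annotator_id, default_weight)
    return {img: max(votes, key=votes.get) for img, votes in tallies.items()}

# Hypothetical example: one expert outweighs two student annotators.
annotations = [
    ("cell_001", "expert_1", "infected"),
    ("cell_001", "student_1", "healthy"),
    ("cell_001", "student_2", "healthy"),
    ("cell_002", "student_1", "infected"),
    ("cell_002", "student_2", "infected"),
]
weights = {"expert_1": 3.0, "student_1": 1.0, "student_2": 1.0}
print(fuse_labels(annotations, weights))
# -> {'cell_001': 'infected', 'cell_002': 'infected'}
```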
The journal Cancer Medicine published a review of studies on crowdsourcing in cancer research that appeared between January 2005 and June 2016, drawing on PubMed, CINAHL, Scopus, PsychINFO, and Embase.[69] The reviewed studies strongly advocate continued efforts to refine and expand crowdsourcing applications in academic scholarship. The analysis highlighted the importance of interdisciplinary collaboration and the widespread dissemination of knowledge, and the review underscored the need to fully harness crowdsourcing's potential to address challenges within cancer research.[69]
In science
[edit]Astronomy
Crowdsourcing in astronomy was used in the early 19th century by astronomer Denison Olmsted. After being awakened late one November night by a meteor shower, Olmsted noticed a pattern in the shooting stars and wrote a brief report on the shower in the local newspaper. "As the cause of 'Falling Stars' is not understood by meteorologists, it is desirable to collect all the facts attending this phenomenon, stated with as much precision as possible", Olmsted wrote to readers, in a report subsequently picked up and reprinted by newspapers nationwide. Responses poured in from many states, along with scientists' observations sent to the American Journal of Science and Arts.[70] These responses helped him make a series of scientific breakthroughs, including the observations that meteor showers are seen nationwide and that meteors fall from space under the influence of gravity. The responses also allowed him to approximate the velocity of the meteors.[71]
A more recent version of crowdsourcing in astronomy is NASA's photo organizing project,[72] which asked internet users to browse photos taken from space and try to identify the location the picture is documenting.[73]
Behavioral science
In the field of behavioral science, crowdsourcing is often used to gather data and insights on human behavior and decision making. Researchers may create online surveys or experiments that are completed by a large number of participants, allowing them to collect a diverse and potentially large amount of data.[59] Crowdsourcing can also be used to gather real-time data on behavior, such as through the use of mobile apps that track and record users' activities and decision making.[74] The use of crowdsourcing in behavioral science has the potential to greatly increase the scope and efficiency of research, and has been used in studies on topics such as psychology research,[75] political attitudes,[76] and social media use.[77]
Energy system research
Energy system models require large and diverse datasets, increasingly so given the trend towards greater temporal and spatial resolution.[78] In response, there have been several initiatives to crowdsource this data. Launched in December 2009, OpenEI is a collaborative website run by the US government that provides open energy data.[79][80] While much of its information is from US government sources, the platform also seeks crowdsourced input from around the world.[81] The semantic wiki and database Enipedia also publishes energy systems data using the concept of crowdsourced open information. Enipedia went live in March 2011.[82][83]: 184–188
Genealogy research
Genealogical research used crowdsourcing techniques long before personal computers were common. Beginning in 1942, the Church of Jesus Christ of Latter-day Saints encouraged its members to submit information about their ancestors. The submitted information was gathered together into a single collection. In 1969, to encourage more participation, the church started the three-generation program. In this program, church members were asked to prepare documented family group record forms for the first three generations. The program was later expanded to encourage members to research at least four generations and became known as the four-generation program.[84]
Institutes that have records of interest to genealogical research have used crowds of volunteers to create catalogs and indices to records.[citation needed]
Genetic genealogy research
Genetic genealogy is a combination of traditional genealogy with genetics. The rise of personal DNA testing after the turn of the century, by companies such as Gene by Gene, FTDNA, GeneTree, 23andMe, and Ancestry.com, has led to public and semi-public databases of DNA testing that use crowdsourcing techniques. Citizen science projects have included support, organization, and dissemination of personal DNA (genetic) testing. Similar to amateur astronomy, citizen scientists encouraged by volunteer organizations like the International Society of Genetic Genealogy[85] have provided valuable information and research to the professional scientific community.[86] The Genographic Project, which began in 2005, is a research project carried out by the National Geographic Society's scientific team to reveal patterns of human migration using crowdsourced DNA testing and reporting of results.[87]
Ornithology
Another early example of crowdsourcing occurred in the field of ornithology. On 25 December 1900, Frank Chapman, an early officer of the National Audubon Society, initiated a tradition dubbed the "Christmas Day Bird Census". The project called on birders from across North America to count and record the number of birds of each species they witnessed on Christmas Day. The project was successful, and the records from 27 different contributors were compiled into one bird census, which tallied around 90 species of birds.[88] This large-scale collection of data constituted an early form of citizen science, the premise upon which crowdsourcing is based. In the 2012 census, more than 70,000 individuals participated across 2,369 bird count circles.[89] Christmas 2014 marked the National Audubon Society's 115th annual Christmas Bird Count.
Seismology
The European-Mediterranean Seismological Centre (EMSC) has developed a seismic detection system by monitoring the traffic peaks on its website and analyzing keywords used on Twitter.[90]
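As a rough illustration of this kind of detection (not EMSC's actual system), the sketch below flags minutes in which website visits jump well above the recent average, the sort of traffic peak that can indicate people searching for information immediately after feeling a tremor. The window size, threshold factor, and traffic trace are illustrative assumptions.

```python
def detect_traffic_peaks(visits_per_minute, window=30, factor=5.0, min_visits=200):
    """Return indices of minutes whose visit count exceeds `factor` times
    the average of the preceding `window` minutes (simple spike detector).

    visits_per_minute: list of ints, one entry per minute.
    """
    peaks = []
    for i in range(window, len(visits_per_minute)):
        baseline = sum(visits_per_minute[i - window:i]) / window
        current = visits_per_minute[i]
        if current >= min_visits and current > factor * max(baseline, 1.0):
            peaks.append(i)
    return peaks

# Hypothetical traffic trace: quiet background, then a sudden surge.
trace = [20] * 60 + [400, 900, 1200] + [30] * 10
print(detect_traffic_peaks(trace))  # -> [60, 61, 62]
```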
In journalism
Crowdsourcing is increasingly used in professional journalism. Journalists are able to organize crowdsourced information by fact checking the information, and then using the information they have gathered in their articles as they see fit.[citation needed] A daily newspaper in Sweden has successfully used crowdsourcing in investigating the home loan interest rates in the country in 2013–2014, which resulted in over 50,000 submissions.[91] A daily newspaper in Finland crowdsourced an investigation into stock short-selling in 2011–2012, and the crowdsourced information led to revelations of a tax evasion system by a Finnish bank. The bank executive was fired and policy changes followed.[92] TalkingPointsMemo in the United States asked its readers to examine 3,000 emails concerning the firing of federal prosecutors in 2008. The British newspaper The Guardian crowdsourced the examination of hundreds of thousands of documents in 2009.[93]
Data donation
Data donation is a crowdsourcing approach to gathering digital data. It is used by researchers and organizations to gain access to data from online platforms, websites, search engines, apps, and devices. Data donation projects usually rely on participants volunteering their authentic digital profile information. Examples include:
- DataSkop developed by Algorithm Watch, a non-profit research organization in Germany, which accessed data on social media algorithms and automated decision-making systems.[94][95]
- Mozilla Rally, from the Mozilla Foundation, is a browser extension for adult participants in the US[96] to provide access to their data for research projects.[97]
- The Australian Search Experience and Ad Observatory projects, set up in 2021 by researchers at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) in Australia, used data donations to analyze how Google personalizes search results and to examine how Facebook's algorithmic advertising model works.[98][99]
- The Citizen Browser Project, developed by The Markup, was designed to measure how disinformation traveled across social media platforms over time.[100]
- The Large Emergency Event Digital Information Repository was an effort to create a repository for images and videos from natural disasters, terrorist attacks, and criminal events.
In social media
Crowdsourcing is used in large-scale media, such as the Community Notes system of the X platform. Crowdsourcing on such platforms is thought to be effective in combating partisan misinformation on social media when certain conditions are met.[101][102] Success may depend on trust in fact-checking sources, the ability to present information that challenges previous beliefs without causing excessive dissonance, and having a sufficiently large and diverse crowd of participants. Effective crowdsourcing interventions must navigate politically polarized environments where trusted sources may be less inclined to provide dissonant opinions. By leveraging network analysis to connect users with neighboring communities outside their ideological echo chambers, crowdsourcing can provide an additional layer of content moderation.
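The network-analysis idea mentioned above can be sketched in a few lines. The graph, the community-detection method, and the two-hop rule below are illustrative assumptions rather than the approach of any particular platform; the sketch simply finds raters who are socially close to a user but sit outside that user's own community.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical interaction graph: nodes are users, edges are replies/mentions.
G = nx.Graph()
G.add_edges_from([
    ("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # community A
    ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # community B
    ("a3", "b1"),                               # a bridge between them
])

communities = [set(c) for c in greedy_modularity_communities(G)]

def neighboring_raters(user, graph, communities):
    """Users from other communities that are at most two hops away,
    i.e. close enough to be trusted but outside the user's own group."""
    own = next(c for c in communities if user in c)
    nearby = set(nx.single_source_shortest_path_length(graph, user, cutoff=2))
    return [u for u in nearby if u not in own]

print(neighboring_raters("a2", G, communities))  # e.g. ['b1'] via the bridge a3
```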
In public policy
Crowdsourcing public policy and the production of public services is also referred to as citizen sourcing. While some scholars regard crowdsourcing for this purpose as a policy tool[103] or a definite means of co-production,[104] others question this and argue that crowdsourcing should be considered only a technological enabler that increases the speed and ease of participation.[105] Crowdsourcing can also play a role in democratization.[106]
The first conference focusing on Crowdsourcing for Politics and Policy took place at Oxford University, under the auspices of the Oxford Internet Institute in 2014. Research has emerged since 2012[107] which focused on the use of crowdsourcing for policy purposes.[108][109] These include experimentally investigating the use of Virtual Labor Markets for policy assessment,[110] and assessing the potential for citizen involvement in process innovation for public administration.[111]
Governments across the world are increasingly using crowdsourcing for knowledge discovery and civic engagement.[citation needed] Iceland crowdsourced its constitutional reform process in 2011, and Finland has crowdsourced several law-reform processes to address its off-road traffic laws. The Finnish government allowed citizens to go on an online forum to discuss problems and possible resolutions regarding some off-road traffic laws.[citation needed] The crowdsourced information and resolutions would then be passed on to legislators to refer to when making decisions, allowing citizens to contribute to public policy in a more direct manner.[112][113] Palo Alto has crowdsourced feedback for its Comprehensive City Plan update in a process started in 2015.[114] The House of Representatives in Brazil has used crowdsourcing in policy reforms.[115]
NASA used crowdsourcing to analyze large sets of images. As part of the Open Government Initiative of the Obama Administration, the General Services Administration collected and amalgamated suggestions for improving federal websites.[115]
For part of the Obama and Trump Administrations, the We the People system collected signatures on petitions, which were entitled to an official response from the White House once a certain number had been reached. Several U.S. federal agencies ran inducement prize contests, including NASA and the Environmental Protection Agency.[116][115]
Language-related data
Crowdsourcing has been used extensively for gathering language-related data.
For dictionary work, crowdsourcing was applied over a hundred years ago by the Oxford English Dictionary editors using paper and postage. It has also been used for collecting examples of proverbs on a specific topic (e.g. religious pluralism) for a printed journal.[117] Crowdsourcing language-related data online has proven very effective and many dictionary compilation projects used crowdsourcing. It is used particularly for specialist topics and languages that are not well documented, such as for the Oromo language.[118] Software programs have been developed for crowdsourced dictionaries, such as WeSay.[119] A slightly different form of crowdsourcing for language data was the online creation of scientific and mathematical terminology for American Sign Language.[120]
In linguistics, crowdsourcing strategies have been applied to estimate word knowledge, vocabulary size, and word origin.[121] Implicit crowdsourcing on social media has also been used to approximate sociolinguistic data efficiently. Reddit conversations in various location-based subreddits were analyzed for the presence of grammatical forms unique to a regional dialect. These were then used to map the extent of the speaker population. The results could roughly approximate large-scale surveys on the subject without engaging in field interviews.[122]
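A minimal sketch of this kind of implicit measurement is shown below. The subreddit names, comments, and the regional form being counted ("y'all") are hypothetical; the point is only to show how the relative frequency of a form across location-based communities could approximate a dialect's extent.

```python
import re
from collections import Counter

# Hypothetical comments scraped from location-based subreddits.
comments_by_subreddit = {
    "r/Atlanta":   ["y'all coming tonight?", "hope y'all enjoyed it", "you guys ok?"],
    "r/Boston":    ["you guys seen the game?", "thanks you guys", "wicked good"],
    "r/Nashville": ["y'all be safe out there", "y'all hear that?"],
}

def form_share(comments, pattern=r"\by'?all\b"):
    """Fraction of comments containing the regional form (here: "y'all")."""
    hits = sum(1 for c in comments if re.search(pattern, c, flags=re.IGNORECASE))
    return hits / len(comments) if comments else 0.0

shares = {sub: form_share(texts) for sub, texts in comments_by_subreddit.items()}
for sub, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{sub}: {share:.0%} of sampled comments use the form")
# Higher shares suggest the subreddit's region lies inside the dialect area.
```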
Mining publicly available social media conversations can be used as a form of implicit crowdsourcing to approximate the geographic extent of speaker dialects.[122] Proverb collection is also being done via crowdsourcing on the Web, most notably for the Pashto language of Afghanistan and Pakistan.[123][124][125] Crowdsourcing has been extensively used to collect high-quality gold standards for creating automatic systems in natural language processing (e.g. named entity recognition, entity linking).[126]
In product design
Organizations often leverage crowdsourcing to gather ideas for new products as well as to refine established products.[43] Lego allows users to work on new product designs while conducting requirements testing. Any user can submit a design for a product, and other users can vote on it. Once a submitted product has received 10,000 votes, it is formally reviewed in stages and, provided no impediments such as legal flaws are identified, goes into production. The creator receives royalties from the net income.[127] Labelling new products as "customer-ideated" through crowdsourcing initiatives, as opposed to not specifying the source of the design, leads to a substantial increase in the actual market performance of the products. Merely highlighting the source of the design to customers, in particular attributing the product to crowdsourcing efforts by user communities, can lead to a significant boost in sales. Consumers perceive "customer-ideated" products as more effective in addressing their needs, leading to a quality inference. The design mode associated with crowdsourced ideas is considered superior at generating promising new products, contributing to the observed increase in market performance.[128]
In business
Crowdsourcing is widely used by businesses to source feedback and suggestions on how to improve their products and services.[43] Homeowners can use Airbnb to list their accommodation or unused rooms. Owners set their own nightly, weekly, and monthly rates for their accommodations. The business, in turn, charges both guests and hosts a fee. Guests usually end up spending between $9 and $15,[129] and they have to pay a booking fee every time they book a room; the host, in turn, pays a service fee on the amount due. The company has 1,500 properties in 34,000 cities in more than 190 countries.[citation needed]
In market research
Crowdsourcing is frequently used in market research as a way to gather insights and opinions from a large number of consumers.[130] Companies may create online surveys or focus groups that are open to the general public, allowing them to gather a diverse range of perspectives on their products or services. This can be especially useful for companies seeking to understand the needs and preferences of a particular market segment or to gather feedback on the effectiveness of their marketing efforts. The use of crowdsourcing in market research allows companies to quickly and efficiently gather a large amount of data and insights that can inform their business decisions.[131]
Other examples
- Geography — Volunteered geographic information (VGI) is geographic information generated through crowdsourcing, as opposed to traditional methods of Professional Geographic Information (PGI).[132] In describing the built environment, VGI has many advantages over PGI, primarily perceived currency,[133] accuracy[134] and authority.[135] OpenStreetMap is an example of a crowdsourced mapping project.[38][37]
- Engineering — Many companies are introducing crowdsourcing to grow their engineering capabilities, to find solutions to unsolved technical challenges, and to meet the need to adopt new technologies such as 3D printing and the Internet of Things (IoT).[citation needed]
- Libraries, museums and archives — Newspaper text correction at the National Library of Australia was an early, influential example of work with text transcriptions for crowdsourcing in cultural heritage institutions.[136] The Steve Museum project provided a prototype for categorizing artworks.[137] Crowdsourcing is used in libraries for OCR correction of digitized texts, for tagging, and for funding, especially in the absence of financial and human resources. Volunteers can contribute explicitly, with conscious effort, or implicitly, without necessarily being aware of it, by turning the text on raw newspaper images into human-corrected digital form.[138]
- Agriculture — Crowdsourced research also applies to the field of agriculture. Crowdsourcing can be used to help farmers and experts identify different types of weeds[139] in the fields and also to provide assistance in removing them.
- Cheating in bridge — Boye Brogeland initiated a crowdsourcing investigation of cheating by top-level bridge players that showed several players as guilty, which led to their suspension.[140]
- Open-source software and crowdsourced software development have been used extensively in the domain of software development.
- Healthcare — Research has emerged that outlined the use of crowdsourcing techniques in the public health domain.[141][142][143] The collective intelligence outcomes from crowdsourcing are being generated in three broad categories of public health care: health promotion,[142] health research,[144] and health maintenance.[145] Crowdsourcing also enables researchers to move from small homogeneous groups of participants to large heterogeneous groups[146] beyond convenience samples such as students or more highly educated people. The SESH group focuses on using crowdsourcing to improve health.
Methods
Internet and digital technologies have massively expanded the opportunities for crowdsourcing. However, the effect of user communication and platform presentation can have a major bearing on the success of an online crowdsourcing project.[19] The crowdsourced problem can range from huge tasks (such as finding alien life or mapping earthquake zones) to very small ones (such as identifying images). Some examples of successful crowdsourcing themes are problems that bug people, things that make people feel good about themselves, projects that tap into niche knowledge of proud experts, and subjects that people find sympathetic.[147]
Crowdsourcing can either take an explicit or an implicit route:
- Explicit crowdsourcing lets users work together to evaluate, share, and build different specific tasks, while implicit crowdsourcing means that users solve a problem as a side effect of something else they are doing. With explicit crowdsourcing, users can evaluate particular items like books or webpages, or share by posting products or items. Users can also build artifacts by providing information and editing other people's work.[citation needed]
- Implicit crowdsourcing can take two forms: standalone and piggyback. Standalone allows people to solve problems as a side effect of the task they are actually doing, whereas piggyback takes users' information from a third-party website to gather information.[148] This is also known as data donation.
In his 2013 book, Crowdsourcing, Daren C. Brabham puts forth a problem-based typology of crowdsourcing approaches:[149]
- Knowledge discovery and management is used for information management problems where an organization mobilizes a crowd to find and assemble information. It is ideal for creating collective resources.
- Distributed human intelligence tasking (HIT) is used for information management problems where an organization has a set of information in hand and mobilizes a crowd to process or analyze the information. It is ideal for processing large data sets that computers cannot easily handle. Amazon Mechanical Turk uses this approach.
- Broadcast search is used for ideation problems where an organization mobilizes a crowd to come up with a solution to a problem that has an objective, provable right answer. It is ideal for scientific problem-solving.
- Peer-vetted creative production is used for ideation problems, where an organization mobilizes a crowd to come up with a solution to a problem which has an answer that is subjective or dependent on public support. It is ideal for design, aesthetic, or policy problems.
Ivo Blohm identifies four types of crowdsourcing platforms: Microtasking, Information Pooling, Broadcast Search, and Open Collaboration. They differ in the diversity and aggregation of contributions that are created. The diversity of information collected can either be homogeneous or heterogeneous. The aggregation of information can either be selective or integrative.[definition needed][150] Some common categories of crowdsourcing that have been used effectively in the commercial world include crowdvoting, crowdsolving, crowdfunding, microwork, creative crowdsourcing, crowdsource workforce management, and inducement prize contests.[151]
In their conceptual review of crowdsourcing, Linus Dahlander, Lars Bo Jeppesen, and Henning Piezunka distinguish four steps in the crowdsourcing process: Define, Broadcast, Attract, and Select.[152]
Crowdvoting
Crowdvoting occurs when a website gathers a large group's opinions and judgments on a certain topic. Some crowdsourcing tools and platforms allow participants to rank each other's contributions, e.g. in answer to the question "What is one thing we can do to make Acme a great company?" One common method for ranking is "like" counting, where the contribution with the most "like" votes ranks first. This method is simple and easy to understand, but it privileges early contributions, which have more time to accumulate votes.[citation needed] In recent years, several crowdsourcing companies have begun to use pairwise comparisons backed by ranking algorithms. Ranking algorithms do not penalize late contributions.[citation needed] They also produce results more quickly. Ranking algorithms have proven to be at least 10 times faster than manual stack ranking.[153] One drawback, however, is that ranking algorithms are more difficult to understand than vote counting.
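One way to implement pairwise-comparison ranking is an Elo-style update, sketched below with hypothetical idea IDs and votes; actual crowdsourcing platforms use a variety of (often proprietary) ranking algorithms, so this is only an illustration of why late entries are not penalized.

```python
from collections import defaultdict

def rank_by_pairwise(comparisons, k=32.0, start=1000.0):
    """Elo-style rating from pairwise crowd votes.

    comparisons: list of (winner_id, loser_id) pairs, each one a single
                 "which of these two contributions is better?" judgment.
    Returns contribution IDs sorted best-first. A late entry can rise
    quickly because its rating depends on whom it beats, not on how many
    "likes" it has had time to accumulate.
    """
    rating = defaultdict(lambda: start)
    for winner, loser in comparisons:
        expected = 1.0 / (1.0 + 10 ** ((rating[loser] - rating[winner]) / 400.0))
        rating[winner] += k * (1.0 - expected)
        rating[loser] -= k * (1.0 - expected)
    return sorted(rating, key=rating.get, reverse=True)

# Hypothetical pairwise judgments collected from participants.
votes = [("idea_B", "idea_A"), ("idea_B", "idea_C"),
         ("idea_A", "idea_C"), ("idea_B", "idea_A")]
print(rank_by_pairwise(votes))  # -> ['idea_B', 'idea_A', 'idea_C']
```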
The Iowa Electronic Market is a prediction market that gathers crowds' views on politics and tries to ensure accuracy by having participants pay money to buy and sell contracts based on political outcomes.[154] Some of the most famous examples have made use of social media channels: Domino's Pizza, Coca-Cola, Heineken, and Sam Adams have crowdsourced a new pizza, bottle design, beer, and song, respectively.[155] A website called Threadless selected the T-shirts it sold by having users provide designs and vote on the ones they liked, with the winning designs then printed and made available for purchase.[18]
The California Report Card (CRC), a program jointly launched in January 2014 by the Center for Information Technology Research in the Interest of Society[156] and Lt. Governor Gavin Newsom, is an example of modern-day crowd voting. Participants access the CRC online and vote on six timely issues. Through principal component analysis, the users are then placed into an online "café" in which they can present their own political opinions and grade the suggestions of other participants. This system aims to effectively involve the greater public in relevant political discussions and highlight the specific topics with which people are most concerned.
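The grouping step can be sketched roughly as follows. The grading scale, the number of participants, and the use of k-means clustering after the PCA projection are illustrative assumptions, since the CRC's exact pipeline is not described here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical grades (1-10) that eight participants gave on six issues.
rng = np.random.default_rng(0)
grades = rng.integers(1, 11, size=(8, 6)).astype(float)

# Project each participant onto the two main axes of disagreement, then
# group nearby participants into small online discussion "cafes".
coords = PCA(n_components=2).fit_transform(grades)
cafes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

for person, cafe in enumerate(cafes):
    print(f"participant {person} -> cafe {cafe}")
```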
Crowdvoting's value in the movie industry was shown when in 2009 a crowd accurately predicted the success or failure of a movie based on its trailer,[157][158] a feat that was replicated in 2013 by Google.[159]
On Reddit, users collectively rate web content, discussions, and comments, as well as questions posed to persons of interest in "AMA" and AskScience online interviews.[cleanup needed]
In 2017, Project Fanchise purchased a team in the Indoor Football League and created the Salt Lake Screaming Eagles, a fan run team. Using a mobile app, the fans voted on the day-to-day operations of the team, the mascot name, signing of players and even offensive play calling during games.[160]
Crowdfunding
Crowdfunding is the process of funding projects by a multitude of people contributing a small amount to attain a certain monetary goal, typically via the Internet.[161] Crowdfunding has been used for both commercial and charitable purposes.[162] The crowdfunding model that has been around the longest is rewards-based crowdfunding, in which people can prepurchase products, buy experiences, or simply donate. While this funding may in some cases go towards helping a business, funders are not allowed to invest and become shareholders via rewards-based crowdfunding.[163]
Individuals, businesses, and entrepreneurs can showcase their businesses and projects by creating a profile, which typically includes a short video introducing their project, a list of rewards per donation, and illustrations through images.[citation needed] Funders make monetary contributions for numerous reasons:
- They connect to the greater purpose of the campaign, such as being a part of an entrepreneurial community and supporting an innovative idea or product.[164]
- They connect to a physical aspect of the campaign like rewards and gains from investment.[164]
- They connect to the creative display of the campaign's presentation.
- They want to see new products before the public.[164]
The dilemma for equity crowdfunding in the US as of 2012 was that the Securities and Exchange Commission was still refining its regulations, with until 1 January 2013 to adjust the rules for these fundraising methods. The regulators were overwhelmed trying to regulate Dodd-Frank and all the other rules and regulations involving public companies and the way they traded. Advocates of regulation claimed that crowdfunding would open the floodgates for fraud, called it the "wild west" of fundraising, and compared it to the 1980s days of penny-stock "cold-call cowboys". The process allowed up to $1 million to be raised without some of the usual regulations being involved. Under the then-current proposal, companies would have exemptions available and be able to raise capital from a larger pool of persons, which could include lower thresholds for investor criteria, whereas the old rules required that the person be an "accredited" investor. Funders are often recruited from social networks, and the funds can be acquired from an equity purchase, loan, donation, or ordering. The amounts collected have become quite high, with requests of over a million dollars for software such as Trampoline Systems, which used crowdfunding to finance the commercialization of its new software.[citation needed]
Inducement prize contests
Web-based idea competitions or inducement prize contests often consist of generic ideas, cash prizes, and an Internet-based platform to facilitate easy idea generation and discussion. One example is IBM's 2006 "Innovation Jam", which was attended by over 140,000 international participants and yielded around 46,000 ideas.[165][166] Another example is the Netflix Prize in 2009, which asked people to come up with a recommendation algorithm more accurate than Netflix's existing one. It had a grand prize of US$1,000,000, which was awarded to a team whose algorithm beat Netflix's own at predicting ratings by 10.06%.[citation needed]
Another example of competition-based crowdsourcing is the 2009 DARPA balloon experiment, in which DARPA placed 10 balloon markers across the United States and challenged teams to compete to be the first to report the location of all the balloons. Collaboration was required to complete the challenge quickly, and, in addition to the competitive motivation of the contest as a whole, the winning team (MIT, in less than nine hours) established its own "collaborapetitive" environment to generate participation in its team.[167] A similar challenge was the Tag Challenge, funded by the US State Department, which required locating and photographing individuals in five cities in the US and Europe within 12 hours based only on a single photograph. The winning team managed to locate three suspects by mobilizing volunteers worldwide using an incentive scheme similar to the one used in the balloon challenge.[168]
Using open innovation platforms is an effective way to crowdsource people's thoughts and ideas for research and development. The company InnoCentive is a crowdsourcing platform for corporate research and development where difficult scientific problems are posted for crowds of solvers to discover the answer and win a cash prize that ranges from $10,000 to $100,000 per challenge.[18] InnoCentive, of Waltham, Massachusetts, and London, England, provides access to millions of scientific and technical experts from around the world. The company claims a success rate of 50% in providing successful solutions to previously unsolved scientific and technical problems. The X Prize Foundation creates and runs incentive competitions offering between $1 million and $30 million for solving challenges. Local Motors is another example of crowdsourcing, and it is a community of 20,000 automotive engineers, designers, and enthusiasts that compete to build off-road rally trucks.[169]
Implicit crowdsourcing
Implicit crowdsourcing is less obvious because users do not necessarily know they are contributing, yet can still be very effective in completing certain tasks.[citation needed] Rather than users actively participating in solving a problem or providing information, implicit crowdsourcing involves users doing another task entirely where a third party gains information for another topic based on the user's actions.[18]
A good example of implicit crowdsourcing is the ESP game, where users find words to describe Google images, which are then used as metadata for the images. Another popular use of implicit crowdsourcing is through reCAPTCHA, which asks people to solve CAPTCHAs to prove they are human, and then provides CAPTCHAs from old books that cannot be deciphered by computers, to digitize them for the web. Like many tasks solved using the Mechanical Turk, CAPTCHAs are simple for humans, but often very difficult for computers.[148]
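The digitization works because many independent users transcribe the same unreadable word, so their answers can be cross-checked. A minimal sketch of that kind of redundancy check is shown below; the agreement thresholds and scanned-word IDs are hypothetical, not reCAPTCHA's actual rules.

```python
from collections import Counter

def transcribe_by_agreement(responses, min_votes=3, min_share=0.6):
    """Accept a crowd transcription for a scanned word once enough users agree.

    responses: dict mapping word_image_id to a list of typed answers.
    Returns accepted transcriptions; ambiguous images stay unresolved.
    """
    accepted = {}
    for image_id, answers in responses.items():
        normalized = [a.strip().lower() for a in answers]
        (best, count), = Counter(normalized).most_common(1)
        if count >= min_votes and count / len(normalized) >= min_share:
            accepted[image_id] = best
    return accepted

# Hypothetical answers typed by different users for two unreadable words.
responses = {
    "scan_017": ["harbour", "harbour", "harbour", "harbor"],
    "scan_018": ["ship", "skip", "slip"],
}
print(transcribe_by_agreement(responses))  # -> {'scan_017': 'harbour'}
```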
Piggyback crowdsourcing can be seen most frequently on websites such as Google that data-mine users' search histories and browsing to discover keywords for ads, spelling corrections, and synonyms. In this way, users are unintentionally helping to modify existing systems, such as Google Ads.[58]
Other types
- Creative crowdsourcing involves sourcing people for creative projects such as graphic design, crowdsourcing architecture, product design,[12] apparel design, movies,[170] writing, company naming,[171] illustration, etc.[172][173] While crowdsourcing competitions have been used for decades in some creative fields such as architecture, creative crowdsourcing has proliferated with the recent development of web-based platforms where clients can solicit a wide variety of creative work at lower cost than by traditional means.[citation needed]
- Crowdshipping (crowd-shipping) is a peer-to-peer shipping service, usually conducted via an online platform or marketplace.[174] There are several methods that have been categorized as crowd-shipping:
- Travelers heading in the direction of the buyer who are willing to bring the package as part of their luggage for a reward.[175]
- Truck drivers whose route lies along the buyer's location and who are willing to take extra items in their truck.[176]
- Community-based platforms that connect international buyers and local forwarders, by allowing buyers to use forwarder's address as purchase destination, after which forwarders ship items further to the buyer.[177]
- Crowdsolving is a collaborative and holistic way of solving a problem using many people, communities, groups, or resources. It is a type of crowdsourcing with a focus on complex and intellectually demanding problems that require considerable effort and on the quality or uniqueness of the contribution.[178]
- Problem–idea chains are a form of idea crowdsourcing and crowdsolving, where individuals are asked to submit ideas to solve problems and then problems that can be solved with those ideas. The aim is to encourage individuals to find practical, well-thought-through solutions to problems.[179]
- Macrowork tasks typically have these characteristics: they can be done independently, they take a fixed amount of time, and they require special skills. Macro-tasks could be part of specialized projects or could be part of a large, visible project where workers pitch in wherever they have the required skills. The key distinguishing factors are that macro-work requires specialized skills and typically takes longer, while microwork requires no specialized skills.
- Microwork is a form of crowdsourcing in which users perform small tasks that computers lack the aptitude for, in exchange for small amounts of money. Amazon's Mechanical Turk has created many different projects for users to participate in, where each task requires very little time and offers a very small payment.[15] When choosing tasks, since only certain users "win", users learn to submit later and to pick less popular tasks to increase the likelihood of having their work chosen.[180] An example of a Mechanical Turk project is when users searched satellite images for a boat to find Jim Gray, a missing computer scientist.[148]
- Mobile crowdsourcing involves activities that take place on smartphones or mobile platforms that are frequently characterized by GPS technology.[181] This allows for real-time data gathering and gives projects greater reach and accessibility. However, mobile crowdsourcing can lead to an urban bias, and can have safety and privacy concerns.[182][183][184]
- Simple projects are those that require a larger amount of time and skill than micro- and macro-work. While an example of macro-work would be writing survey feedback, simple projects instead include activities like writing a basic line of code or programming a database, both of which require a larger time commitment and skill level. These projects are usually not found on sites like Amazon Mechanical Turk and are instead posted on platforms like Upwork that call for specific expertise.[185]
- Complex projects generally take the most time, have higher stakes, and call for people with very specific skills. These are generally "one-off" projects that are difficult to accomplish and can include projects such as designing a new product that a company hopes to patent. Such projects are considered to be complex because design is a meticulous process that requires a large amount of time to perfect, and people completing the project must have specialized training in design to effectively complete the project. These projects usually pay the highest, yet are rarely offered.[186]
- Crowdsourcing-based optimization refers to a class of methods that use crowdsourcing to enable a group of workers to collaboratively collect data and solve optimization problems related to that data. Because the workers are heterogeneous, the data they collect varies, and their understanding of the optimization problems also differs, which poses challenges for collaboratively solving a global optimization problem. Representative methods include CrowdEC, a mechanism that dispatches optimization tasks to a group of workers who collaborate to perform evolutionary computation (EC) in a distributed manner (a simplified sketch follows below).[187]
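The sketch below is a toy illustration of distributed evolutionary computation in this spirit: each generation, several simulated workers score the candidate solutions with their own (slightly different) fitness functions, and the averaged scores drive selection. It is an assumption-laden simplification, not the CrowdEC mechanism itself.

```python
import random

def evolve_with_workers(worker_fitness_fns, dim=5, pop_size=12, generations=80):
    """Toy distributed evolutionary computation (EC).

    Each generation, every candidate solution is scored by all simulated
    workers, whose possibly noisy, heterogeneous fitness functions are
    averaged into one shared score (lower is better). Truncation selection
    plus Gaussian mutation keeps the sketch short.
    """
    def shared_score(x):
        return sum(f(x) for f in worker_fitness_fns) / len(worker_fitness_fns)

    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=shared_score)                      # distributed evaluation
        parents = pop[: pop_size // 2]                  # keep the better half
        children = [[g + random.gauss(0, 0.3) for g in p] for p in parents]
        pop = parents + children
    return min(pop, key=shared_score)

# Two hypothetical workers see slightly different versions of the same problem.
workers = [
    lambda x: sum(g * g for g in x),                         # exact view
    lambda x: sum(g * g for g in x) + random.gauss(0, 0.1),  # noisy view
]
best = evolve_with_workers(workers)
print([round(g, 2) for g in best])  # genes drift toward 0, the shared optimum
```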
Demographics of the crowd
The crowd is an umbrella term for the people who contribute to crowdsourcing efforts. Though it is sometimes difficult to gather data about the demographics of the crowd as a whole, several studies have examined various specific online platforms. Amazon Mechanical Turk has received a great deal of attention in particular. A study in 2008 by Ipeirotis found that users at that time were primarily American, young, female, and well-educated, with 40% earning more than $40,000 per year. In November 2009, Ross found a very different Mechanical Turk population, 36% of which was Indian. Two-thirds of Indian workers were male, and 66% had at least a bachelor's degree. Two-thirds had annual incomes less than $10,000, with 27% sometimes or always depending on income from Mechanical Turk to make ends meet.[188] More recent studies have found that U.S. Mechanical Turk workers are approximately 58% female, and nearly 67% of workers are in their 20s and 30s.[59][189][190][191] Close to 80% are White, and 9% are Black. MTurk workers are less likely to be married or have children as compared to the general population. In the US population over 18, 45% are unmarried, while the proportion of unmarried workers on MTurk is around 57%. Additionally, about 55% of MTurk workers do not have any children, which is significantly higher than in the general population. Approximately 68% of U.S. workers are employed, compared to 60% in the general population. MTurk workers in the U.S. are also more likely to have a four-year college degree (35%) compared to the general population (27%). Politics within the U.S. sample of MTurk are skewed liberal, with 46% Democrats, 28% Republicans, and 26% "other". MTurk workers are also less religious than the U.S. population, with 41% religious, 20% spiritual, 21% agnostic, and 16% atheist.
The demographics of Microworkers.com differ from those of Mechanical Turk in that the US and India together account for only 25% of workers; 197 countries are represented among users, with Indonesia (18%) and Bangladesh (17%) contributing the largest share. However, 28% of employers are from the US.[192]
Another study of the demographics of the crowd at iStockphoto found a crowd that was largely white, middle- to upper-class, and highly educated, worked in so-called "white-collar" jobs, and had a high-speed Internet connection at home.[193] In a 30-day crowdsourcing diary study in Europe, the participants were predominantly highly educated women.[146]
Studies have also found that crowds are not simply collections of amateurs or hobbyists. Rather, crowds are often professionally trained in a discipline relevant to a given crowdsourcing task and sometimes hold advanced degrees and many years of experience in the profession.[193][194][195][196] Claiming that crowds are amateurs, rather than professionals, is both factually untrue and may lead to marginalization of crowd labor rights.[197]
Gregory Saxton et al. studied the role of community users, among other elements, in their content analysis of 103 crowdsourcing organizations. They developed a taxonomy of nine crowdsourcing models (intermediary model, citizen media production, collaborative software development, digital goods sales, product design, peer-to-peer social financing, consumer report model, knowledge base building model, and collaborative science project model) in which to categorize the roles of community users, such as researcher, engineer, programmer, journalist, graphic designer, etc., and the products and services developed.[198]
Motivations
[edit]Contributors
Many researchers suggest that both intrinsic and extrinsic motivations cause people to contribute to crowdsourced tasks and these factors influence different types of contributors.[113][193][194][196][199][200][201][202][203] For example, people employed in a full-time position rate human capital advancement as less important than part-time workers do, while women rate social contact as more important than men do.[200]
Intrinsic motivations are broken down into two categories: enjoyment-based and community-based motivations. Enjoyment-based motivations refer to motivations related to the fun and enjoyment contributors experience through their participation. These motivations include skill variety, task identity, task autonomy, direct feedback from the job, and engaging in the work as a pastime.[citation needed] Community-based motivations refer to motivations related to community participation, and include community identification and social contact. In crowdsourced journalism, the motivation factors are intrinsic: the crowd is driven by the possibility to make a social impact, contribute to social change, and help their peers.[199]
Extrinsic motivations are broken down into three categories: immediate payoffs, delayed payoffs, and social motivations. Immediate payoffs, through monetary payment, are the compensations received immediately by those who complete tasks. Delayed payoffs are benefits that can be used to generate future advantages, such as training skills and being noticed by potential employers. Social motivations are the rewards of behaving pro-socially,[204] such as the altruistic motivations of online volunteers. Chandler and Kapelner found that US users of Amazon Mechanical Turk were more likely to complete a task when told they were going to help researchers identify tumor cells than when they were not told the purpose of their task. However, among those who completed the task, the quality of output did not depend on the framing.[205]
Motivation in crowdsourcing is often a mix of intrinsic and extrinsic factors.[206] In a crowdsourced law-making project, the crowd was motivated by both intrinsic and extrinsic factors. Intrinsic motivations included fulfilling civic duty, affecting the law for sociotropic reasons, and deliberating with and learning from peers. Extrinsic motivations included changing the law for financial gain or other benefits. Participation in crowdsourced policy-making was an act of grassroots advocacy, whether to pursue one's own interest or more altruistic goals, such as protecting nature.[113] Participants in online research studies report their motivation as both intrinsic enjoyment and monetary gain.[207][208][190]
Another form of social motivation is prestige or status. The International Children's Digital Library recruited volunteers to translate and review books. Because all translators receive public acknowledgment for their contributions, Kaufman and Schulz cite this as a reputation-based strategy to motivate individuals who want to be associated with institutions that have prestige. The Mechanical Turk uses reputation as a motivator in a different sense, as a form of quality control. Crowdworkers who frequently complete tasks in ways judged to be inadequate can be denied access to future tasks, whereas workers who pay close attention may be rewarded by gaining access to higher-paying tasks or being on an "Approved List" of workers. This system may incentivize higher-quality work.[209] However, this system only works when requesters reject bad work, which many do not.[210]
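As a rough illustration of the reputation-based gating described above (an illustration only: the thresholds, task tiers, and field names below are assumptions for the example, not Amazon's actual implementation), access to work might be conditioned on an approval rate like this:

```python
# Hypothetical sketch of reputation-gated task access. The 0.98 / 0.90 thresholds
# and the "higher_paying" tier are assumptions, not Mechanical Turk's real rules.
from dataclasses import dataclass

@dataclass
class Worker:
    worker_id: str
    approved: int   # tasks accepted by requesters
    rejected: int   # tasks rejected by requesters

    @property
    def approval_rate(self) -> float:
        total = self.approved + self.rejected
        return self.approved / total if total else 0.0

def eligible_tasks(worker: Worker) -> list[str]:
    """Grant access to better-paid work only above an assumed reputation bar."""
    if worker.approval_rate >= 0.98 and worker.approved >= 1000:
        return ["standard", "higher_paying"]   # e.g. an "approved list" of workers
    if worker.approval_rate >= 0.90:
        return ["standard"]
    return []                                  # effectively denied future tasks

print(eligible_tasks(Worker("w1", approved=1200, rejected=10)))
# ['standard', 'higher_paying']
```

The point of the sketch is only that repeated rejections lower the computed reputation and thereby shut a worker out of future or better-paid tasks, which is the incentive mechanism the paragraph describes.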
Despite the potential global reach of IT applications online, recent research illustrates that differences in location[which?] affect participation outcomes in IT-mediated crowds.[211]
Limitations and controversies
[edit]While there is a great deal of anecdotal evidence illustrating the potential of crowdsourcing and the benefits organizations have derived from it, there is also scientific evidence that crowdsourcing initiatives often fail.[212] Several major topics cover the limitations of and controversies surrounding crowdsourcing:
- Failure to attract contributions
- Impact of crowdsourcing on product quality
- Entrepreneurs contribute less capital themselves
- Increased number of funded ideas
- The value and impact of the work received from the crowd
- The ethical implications of low wages paid to workers
- Trustworthiness and informed decision making
Failure to attract contributions
[edit]Crowdsourcing initiatives often fail to attract sufficient or beneficial contributions. The vast majority of crowdsourcing initiatives attract hardly any contributions; an analysis of thousands of organizations' crowdsourcing initiatives shows that only initiatives in the top decile (at or above the 90th percentile) attract more than one contribution a month.[203] While crowdsourcing initiatives may be effective in isolation, when faced with competition they may fail to attract sufficient contributions. Nagaraj and Piezunka (2024) illustrate that OpenStreetMap struggled to attract contributions once Google Maps entered a country.
Impact of crowdsourcing on product quality
[edit]Crowdsourcing allows anyone to participate, which admits many unqualified participants and results in large quantities of unusable contributions.[213] Companies, or additional crowdworkers, then have to sort through the low-quality contributions. The task of sorting through crowdworkers' contributions, along with the necessary job of managing the crowd, requires companies to hire actual employees, thereby increasing management overhead.[214] Results are also susceptible to targeted, malicious work efforts. Since crowdworkers completing microtasks are paid per task, a financial incentive often causes workers to complete tasks quickly rather than well.[59] Verifying responses is time-consuming, so employers often depend on having multiple workers complete the same task to correct errors. However, having each task completed multiple times increases time and monetary costs.[215] Some companies, like CloudResearch, control data quality by repeatedly vetting crowdworkers to ensure they are paying attention and providing high-quality work.[210]
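To make the redundancy strategy concrete, the sketch below aggregates repeated answers to the same task by plurality vote. It is a minimal illustration, assuming simple categorical labels; plurality voting is one common aggregation rule, not the only one used in practice.

```python
# Minimal sketch: aggregate redundant crowd labels by plurality (majority) vote.
# Real pipelines typically add tie-breaking, worker weighting, and attention checks.
from collections import Counter

def majority_vote(labels_per_task: dict[str, list[str]]) -> dict[str, str]:
    """Return the most frequently submitted label for each task."""
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in labels_per_task.items()}

responses = {
    "image_17": ["cat", "cat", "dog"],   # three workers paid for the same task
    "image_42": ["dog", "dog", "dog"],
}
print(majority_vote(responses))  # {'image_17': 'cat', 'image_42': 'dog'}
```

The cost trade-off in the paragraph follows directly: every extra vote per task improves error correction but multiplies the per-task payment.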
Crowdsourcing quality is also affected by task design. Lukyanenko et al.[216] argue that the prevailing practice of modeling crowdsourcing data-collection tasks in terms of fixed classes (options) unnecessarily restricts quality. Their results demonstrate that information accuracy depends on the classes used to model domains, with participants providing more accurate information when classifying phenomena at a more general level (which is typically less useful to sponsor organizations, and hence less common).[clarification needed] Further, greater overall accuracy is expected when participants can provide free-form data compared with tasks in which they select from constrained choices. In behavioral science research, it is often recommended to include open-ended responses, in addition to other forms of attention checks, to assess data quality.[217][218]
Just as limiting, the crowd often lacks the skills or expertise needed to successfully accomplish the desired task. While this scenario does not affect "simple" tasks such as image labeling, it is particularly problematic for more complex tasks, such as engineering design or product validation. A comparison between the evaluation of business models by experts and by an anonymous online crowd showed that an anonymous online crowd cannot evaluate business models to the same level as experts.[219] In these cases, it may be difficult or even impossible to find qualified people in the crowd, as their responses represent only a small fraction of the workers compared to the consistent but incorrect responses of other crowd members.[220] However, if the task is "intermediate" in its difficulty, estimating crowdworkers' skills and intentions and leveraging them for inferring true responses works well,[221] albeit with an additional computation cost.[citation needed]
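One family of approaches alluded to above weights each worker's responses by an estimate of that worker's accuracy. The sketch below shows a simple accuracy-weighted vote; it is a simplification, assuming the accuracies are already known, whereas the cited inference methods estimate them jointly with the true answers.

```python
# Simplified sketch of skill-weighted label aggregation: each worker's vote counts
# in proportion to an (assumed known) accuracy estimate. Real estimators infer
# these accuracies from the data rather than taking them as given.
from collections import defaultdict

def weighted_vote(votes: list[tuple[str, str]], accuracy: dict[str, float]) -> str:
    """votes: (worker_id, label) pairs for one task; accuracy: assumed worker skill."""
    scores: dict[str, float] = defaultdict(float)
    for worker, label in votes:
        scores[label] += accuracy.get(worker, 0.5)  # unknown workers count as chance
    return max(scores, key=scores.get)

task_votes = [("w1", "valid"), ("w2", "invalid"), ("w3", "valid")]
skill = {"w1": 0.95, "w2": 0.60, "w3": 0.55}
print(weighted_vote(task_votes, skill))  # 'valid'
```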
Crowdworkers are a nonrandom sample of the population. Many researchers use crowdsourcing to quickly and cheaply conduct studies with larger sample sizes than would otherwise be achievable. However, due to limited access to the Internet, participation in less developed countries is relatively low. Participation in highly developed countries is similarly low, largely because the low pay is not a strong motivation for most users in those countries. These factors bias the population pool towards users in countries with a medium Human Development Index.[222] Participants in these countries sometimes masquerade as U.S. participants to gain access to certain tasks. This led to the "bot scare" on Amazon Mechanical Turk in 2018, when researchers thought bots were completing research surveys because of the lower quality of responses originating from medium-developed countries.[218][223]
The likelihood that a crowdsourced project will fail due to lack of monetary motivation or too few participants increases over the course of the project. Tasks that are not completed quickly may be forgotten, buried by filters and search procedures; this results in a long-tail power-law distribution of completion times (a generic form is sketched after this paragraph).[224] Additionally, low-paying research studies online have higher rates of attrition, with participants not completing the study once started.[60] Even when tasks are completed, crowdsourcing does not always produce quality results. When Facebook began its localization program in 2008, it encountered some criticism for the low quality of its crowdsourced translations.[225] One of the problems of crowdsourced products is the lack of interaction between the crowd and the client. Usually little information is known about the final product, and workers rarely interact with the final client in the process. This can decrease product quality, as client interaction is considered a vital part of the design process.[226]
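For readers unfamiliar with the term, a long-tail power-law distribution of completion times T can be written generically as below; the exponent and cutoff are left symbolic, since the cited work is not quoted here with specific values.

```latex
% Generic Pareto-type (power-law) tail for task completion time T:
% the probability that a task is still unfinished after time t decays polynomially,
% so a small share of tasks takes a disproportionately long time to complete.
P(T > t) \propto t^{-\alpha}, \qquad t \ge t_{\min},\; \alpha > 0
```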
An additional cause of decreased product quality in crowdsourcing is the lack of collaboration tools. In a typical workplace, coworkers are organized in such a way that they can work together and build upon each other's knowledge and ideas. Furthermore, the company often provides employees with the necessary information, procedures, and tools to fulfill their responsibilities. In crowdsourcing, however, crowdworkers are left to depend on their own knowledge and means to complete tasks.[214]
A crowdsourced project is often expected to be unbiased because it incorporates a large population of participants with diverse backgrounds. However, most crowdsourcing work is done by people who are paid or who directly benefit from the outcome (e.g., much of the open-source work on Linux). In many other cases, the end product is the outcome of a single person's endeavor: one person creates the majority of the product, while the crowd participates only in minor details.[227]
Entrepreneurs contribute less capital themselves
[edit]To turn an idea into reality, the first component needed is capital. Depending on the scope and complexity of the crowdsourced project, the amount of necessary capital can range from a few thousand dollars to hundreds of thousands, if not more. The capital-raising process can take from days to months depending on different variables, including the entrepreneur's network and the amount of initial self-generated capital.[citation needed]
The crowdsourcing process allows entrepreneurs to access a wide range of investors who can take different stakes in the project.[228] As a result, crowdsourcing simplifies the capital-raising process and allows entrepreneurs to spend more time on the project itself and on reaching milestones rather than on getting it started. Overall, the simplified access to capital can save time in starting projects and potentially increase their efficiency.[citation needed]
Others argue that easier access to capital through a large number of smaller investors can hurt the project and its creators. With a simplified capital-raising process involving more investors with smaller stakes, investors are more risk-seeking because they can take on an investment size with which they are comfortable.[228] Because entrepreneurs no longer depend on a single investor for the survival of their project, they lose the experience of convincing investors who are wary of potential risks. Instead of being forced to assess risks and convince large institutional investors of why their project can be successful, entrepreneurs can simply see wary investors replaced by others who are willing to take on the risk.
Some translation companies and translation-tool consumers use crowdsourcing as a pretext for drastically cutting costs instead of hiring professional translators. This practice has been systematically denounced by IAPTI and other translator organizations.[229]
Increased number of funded ideas
[edit]The raw number of ideas that get funded, and the quality of those ideas, are a significant source of controversy around crowdsourcing.
Proponents argue that crowdsourcing is beneficial because it allows the formation of startups with niche ideas that would not survive venture capital or angel funding, which are oftentimes the primary investors in startups. Many ideas are scrapped in their infancy due to insufficient support and lack of capital, but crowdsourcing allows these ideas to be started if an entrepreneur can find a community to take interest in the project.[230]
Crowdsourcing allows those who would benefit from the project to fund it and become a part of it, which is one way for small niche ideas to get started.[231] However, as the number of projects grows, the number of failures also increases. Crowdsourcing assists the development of niche and high-risk projects due to a perceived need from a select few who seek the product. With high risk and small target markets, the pool of crowdsourced projects faces a greater possible loss of capital, lower returns, and lower levels of success.[232]
Labor-related concerns
[edit]Because crowdworkers are considered independent contractors rather than employees, they are not guaranteed minimum wage. In practice, workers using Amazon Mechanical Turk generally earn less than minimum wage. In 2009, it was reported that United States Turk users earned an average of $2.30 per hour for tasks, while users in India earned an average of $1.58 per hour, which is below minimum wage in the United States (but not in India).[188][233] In 2018, a survey of 2,676 Amazon Mechanical Turk workers doing 3.8 million tasks found that the median hourly wage was approximately $2 per hour, and only 4% of workers earned more than the federal minimum wage of $7.25 per hour.[234] Some researchers who have considered using Mechanical Turk to get participants for research studies have argued that the wage conditions might be unethical.[60][235] However, according to other research, workers on Amazon Mechanical Turk do not feel they are exploited and are ready to participate in crowdsourcing activities in the future.[236] A more recent study using stratified random sampling to access a representative sample of Mechanical Turk workers found that the U.S. MTurk population is financially similar to the general population.[190] Workers tend to participate in tasks as a form of paid leisure and to supplement their primary income, and only 7% view it as a full-time job. Overall, workers rated MTurk as less stressful than other jobs. Workers also earn more than previously reported, about $6.50 per hour. They see MTurk as part of the solution to their financial situation and report rare upsetting experiences. They also perceive requesters on MTurk as fairer and more honest than employers outside of the platform.[190]
When Facebook began its localization program in 2008, it received criticism for using free labor in crowdsourcing the translation of site guidelines.[225]
Typically, no written contracts, nondisclosure agreements, or employee agreements are made with crowdworkers. For users of the Amazon Mechanical Turk, this means that employers decide whether users' work is acceptable and reserve the right to withhold pay if it does not meet their standards.[237] Critics say that crowdsourcing arrangements exploit individuals in the crowd, and a call has been made for crowds to organize for their labor rights.[238][197][239]
Collaboration between crowd members can also be difficult or even discouraged, especially in the context of competitive crowdsourcing. The crowdsourcing site InnoCentive allows organizations to solicit solutions to scientific and technological problems; only 10.6% of respondents reported working in a team on their submission.[194] Amazon Mechanical Turk workers collaborated with academics to create a platform, WeAreDynamo.org, that allowed them to organize and run campaigns to improve their work situation, but the site is no longer running.[240] Another platform run by Amazon Mechanical Turk workers and academics, Turkopticon, continues to operate and provides worker reviews of Amazon Mechanical Turk employers.[241]
America Online settled the case Hallissey et al. v. America Online, Inc. for $15 million in 2009, after unpaid moderators sued to be paid the minimum wage as employees under the U.S. Fair Labor Standards Act.
Crowdsourcing has also been increasingly used in artificial intelligence training, where large datasets of human-labeled images or text are collected to improve machine learning models.[242]
Other concerns
[edit]Besides insufficient compensation and other labor-related disputes, there have also been concerns regarding privacy violations, the hiring of vulnerable groups, breaches of anonymity, psychological damage, the encouragement of addictive behaviors, and more.[243] Many, but not all, of the issues related to crowdworkers overlap with concerns related to content moderators.
See also
[edit]- Chronolog – Citizen science environmental monitoring platform
- Citizen science – Amateur scientific research
- Clickworkers – Citizen science project by NASA
- Collaborative innovation network – Practice that uses internet platforms to promote communication within virtual teams
- Collaborative mapping – Aggregation of web mapping and user content
- Collective consciousness – Shared beliefs and ideas in society
- Collective intelligence – Group intelligence that emerges from collective efforts
- Collective problem solving – Process of achieving a goal by overcoming obstacles
- Commons-based peer production – Method of producing value
- Crowd computing – Work distributed across Internet to substitute computers
- Crowdcasting – Intersection of broadcasting and crowdsourcing
- Crowdfixing – Crowdsourcing fixing of local public spaces
- Crowdsourcing software development
- Distributed thinking – Computer science technique
- Distributed Proofreaders – Web-based proofreading project
- Flash mob – Form of sudden public performance
- Folksonomy
- Gamification – Using game design elements in non-games
- Government crowdsourcing
- List of crowdsourcing projects
- Models of collaborative tagging
- Microcredit – Small loans to impoverished borrowers
- Participatory democracy – Model of democracy
- Participatory monitoring
- Open knowledge – Practice of sharing knowledge publicly and reusably
- Smart mob – Digital-communication coordinated group
- Social collaboration
- Stone Soup – European folk story
- Teamwork – Collaborative effort of a team to achieve a common goal
- Truecaller – Swedish mobile phone application
- Virtual collective consciousness
- Virtual volunteering – Online volunteering
- Wisdom of the crowd – Collective perception of a group of people
- Wiki survey – Survey method for crowdsourcing opinions
- Crowdsource (app) – Crowdsourcing platform developed by Google
References
[edit]- ^ Schenk, Eric; Guittard, Claude (1 January 2009). Crowdsourcing What can be Outsourced to the Crowd and Why. Center for Direct Scientific Communication. Retrieved 1 October 2018 – via HAL.
- ^ Hirth, Matthias; Hoßfeld, Tobias; Tran-Gia, Phuoc (2011). "Anatomy of a Crowdsourcing Platform – Using the Example of Microworkers.com" (PDF). 2011 Fifth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing. pp. 322–329. doi:10.1109/IMIS.2011.89. ISBN 978-1-61284-733-7. S2CID 12955095. Archived from the original (PDF) on 22 November 2015. Retrieved 5 September 2015.
- ^ Estellés-Arolas, Enrique; González-Ladrón-de-Guevara, Fernando (2012), "Towards an Integrated Crowdsourcing Definition" (PDF), Journal of Information Science, 38 (2): 189–200, doi:10.1177/0165551512437638, hdl:10251/56904, S2CID 18535678, archived from the original (PDF) on 19 August 2019, retrieved 16 March 2012
- ^ Brabham, D. C. (2013). Crowdsourcing. Cambridge, Massachusetts; London, England: The MIT Press.
- ^ Brabham, D. C. (2008). "Crowdsourcing as a Model for Problem Solving an Introduction and Cases". Convergence: The International Journal of Research into New Media Technologies. 14 (1): 75–90. CiteSeerX 10.1.1.175.1623. doi:10.1177/1354856507084420. S2CID 145310730.
- ^ Prpić, J., & Shukla, P. (2016). Crowd Science: Measurements, Models, and Methods. In Proceedings of the 49th Annual Hawaii International Conference on System Sciences, Kauai, Hawaii: IEEE Computer Society. arXiv:1702.04221
- ^ Buettner, Ricardo (2015). A Systematic Literature Review of Crowdsourcing Research from a Human Resource Management Perspective. 48th Annual Hawaii International Conference on System Sciences. Kauai, Hawaii: IEEE. pp. 4609–4618. doi:10.13140/2.1.2061.1845. ISBN 978-1-4799-7367-5.
- ^ a b Prpić, John; Taeihagh, Araz; Melton, James (September 2015). "The Fundamentals of Policy Crowdsourcing". Policy & Internet. 7 (3): 340–361. arXiv:1802.04143. doi:10.1002/poi3.102. S2CID 3626608.
- ^ Afuah, A.; Tucci, C. L. (2012). "Crowdsourcing as a Solution to Distant Search" (PDF). Academy of Management Review. 37 (3): 355–375. doi:10.5465/amr.2010.0146.
- ^ de Vreede, T., Nguyen, C., de Vreede, G. J., Boughzala, I., Oh, O., & Reiter-Palmon, R. (2013). A Theoretical Model of User Engagement in Crowdsourcing. In Collaboration and Technology (pp. 94–109). Springer Berlin Heidelberg
- ^ Sarin, Supheakmungkol; Pipatsrisawat, Knot; Pham, Khiêm; Batra, Anurag; Valente, Luis (2019). "Crowdsource by Google: A Platform for Collecting Inclusive and Representative Machine Learning Data" (PDF). AAAI Hcomp 2019.
- ^ a b Liu, Wei; Moultrie, James; Ye, Songhe (4 May 2019). "The Customer-Dominated Innovation Process: Involving Customers as Designers and Decision-Makers in Developing New Product". The Design Journal. 22 (3): 299–324. doi:10.1080/14606925.2019.1592324. S2CID 145931864.
- ^ Schlagwein, Daniel; Bjørn-Andersen, Niels (2014), "Organizational Learning with Crowdsourcing: The Revelatory Case of Lego" (PDF), Journal of the Association for Information Systems, 15 (11): 754–778, doi:10.17705/1jais.00380, S2CID 14811856
- ^ Taeihagh, Araz (19 June 2017). "Crowdsourcing, Sharing Economies, and Development". Journal of Developing Societies. 33 (2): 0169796X1771007. arXiv:1707.06603. doi:10.1177/0169796x17710072. S2CID 32008949.
- ^ a b c Howe, Jeff (2006). "The Rise of Crowdsourcing". Wired.
- ^ "crowdsourcing (noun)". Oxford English Dictionary. 2023. Retrieved 3 January 2024.
- ^ "crowdsourcing (noun)". Merriam-Webster. 2024. Retrieved 3 January 2024.
- ^ a b c d e Brabham, Daren (2008), "Crowdsourcing as a Model for Problem Solving: An Introduction and Cases" (PDF), Convergence: The International Journal of Research into New Media Technologies, 14 (1): 75–90, CiteSeerX 10.1.1.175.1623, doi:10.1177/1354856507084420, S2CID 145310730, archived from the original (PDF) on 2 August 2012
- ^ a b Guth, Kristen L.; Brabham, Daren C. (4 August 2017). "Finding the diamond in the rough: Exploring communication and platform in crowdsourcing performance". Communication Monographs. 84 (4): 510–533. doi:10.1080/03637751.2017.1359748. S2CID 54045924.
- ^ a b Wei, Zhudeng; Fang, Xiuqi; Yin, Jun (October 2018). "Comparison of climatic impacts transmission from temperature to grain harvests and economies between the Han (206 BC–AD 220) and Tang (AD 618–907) dynasties". The Holocene. 28 (10): 1606. Bibcode:2018Holoc..28.1598W. doi:10.1177/0959683618782592. S2CID 134577720.
- ^ O'Connor, J. J.; Robertson, E. F. (February 1997). "Longitude and the Académie Royale". University of St. Andrews. Retrieved 20 January 2024.
- ^ a b c d e f g h "A Brief History of Crowdsourcing [Infographic]". Crowdsourcing.org. 18 March 2012. Archived from the original on 3 July 2015. Retrieved 2 July 2015.
- ^ Cattani, Gino; Ferriani, Simone; Lanza, Andrea (December 2017). "Deconstructing the Outsider Puzzle: The Legitimation Journey of Novelty". Organization Science. 28 (6): 965–992. doi:10.1287/orsc.2017.1161. ISSN 1047-7039.
- ^ Hern, Chester G. (2002). Tracks in the Sea, p. 123 & 246. McGraw Hill. ISBN 0-07-136826-4.
- ^ "Smithsonian Crowdsourcing Since 1849". Smithsonian Institution Archives. 14 April 2011. Retrieved 24 August 2018.
- ^ Clark, Catherine E. (25 April 1970). "'C'était Paris en 1970'". Études Photographiques (31). Retrieved 2 July 2015.
- ^ Axelrod R. (1980), "'Effective choice in the Prisoner's Dilemma'", Journal of Conflict Resolution, 24 (1): 3–25, doi:10.1177/002200278002400101, S2CID 143112198
- ^ "Why our mongrels are a dying breed". Sunday Telegraph. 3 March 2013. p. 21. Retrieved 3 July 2025.
- ^ O'Hara, Monica (29 October 1983). "Paperbacks". Liverpool Echo. p. 20. Retrieved 5 July 2025.
- ^ "SETI@home". University of California. Retrieved 20 January 2024.
- ^ Brabham, Daren C.; Ribisl, Kurt M.; Kirchner, Thomas R.; Bernhardt, Jay M. (1 February 2014). "Crowdsourcing Applications for Public Health". American Journal of Preventive Medicine. 46 (2): 179–187. doi:10.1016/j.amepre.2013.10.016. PMID 24439353. S2CID 205436420.
- ^ "UNV Online Volunteering Service | History". Onlinevolunteering.org. Archived from the original on 2 July 2015. Retrieved 2 July 2015.
- ^ "Wired 14.06: The Rise of Crowdsourcing". Archive.wired.com. 4 January 2009. Retrieved 2 July 2015.
- ^ Lih, Andrew (2009). The Wikipedia revolution: how a bunch of nobodies created the world's greatest encyclopedia (1st ed.). New York: Hyperion. ISBN 978-1-4013-0371-6.
- ^ Lakhani KR, Garvin DA, Lonstein E (January 2010). "TopCoder (A): Developing Software through Crowdsourcing". Harvard Business School Case: 610–032.
- ^ Phadnisi, Shilpa (21 October 2016). "Appirio's TopCoder too is a big catch for Wipro". The Times of India. Retrieved 30 April 2018.
- ^ a b Lardinois, F. (9 August 2014). "For The Love Of Open Mapping Data". Yahoo. Retrieved 20 January 2024.
- ^ a b Nagaraj, Abhishek; Piezunka, Henning (September 2024). "The Divergent Effect of Competition on Platforms: Deterring Recruits, Motivating Converts". Strategy Science. 9 (3): 277–296. doi:10.1287/stsc.2022.0125. ISSN 2333-2050.
- ^ a b "Crowdsourcing Back-Up Timeline Early Stories". Archived from the original on 29 November 2014.[better source needed]
- ^ "Amazon Mechanical Turk". www.mturk.com. Retrieved 25 November 2022.
- ^ Ohanian, A. (5 December 2006). "reddit on June23-05". Flickr. Retrieved 20 January 2024.
- ^ "Waze". Waze Mobile. 2009. Retrieved 20 January 2024.
- ^ a b c Piezunka, Henning; Dahlander, Linus (June 2015). "Distant Search, Narrow Attention: How Crowding Alters Organizations' Filtering of Suggestions in Crowdsourcing". Academy of Management Journal. 58 (3): 856–880. doi:10.5465/amj.2012.0458. ISSN 0001-4273.
- ^ Sengupta, S. (13 August 2013). "Potent Memories From a Divided India". New York Times. Retrieved 20 January 2024.
- ^ Garrigos-Simon, Fernando J.; Gil-Pechuán, Ignacio; Estelles-Miguel, Sofia (2015). Advances in Crowdsourcing. Springer. ISBN 978-3-319-18341-1.
- ^ "Antoine-Jean-Baptiste-Robert Auget, Baron de Montyon". New Advent. Retrieved 25 February 2012.
- ^ "It Was All About Alkali". Chemistry Chronicles. Retrieved 25 February 2012.
- ^ "Nicolas Appert". John Blamire. Retrieved 25 February 2012.
- ^ "9 Examples of Crowdsourcing, Before 'Crowdsourcing' Existed". MemeBurn. 15 September 2011. Retrieved 25 February 2012.
- ^ Pande, Shamni (25 May 2013). "The People Know Best". Business Today. India: Living Media India Limited.
- ^ Noveck, Beth Simone (2009), Wiki Government: How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful, Brookings Institution Press
- ^ Sarasua, Cristina; Simperl, Elena; Noy, Natalya F. (2012), "Crowdsourcing Ontology Alignment with Microtasks" (PDF), Institute AIFB. Karlsruhe Institute of Technology: 2, archived from the original (PDF) on 5 March 2016, retrieved 18 September 2021
- ^ Hollow, Matthew (20 April 2013). "Crowdfunding and Civic Society in Europe: A Profitable Partnership?". Open Citizenship. Retrieved 29 April 2013.
- ^ Federal Transit Administration Public Transportation Participation Pilot Program, U.S. Department of Transportation, archived from the original on 7 January 2009
- ^ Peer-to-Patent Community Patent Review Project, Peer-to-Patent Community Patent Review Project
- ^ Callison-Burch, C.; Dredze, M. (2010), "Creating Speech and Language Data With Amazon's Mechanical Turk" (PDF), Human Language Technologies Conference: 1–12, archived from the original (PDF) on 2 August 2012, retrieved 28 February 2012
- ^ McGraw, I.; Seneff, S. (2011), "Growing a spoken language interface on Amazon Mechanical Turk", Interspeech 2011 (PDF), pp. 3057–3060, doi:10.21437/Interspeech.2011-765
- ^ a b Kittur, A.; Chi, E.H.; Sun, B. (2008), "Crowdsourcing user studies with Mechanical Turk" (PDF), Chi 2008
- ^ a b c d Litman, Leib; Robinson, Jonathan (2020). Conducting Online Research on Amazon Mechanical Turk and Beyond. SAGE Publications. ISBN 978-1-5063-9113-7.
- ^ a b c Mason, W.; Suri, S. (2010), "Conducting Behavioral Research on Amazon's Mechanical Turk", Behavior Research Methods, SSRN 1691163
- ^ Koblin, A. (2009). "The sheep market". Proceedings of the seventh ACM conference on Creativity and cognition. pp. 451–452. doi:10.1145/1640233.1640348. ISBN 978-1-60558-865-0. S2CID 20609292.
- ^ "explodingdog 2015". Explodingdog.com. Retrieved 2 July 2015.
- ^ DeVun, Leah (19 November 2009). "Looking at how crowds produce and present art". Wired News. Archived from the original on 24 October 2012. Retrieved 26 February 2012.
- ^ Linver, D. (2010), Crowdsourcing and the Evolving Relationship between Art and Artist, archived from the original on 14 July 2014, retrieved 28 February 2012
- ^ "Why". INRIX.com. 13 September 2014. Archived from the original on 12 October 2014. Retrieved 2 July 2015.
- ^ Wang, Cheng; Han, Larry; Stein, Gabriella; Day, Suzanne; Bien-Gund, Cedric; Mathews, Allison; Ong, Jason J.; Zhao, Pei-Zhen; Wei, Shu-Fang; Walker, Jennifer; Chou, Roger; Lee, Amy; Chen, Angela; Bayus, Barry; Tucker, Joseph D. (20 January 2020). "Crowdsourcing in health and medical research: a systematic review". Infectious Diseases of Poverty. 9 (1): 8. doi:10.1186/s40249-020-0622-9. ISSN 2049-9957. PMC 6971908. PMID 31959234.
- ^ Hildebrand, Mikaela; Ahumada, Claudia; Watson, Sharon (January 2013). "CrowdOutAIDS: crowdsourcing youth perspectives for action". Reproductive Health Matters. 21 (41): 57–68. doi:10.1016/S0968-8080(13)41687-7. ISSN 0968-8080. PMID 23684188. S2CID 31888826.
- ^ Feng, Steve; Woo, Min-jae; Kim, Hannah; Kim, Eunso; Ki, Sojung; Shao, Lei; Ozcan, Aydogan (11 March 2016). Levitz, David; Ozcan, Aydogan; Erickson, David (eds.). "A game-based crowdsourcing platform for rapidly training middle and high school students to perform biomedical image analysis". Optics and Biophotonics in Low-Resource Settings II. 9699. SPIE: 92–100. Bibcode:2016SPIE.9699E..0TF. doi:10.1117/12.2212310. S2CID 124343732.
- ^ a b Lee, Young Ji; Arida, Janet A.; Donovan, Heidi S. (November 2017). "The application of crowdsourcing approaches to cancer research: a systematic review". Cancer Medicine. 6 (11): 2595–2605. doi:10.1002/cam4.1165. ISSN 2045-7634. PMC 5673951. PMID 28960834.
- ^ Vergano, Dan (30 August 2014). "1833 Meteor Storm Started Citizen Science". National Geographic. StarStruck. Archived from the original on 16 September 2014. Retrieved 18 September 2014.
- ^ Littmann, Mark; Suomela, Todd (June 2014). "Crowdsourcing, the great meteor storm of 1833, and the founding of meteor science". Endeavour. 38 (2): 130–138. doi:10.1016/j.endeavour.2014.03.002. PMID 24917173.
- ^ "Gateway to Astronaut Photography of Earth". NASA.
- ^ McLaughlin, Elliot. "Image Overload: Help us sort it all out, NASA requests". CNN. Retrieved 18 September 2014.
- ^ Liu, Huiying; Xie, Qian Wen; Lou, Vivian W. Q. (1 April 2019). "Everyday social interactions and intra-individual variability in affect: A systematic review and meta-analysis of ecological momentary assessment studies". Motivation and Emotion. 43 (2): 339–353. doi:10.1007/s11031-018-9735-x. S2CID 254827087.
- ^ Luong, Raymond; Lomanowska, Anna M. (2021). "Evaluating Reddit as a Crowdsourcing Platform for Psychology Research Projects". Teaching of Psychology. 49 (4): 329–337. doi:10.1177/00986283211020739. S2CID 236414676.
- ^ Brown, Joshua K.; Hohman, Zachary P. (2022). "Extreme party animals: Effects of political identification and ideological extremity". Journal of Applied Social Psychology. 52 (5): 351–362. doi:10.1111/jasp.12863. S2CID 247077069.
- ^ Vaterlaus, J. Mitchell; Patten, Emily V.; Spruance, Lori A. (26 May 2022). "#Alonetogether:: An Exploratory Study of Social Media Use at the Beginning of the COVID-19 Pandemic". The Journal of Social Media in Society. 11 (1): 27–45.
- ^ Després, Jacques; Hadjsaid, Nouredine; Criqui, Patrick; Noirot, Isabelle (1 February 2015). "Modelling the impacts of variable renewable sources on the power sector: reconsidering the typology of energy modelling tools". Energy. 80: 486–495. Bibcode:2015Ene....80..486D. doi:10.1016/j.energy.2014.12.005.
- ^ "OpenEI — Energy Information, Data, and other Resources". OpenEI. Retrieved 26 September 2016.
- ^ Garvin, Peggy (12 December 2009). "New Gateway: Open Energy Info". SLA Government Information Division. Dayton, Ohio, USA. Retrieved 26 September 2016.[permanent dead link]
- ^ Brodt-Giles, Debbie (2012). WREF 2012: OpenEI — an open energy data and information exchange for international audiences (PDF). Golden, Colorado, USA: National Renewable Energy Laboratory (NREL). Archived from the original (PDF) on 9 October 2016. Retrieved 24 September 2016.
- ^ Davis, Chris; Chmieliauskas, Alfredas; Dijkema, Gerard; Nikolic, Igor. "Enipedia". Delft, The Netherlands: Energy and Industry group, Faculty of Technology, Policy and Management, TU Delft. Archived from the original on 10 June 2014. Retrieved 7 October 2016.
- ^ Davis, Chris (2012). Making sense of open data: from raw data to actionable insight — PhD thesis. Delft, The Netherlands: Delft University of Technology. Retrieved 2 October 2018. Chapter 9 discusses in depth the initial development of Enipedia.
- ^ "What Is the Four-Generation Program?". The Church of Jesus Christ of Latter-day Saints. Retrieved 30 January 2012.
- ^ King, Turi E.; Jobling, Mark A. (2009). "What's in a name? Y chromosomes, surnames and the genetic genealogy revolution". Trends in Genetics. 25 (8): 351–60. doi:10.1016/j.tig.2009.06.003. hdl:2381/8106. PMID 19665817.
The International Society of Genetic Genealogy advocates the use of genetics as a tool for genealogical research, and provides a support network for genetic genealogists. It hosts the ISOGG Y-haplogroup tree, which has the virtue of being regularly updated.
- ^ Mendez, Fernando; et al. (28 February 2013). "An African American Paternal Lineage Adds an Extremely Ancient Root to the Human Y Chromosome Phylogenetic Tree". The American Journal of Human Genetics. 92 (3): 454–459. doi:10.1016/j.ajhg.2013.02.002. PMC 3591855. PMID 23453668.
- ^ Wells, Spencer (2013). "The Genographic Project and the Rise of Citizen Science". Southern California Genealogical Society (SCGS). Archived from the original on 10 July 2013. Retrieved 10 July 2013.
- ^ "History of the Christmas Bird Count | Audubon". Birds.audubon.org. 22 January 2015. Retrieved 2 July 2015.
- ^ "Thank you!". Audubon. 5 October 2017. Archived from the original on 24 August 2014.
- ^ "Home – ISCRAM2015 – University of Agder" (PDF). iscram2015.uia.no. Archived from the original (PDF) on 17 October 2016. Retrieved 14 October 2016.
- ^ Aitamurto, Tanja (2015). "Motivation Factors in Crowdsourced Journalism: Social Impact, Social Change and Peer-Learning". International Journal of Communication. 9: 3523–3543.
- ^ Aitamurto, Tanja (2016). "Crowdsourcing as a Knowledge-Search Method in Digital Journalism: Ruptured Ideals and Blended Responsibility". Digital Journalism. 4 (2): 280–297. doi:10.1080/21670811.2015.1034807. S2CID 156243124.
- ^ Aitamurto, Tanja (2013). "Balancing between open and closed: co-creation in magazine journalism". Digital Journalism. 1 (2): 229–251. doi:10.1080/21670811.2012.750150. S2CID 62882093.
- ^ "Algorithm Watch". Algorithm Watch. 2022. Retrieved 18 May 2022.
- ^ "Overview in English". DataSkop. 2022. Retrieved 18 May 2022.
- ^ "FAQs". Mozilla Rally. Archived from the original on 14 March 2023. Retrieved 14 March 2023.
Mozilla Rally is currently available to US residents who are age 19 and older
- ^ "It's your data. Use it for change". Mozilla Rally. Retrieved 14 March 2023.
- ^ Angus, Daniel (16 February 2022). "A data economy: the case for doing and knowing more about algorithms". Crikey. Retrieved 24 March 2022.
- ^ Burgess, Jean; Angus, Daniel; Carah, Nicholas; Andrejevic, Mark; Hawker, Kiah; Lewis, Kelly; Obeid, Abdul; Smith, Adam; Tan, Jane; Fordyce, Robbie; Trott, Verity (8 November 2021). "Critical simulation as hybrid digital method for exploring the data operations and vernacular cultures of visual social media platforms". SocArXiv. doi:10.31235/osf.io/2cwsu. S2CID 243837581.
- ^ The Markup (2022). "The Citizen Browser Project—Auditing the Algorithms of Disinformation". The Markup. Retrieved 18 May 2022.
- ^ Pretus, Clara; Gil-Buitrago, Helena; Cisma, Irene; Hendricks, Rosamunde C.; Lizarazo-Villarreal, Daniela (16 July 2024). "Scaling crowdsourcing interventions to combat partisan misinformation". Advances.in/Psychology. 2: e85592. doi:10.56296/aip00018. ISSN 2976-937X.
- ^ Allen, Jennifer; Arechar, Antonio A.; Pennycook, Gordon; Rand, David G. (3 September 2021). "Scaling up fact-checking using the wisdom of crowds". Science Advances. 7 (36) eabf4393. Bibcode:2021SciA....7.4393A. doi:10.1126/sciadv.abf4393. ISSN 2375-2548. PMC 8442902. PMID 34516925.
- ^ Smith, Graham; Richards, Robert C.; Gastil, John (12 May 2015). "The Potential of Participedia as a Crowdsourcing Tool for Comparative Analysis of Democratic Innovations" (PDF). Policy & Internet. 7 (2): 243–262. doi:10.1002/poi3.93.
- ^ Moon, M. Jae (2018). "Evolution of co-production in the information age: crowdsourcing as a model of web-based co-production in Korea". Policy and Society. 37 (3): 294–309. doi:10.1080/14494035.2017.1376475. S2CID 158440300.
- ^ Taeihagh, Araz (8 November 2017). "Crowdsourcing: a new tool for policy-making?". Policy Sciences. 50 (4): 629–647. arXiv:1802.03113. doi:10.1007/s11077-017-9303-3. S2CID 27696037.
- ^ Diamond, Larry; Whittington, Zak (2009). "Social Media". In Welzel, Christian; Haerpfer, Christian W.; Bernhagen, Patrick; Inglehart, Ronald F. (eds.). Democratization (2 ed.). Oxford: Oxford University Press (published 2018). p. 256. ISBN 978-0-19-873228-0. Retrieved 4 March 2021.
Another way that social media can contribute to democratization is by 'crowdsourcing' information. This elicits the knowledge and wisdom of the 'crowd' [...].
- ^ Aitamurto, Tanja (2012). Crowdsourcing for Democracy: New Era In Policy–Making. Committee for the Future, Parliament of Finland. pp. 10–30. ISBN 978-951-53-3459-6.
- ^ Prpić, J.; Taeihagh, A.; Melton, J. (2014). "Crowdsourcing the Policy Cycle. Collective Intelligence 2014, MIT Center for Collective Intelligence" (PDF). Humancomputation.com. Archived from the original (PDF) on 24 June 2015. Retrieved 2 July 2015.
- ^ Prpić, J.; Taeihagh, A.; Melton, J. (2014). "A Framework for Policy Crowdsourcing. Oxford Internet Institute, University of Oxford – IPP 2014 – Crowdsourcing for Politics and Policy" (PDF). Ipp.oxii.ox.ac.uk. Retrieved 2 October 2018.
- ^ Prpić, J.; Taeihagh, A.; Melton, J. (2014). "Experiments on Crowdsourcing Policy Assessment. Oxford Internet Institute, University of Oxford – IPP 2014 – Crowdsourcing for Politics and Policy" (PDF). Ipp.oii.ox.ac.uk. Archived from the original (PDF) on 24 June 2015. Retrieved 2 July 2015.
- ^ Thapa, B.; Niehaves, B.; Seidel, C.; Plattfaut, R. (2015). "Citizen involvement in public sector innovation: Government and citizen perspectives". Information Polity. 20 (1): 3–17. doi:10.3233/IP-150351.
- ^ Aitamurto and Landemore (4 February 2015). "Five design principles for crowdsourced policymaking: Assessing the case of crowdsourced off-road traffic law reform in Finland". Journal of Social Media for Organizations (1): 1–19.
- ^ a b c Aitamurto, Tanja; Landemore, Hélène; Saldivar Galli, Jorge (2016). "Unmasking the Crowd: Participants' Motivation Factors, Profile and Expectations for Participation in Crowdsourced Policymaking". Information, Communication & Society. 20 (8): 1239–1260. doi:10.1080/1369118x.2016.1228993. S2CID 151989757.
- ^ Aitamurto, Tanja; Chen, Kaiping; Cherif, Ahmed; Galli, Jorge Saldivar; Santana, Luis (2016). "Civic CrowdAnalytics: Making sense of crowdsourced civic input with big data tools". Proceedings of the 20th International Academic Mindtrek Conference. pp. 86–94. doi:10.1145/2994310.2994366. ISBN 978-1-4503-4367-1. S2CID 16855773 – via ACM Digital Archive.
- ^ a b c Aitamurto, Tanja (31 January 2015). Crowdsourcing for Democracy: New Era in Policymaking. Committee for the Future, Parliament of Finland. ISBN 978-951-53-3459-6.
- ^ "Home". challenge.gov.
- ^ Nussbaum, Stan. (2003). Proverbial perspectives on pluralism. Connections: the journal of the WEA Missions Committee October, pp. 30, 31.
- ^ "Oromo dictionary project". OromoDictionary.com. Retrieved 3 February 2014.
- ^ Albright, Eric; Hatton, John (2007). Chapter 10. WeSay, a Tool for Engaging Native Speakers in Dictionary Building. Natl Foreign Lg Resource Ctr. hdl:10125/1368. ISBN 978-0-8248-3309-1.
- ^ "Developing ASL vocabulary for science and math". Washington.edu. 7 December 2012. Retrieved 3 February 2014.
- ^ Keuleers; et al. (February 2015). "Word knowledge in the crowd: Measuring vocabulary size and word prevalence in a massive online experiment". Quarterly Journal of Experimental Psychology. 68 (8): 1665–1692. doi:10.1080/17470218.2015.1022560. PMID 25715025. S2CID 4894686.
- ^ a b Bill, Jeremiah; Gong, He; Hamilton, Brooke; Hawthorn, Henry; et al. "The extension of (positive) anymore". Google Docs. Retrieved 27 September 2020.
- ^ "Pashto Proverb Collection project". AfghanProverbs.com. Archived from the original on 4 February 2014. Retrieved 3 February 2014.
- ^ "Comparing methods of collecting proverbs" (PDF). gial.edu. Archived from the original (PDF) on 17 December 2014. Retrieved 17 December 2014.
- ^ Edward Zellem. 2014. Mataluna: 151 Afghan Pashto Proverbs. Tampa, Florida: Culture Direct.
- ^ Zhai, Haijun; Lingren, Todd; Deleger, Louise; Li, Qi; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre (2013). "Web 2.0-based crowdsourcing for high-quality gold standard development in clinical Natural Language Processing". Journal of Medical Internet Research. 15 (4): e73. doi:10.2196/jmir.2426. PMC 3636329. PMID 23548263.
- ^ Martin, Fred; Resnick, Mitchel (1993), "Lego/Logo and Electronic Bricks: Creating a Scienceland for Children", Advanced Educational Technologies for Mathematics and Science, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 61–89, doi:10.1007/978-3-662-02938-1_2, ISBN 978-3-642-08152-1
- ^ Nishikawa, Hidehiko; Schreier, Martin; Fuchs, Christoph; Ogawa, Susumu (August 2017). "The Value of Marketing Crowdsourced New Products as Such: Evidence from Two Randomized Field Experiments". Journal of Marketing Research. 54 (4): 525–539. doi:10.1509/jmr.15.0244. ISSN 0022-2437.
- ^ Reinhold, Stephan; Dolnicar, Sara (December 2017), "How Airbnb Creates Value", Peer-to-Peer Accommodation Networks, Goodfellow Publishers, doi:10.23912/9781911396512-3602, ISBN 978-1-911396-51-2
- ^ "Prime Panels by CloudResearch | Online Research Panel Recruitment". CloudResearch. Retrieved 12 January 2023.
- ^ Nunan, Daniel; Birks, David F.; Malhotra, Naresh K. (2020). Marketing research: applied insight (6th ed.). Harlow, United Kingdom: Pearson. ISBN 978-1-292-30872-2. OCLC 1128061550.
- ^ Parker, Christopher J.; May, Andrew; Mitchell, Val (November 2013). "The role of VGI and PGI in supporting outdoor activities". Applied Ergonomics. 44 (6): 886–894. doi:10.1016/j.apergo.2012.04.013. PMID 22795180. S2CID 12918341.
- ^ Parker, Christopher J.; May, Andrew; Mitchell, Val (15 May 2014). "User-centred design of neogeography: the impact of volunteered geographic information on users' perceptions of online map 'mashups'". Ergonomics. 57 (7): 987–997. doi:10.1080/00140139.2014.909950. PMID 24827070. S2CID 13458260.
- ^ Brown, Michael; Sharples, Sarah; Harding, Jenny; Parker, Christopher J. (November 2013). "Usability of Geographic Information: Current challenges and future directions" (PDF). Applied Ergonomics. 44 (6): 855–865. doi:10.1016/j.apergo.2012.10.013. PMID 23177775. S2CID 26412254. Archived from the original (PDF) on 19 July 2018. Retrieved 20 August 2019.
- ^ Parker, Christopher J.; May, Andrew; Mitchell, Val (August 2012). "Understanding Design with VGI using an Information Relevance Framework". Transactions in GIS. 16 (4): 545–560. Bibcode:2012TrGIS..16..545P. doi:10.1111/j.1467-9671.2012.01302.x. S2CID 20100267.
- ^ Holley, Rose (March 2010). "Crowdsourcing: How and Why Should Libraries Do It?". D-Lib Magazine. 16 (3/4). doi:10.1045/march2010-holley. Retrieved 21 May 2021.
- ^ Trant, Jennifer (2009). Tagging, Folksonomy and Art Museums: Results of steve.museum's research (PDF). Archives & Museum Informatics. Archived from the original (PDF) on 10 February 2010. Retrieved 21 May 2021.
- ^ Andro, M. (2018). Digital libraries and crowdsourcing, Wiley / ISTE. ISBN 9781786301611.
- ^ Rahman, Mahbubur; Blackwell, Brenna; Banerjee, Nilanjan; Dharmendra, Saraswat (2015), "Smartphone-based hierarchical crowdsourcing for weed identification", Computers and Electronics in Agriculture, 113: 14–23, Bibcode:2015CEAgr.113...14R, doi:10.1016/j.compag.2014.12.012, retrieved 12 August 2015
- ^ "2015 Cheating Scandal". Bridge Winners. 2015. Retrieved 20 January 2024.
- ^ Tang, Weiming; Han, Larry; Best, John; Zhang, Ye; Mollan, Katie; Kim, Julie; Liu, Fengying; Hudgens, Michael; Bayus, Barry (1 June 2016). "Crowdsourcing HIV Test Promotion Videos: A Noninferiority Randomized Controlled Trial in China". Clinical Infectious Diseases. 62 (11): 1436–1442. doi:10.1093/cid/ciw171. PMC 4872295. PMID 27129465.
- ^ a b Zhang, Ye; Kim, Julie A.; Liu, Fengying; Tso, Lai Sze; Tang, Weiming; Wei, Chongyi; Bayus, Barry L.; Tucker, Joseph D. (November 2015). "Creative Contributory Contests to Spur Innovation in Sexual Health: 2 Cases and a Guide for Implementation". Sexually Transmitted Diseases. 42 (11): 625–628. doi:10.1097/OLQ.0000000000000349. PMC 4610177. PMID 26462186.
- ^ Créquit, Perrine (2018). "Mapping of Crowdsourcing in Health: Systematic Review". Journal of Medical Internet Research. 20 (5): e187. doi:10.2196/jmir.9330. PMC 5974463. PMID 29764795.
- ^ van der Krieke; et al. (2015). "HowNutsAreTheDutch (HoeGekIsNL): A crowdsourcing study of mental symptoms and strengths" (PDF). International Journal of Methods in Psychiatric Research. 25 (2): 123–144. doi:10.1002/mpr.1495. PMC 6877205. PMID 26395198. Archived from the original (PDF) on 2 August 2019. Retrieved 26 December 2018.
- ^ Prpić, J. (2015). Health Care Crowds: Collective Intelligence in Public Health. Collective Intelligence 2015. Center for the Study of Complex Systems, University of Michigan. Papers.ssrn.com. SSRN 2570593.
- ^ a b van der Krieke, L; Blaauw, FJ; Emerencia, AC; Schenk, HM; Slaets, JP; Bos, EH; de Jonge, P; Jeronimus, BF (2016). "Temporal Dynamics of Health and Well-Being: A Crowdsourcing Approach to Momentary Assessments and Automated Generation of Personalized Feedback (2016)" (PDF). Psychosomatic Medicine. 79 (2): 213–223. doi:10.1097/PSY.0000000000000378. PMID 27551988. S2CID 10955232.
- ^ Ess, Henk van (2010) "Crowdsourcing: how to find a crowd", ARD ZDF Akademie, Berlin, p. 99
- ^ a b c Doan, A.; Ramakrishnan, R.; Halevy, A. (2011), "Crowdsourcing Systems on the World Wide Web" (PDF), Communications of the ACM, 54 (4): 86–96, doi:10.1145/1924421.1924442, S2CID 207184672
- ^ Brabham, Daren C. (2013), Crowdsourcing, MIT Press, p. 45
- ^ Blohm, Ivo; Zogaj, Shkodran; Bretschneider, Ulrich; Leimeister, Jan Marco (2018). "How to Manage Crowdsourcing Platforms Effectively" (PDF). California Management Review. 60 (2): 122–149. doi:10.1177/0008125617738255. S2CID 73551209. Archived from the original (PDF) on 20 July 2018. Retrieved 24 August 2020.
- ^ Howe, Jeff (2008), Crowdsourcing: Why the Power of the Crowd is Driving the Future of Business (PDF), The International Achievement Institute, archived from the original (PDF) on 23 September 2015, retrieved 9 April 2012
- ^ Dahlander, Linus; Jeppesen, Lars Bo; Piezunka, Henning (1 January 2019), Sydow, Jörg; Berends, Hans (eds.), "How Organizations Manage Crowds: Define, Broadcast, Attract, and Select", Managing Inter-organizational Collaborations: Process Views, Research in the Sociology of Organizations, vol. 64, Emerald Publishing Limited, pp. 239–270, doi:10.1108/s0733-558x20190000064016, ISBN 978-1-78756-592-0, retrieved 19 April 2025
- ^ "Crowdvoting: How Elo Limits Disruption". thevisionlab.com. 25 May 2017.
- ^ Robson, John (24 February 2012). "IEM Demonstrates the Political Wisdom of Crowds". Canoe.ca. Archived from the original on 7 April 2012. Retrieved 31 March 2012.
- ^ "4 Great Examples of Crowdsourcing through Social Media". digitalagencymarketing.com. 2012. Archived from the original on 1 April 2012. Retrieved 29 March 2012.
- ^ Goldberg, Ken; Newsom, Gavin (12 June 2014). "Let's amplify California's collective intelligence". Citris-uc.org. Retrieved 14 June 2014.
- ^ Escoffier, N. and B. McKelvey (2014). "Using "Crowd-Wisdom Strategy" to Co-Create Market Value: Proof-of-Concept from the Movie Industry." in International Perspective on Business Innovation and Disruption in the Creative Industries: Film, Video, Photography, P. Wikstrom and R. DeFillippi, eds., UK: Edward Elgar Publishing Ltd, Chap. 11. ISBN 9781783475339
- ^ Block, A. B. (21 April 2010). "How boxoffice trading could flop". The Hollywood Reporter.
- ^ Chen, A. and Panaligan, R. (2013). "Quantifying movie magic with Google search." Google White Paper, Industry Perspectives+User Insights
- ^ Williams, Jack (17 February 2017). "An Indoor Football Team Has Its Fans Call the Plays". The New York Times. ISSN 0362-4331. Retrieved 7 February 2018.
- ^ Prive, Tanya. "What Is Crowdfunding And How Does It Benefit The Economy". Forbes.com. Retrieved 2 July 2015.
- ^ Choy, Katherine; Schlagwein, Daniel (2016), "Crowdsourcing for a better world: On the relation between IT affordances and donor motivations in charitable crowdfunding", Information Technology & People, 29 (1): 221–247, doi:10.1108/ITP-09-2014-0215, hdl:1959.4/unsworks_38196, S2CID 12352130
- ^ Barnett, Chance. "Crowdfunding Sites In 2014". Forbes.com. Retrieved 2 July 2015.
- ^ a b c Agrawal, Ajay; Catalini, Christian; Goldfarb, Avi (2014). "Some Simple Economics of Crowdfunding" (PDF). Innovation Policy and the Economy. 14. University of Chicago Press: 63–97. doi:10.1086/674021. hdl:1721.1/108043. ISSN 1531-3468. S2CID 16085029.
- ^ Leimeister, J.M.; Huber, M.; Bretschneider, U.; Krcmar, H. (2009), "Leveraging Crowdsourcing: Activation-Supporting Components for IT-Based Ideas Competition", Journal of Management Information Systems, 26 (1): 197–224, doi:10.2753/mis0742-1222260108, S2CID 17485373
- ^ Ebner, W.; Leimeister, J.; Krcmar, H. (September 2009). "Community Engineering for Innovations: The Ideas Competition as a Method to Nurture a Virtual Community for Innovations". R&D Management. 39 (4): 342–356. doi:10.1111/j.1467-9310.2009.00564.x. Retrieved 20 January 2024.
- ^ "DARPA Network Challenge". DARPA Network Challenge. Archived from the original on 11 August 2011. Retrieved 28 November 2011.
- ^ "Social media web snares 'criminals'". New Scientist. Retrieved 4 April 2012.
- ^ "Beyond XPrize: The 10 Best Crowdsourcing Tools and Technologies". 20 February 2012. Retrieved 30 March 2012.
- ^ Cunard, C. (19 July 2010). "The Movie Research Experience gets audiences involved in filmmaking." The Daily Bruin
- ^ MacArthur, Kate. "Squadhelp wants your company to crowdsource better names (and avoid Boaty McBoatface)". chicagotribune.com. Retrieved 28 August 2017.
- ^ "Compete To Create Your Dream Home". Co.Exist. FastCoexist.com. 4 June 2013. Retrieved 3 February 2014.
- ^ "Designers, clients forge ties on web". Boston Herald. 11 June 2012. Retrieved 3 February 2014.
- ^ Dolan, Shelagh, "Crowdsourced delivery explained: making same day shipping cheaper through local couriers.", Business Insider, archived from the original on 22 May 2018, retrieved 21 May 2018
- ^ Murison, Malek (19 April 2018), "LivingPackets uses IoT, crowdshipping to transform deliveries", Internet of Business, retrieved 19 April 2018
- ^ Biller, David; Sciaudone, Christina (19 June 2018), "Goldman Sachs, Soros Bet on the Uber of Brazilian Trucking", Bloomberg, retrieved 11 March 2019
- ^ Tyrsina, Radu, "Parcl Uses Trusted Forwarders to Bring you Products that don't Ship to your Country", Technology Personalised, archived from the original on 3 October 2015, retrieved 1 October 2015
- ^ Geiger D, Rosemann M, Fielt E. (2011) Crowdsourcing information systems: a systems theory perspective. Proceedings of the 22nd Australasian Conference on Information Systems.
- ^ Powell, D (2015). "A new tool for crowdsourcing". МИР (Модернизация. Инновации. Развитие). 6 (2-2 (22)). ISSN 2079-4665.
- ^ Yang, J.; Adamic, L.; Ackerman, M. (2008), "Crowdsourcing and knowledge sharing: Strategic user behavior on taskcn", Proceedings of the 9th ACM conference on Electronic commerce (PDF), pp. 246–255, doi:10.1145/1386790.1386829, ISBN 978-1-60558-169-9, S2CID 15553154, archived from the original (PDF) on 29 July 2020, retrieved 28 February 2012
- ^ "Mobile Crowdsourcing". Clickworker. Retrieved 10 December 2014.
- ^ Thebault-Spieker, Jacob; Terveen, Loren G.; Hecht, Brent (28 February 2015). "Avoiding the South Side and the Suburbs". Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. New York, NY, USA: ACM. pp. 265–275. doi:10.1145/2675133.2675278. ISBN 978-1-4503-2922-4.
- ^ Chatzimiloudis; Konstantinidis; Laoudias; Zeinalipour-Yazti. Crowdsourcing with smartphones (PDF).
- ^ Arkian, Hamid Reza; Diyanat, Abolfazl; Pourkhalili, Atefe (2017). "MIST: Fog-based data analytics scheme with cost-efficient resource provisioning for IoT crowdsensing applications". Journal of Network and Computer Applications. 82: 152–165. doi:10.1016/j.jnca.2017.01.012.
- ^ Felstiner, Alek (August 2011). "Working the Crowd: Employment and Labor Law in the Crowdsourcing Industry" (PDF). Berkeley Journal of Employment & Labor Law. 32: 150–151 – via WTF.
- ^ "View of Crowdsourcing: Libertarian Panacea or Regulatory Nightmare?". online-shc.com. Retrieved 26 May 2017.[permanent dead link]
- ^ WEI, F-F.; CHEN, W-N.; Guo, X-Q.; Zhao, B.; Jeon, S-W.; Zhang, J. (2024). "CrowdEC: Crowdsourcing-based Evolutionary Computation for Distributed Optimization". IEEE Transactions on Services Computing. 17 (6): 3286–3299. Bibcode:2024ITSCo..17.3286W. doi:10.1109/TSC.2024.3433487.
- ^ a b Ross, J.; Irani, L.; Silberman, M.S.; Zaldivar, A.; Tomlinson, B. (2010). "Who are the Crowdworkers? Shifting Demographics in Mechanical Turk" (PDF). Chi 2010. Archived from the original (PDF) on 1 April 2011. Retrieved 28 February 2012.
- ^ Huff, Connor; Tingley, Dustin (1 July 2015). ""Who are these people?" Evaluating the demographic characteristics and political preferences of MTurk survey respondents". Research & Politics. 2 (3): 205316801560464. doi:10.1177/2053168015604648. S2CID 7749084.
- ^ a b c d Moss, Aaron; Rosenzweig, Cheskie; Robinson, Jonathan; Jaffe, Shalom; Litman, Leib (2022). "Is it Ethical to Use Mechanical Turk for Behavioral Research? Relevant Data from a Representative Survey of MTurk Participants and Wages". psyarxiv.com. Retrieved 12 January 2023.
- ^ Levay, Kevin E.; Freese, Jeremy; Druckman, James N. (1 January 2016). "The Demographic and Political Composition of Mechanical Turk Samples". SAGE Open. 6 (1): 215824401663643. doi:10.1177/2158244016636433. S2CID 147299692.
- ^ Hirth, M.; Hoßfeld, T.; Tran-Gia, P. (2011), Human Cloud as Emerging Internet Application – Anatomy of the Microworkers Crowdsourcing Platform (PDF)
- ^ a b c Brabham, Daren C. (2008). "Moving the Crowd at iStockphoto: The Composition of the Crowd and Motivations for Participation in a Crowdsourcing Application". First Monday. 13 (6). doi:10.5210/fm.v13i6.2159.
- ^ a b c Lakhani; et al. (2007). The Value of Openness in Scientific Problem Solving (PDF). Retrieved 26 February 2012.
- ^ Brabham, Daren C. (2012). "Managing Unexpected Publics Online: The Challenge of Targeting Specific Groups with the Wide-Reaching Tool of the Internet". International Journal of Communication. 6: 20.
- ^ a b Brabham, Daren C. (2010). "Moving the Crowd at Threadless: Motivations for Participation in a Crowdsourcing Application". Information, Communication & Society. 13 (8): 1122–1145. doi:10.1080/13691181003624090. S2CID 143402410.
- ^ a b Brabham, Daren C. (2012). "The Myth of Amateur Crowds: A Critical Discourse Analysis of Crowdsourcing Coverage". Information, Communication & Society. 15 (3): 394–410. doi:10.1080/1369118X.2011.641991. S2CID 145675154.
- ^ Saxton, Gregory D.; Oh, Onook; Kishore, Rajiv (2013). "Rules of Crowdsourcing: Models, Issues, and Systems of Control". Information Systems Management. 30: 2–20. CiteSeerX 10.1.1.300.8026. doi:10.1080/10580530.2013.739883. S2CID 16811686.
- ^ a b Aitamurto, Tanja (2015). "Motivation Factors in Crowdsourced Journalism: Social Impact, Social Change, and Peer Learning". International Journal of Communication. 9: 3523–3543.
- ^ a b Kaufmann, N.; Schulze, T.; Viet, D. (2011). "More than fun and money. Worker Motivation in Crowdsourcing – A Study on Mechanical Turk" (PDF). Proceedings of the Seventeenth Americas Conference on Information Systems. Archived from the original (PDF) on 27 February 2012.
- ^ Brabham, Daren C. (2012). "Motivations for Participation in a Crowdsourcing Application to Improve Public Engagement in Transit Planning". Journal of Applied Communication Research. 40 (3): 307–328. doi:10.1080/00909882.2012.693940. S2CID 144807388.
- ^ Lietsala, Katri; Joutsen, Atte (2007). "Hang-a-rounds and True Believers: A Case Analysis of the Roles and Motivational Factors of the Star Wreck Fans". MindTrek 2007 Conference Proceedings.
- ^ a b Dahlander, Linus; Piezunka, Henning (1 June 2014). "Open to suggestions: How organizations elicit suggestions through proactive and reactive attention". Research Policy. Open Innovation: New Insights and Evidence. 43 (5): 812–827. doi:10.1016/j.respol.2013.06.006. ISSN 0048-7333.
- ^ "State of the World's Volunteerism Report 2011" (PDF). Unv.org. Archived from the original (PDF) on 2 December 2014. Retrieved 1 July 2015.
- ^ Chandler, D.; Kapelner, A. (2010). "Breaking Monotony with Meaning: Motivation in Crowdsourcing Markets" (PDF). Journal of Economic Behavior & Organization. 90: 123–133. arXiv:1210.0962. doi:10.1016/j.jebo.2013.03.003. S2CID 8563262.
- ^ Aparicio, M.; Costa, C.; Braga, A. (2012). "Proposing a system to support crowdsourcing". Proceedings of the Workshop on Open Source and Design of Communication (PDF). pp. 13–17. doi:10.1145/2316936.2316940. ISBN 978-1-4503-1525-8. S2CID 16494503.
- ^ Ipeirotis, Panagiotis G. (10 March 2010). Demographics of Mechanical Turk.
- ^ Ross, Joel; Irani, Lilly; Silberman, M. Six; Zaldivar, Andrew; Tomlinson, Bill (10 April 2010). "Who are the crowdworkers?". CHI '10 Extended Abstracts on Human Factors in Computing Systems. CHI EA '10. New York, USA: Association for Computing Machinery. pp. 2863–2872. doi:10.1145/1753846.1753873. ISBN 978-1-60558-930-5. S2CID 11386257.
- ^ Quinn, Alexander J.; Bederson, Benjamin B. (2011). "Human Computation: A Survey and Taxonomy of a Growing Field, CHI 2011 [Computer Human Interaction conference], May 7–12, 2011, Vancouver, BC, Canada" (PDF). Retrieved 30 June 2015.
- ^ a b Hauser, David J.; Moss, Aaron J.; Rosenzweig, Cheskie; Jaffe, Shalom N.; Robinson, Jonathan; Litman, Leib (3 November 2022). "Evaluating CloudResearch's Approved Group as a solution for problematic data quality on MTurk". Behavior Research Methods. 55 (8): 3953–3964. doi:10.3758/s13428-022-01999-x. PMC 10700412. PMID 36326997.
- ^ Prpić, J; Shukla, P.; Roth, Y.; Lemoine, J.F. (2015). "A Geography of Participation in IT-Mediated Crowds". Proceedings of the Hawaii International Conference on Systems Sciences 2015. SSRN 2494537.
- ^ Dahlander, Linus; Piezunka, Henning (9 December 2020). "Why crowdsourcing fails". Journal of Organization Design. 9 (1): 24. doi:10.1186/s41469-020-00088-7. hdl:10419/252174. ISSN 2245-408X.
- ^ "How Generative AI Can Augment Human Creativity". Harvard Business Review. 16 June 2023. ISSN 0017-8012. Retrieved 20 June 2023.
- ^ a b Borst, Irma. "The Case For and Against Crowdsourcing: Part 2". Archived from the original on 12 September 2015. Retrieved 9 February 2015.
- ^ Ipeirotis; Provost; Wang (2010). Quality Management on Amazon Mechanical Turk (PDF). Archived from the original (PDF) on 9 August 2012. Retrieved 28 February 2012.
- ^ Lukyanenko, Roman; Parsons, Jeffrey; Wiersma, Yolanda (2014). "The IQ of the Crowd: Understanding and Improving Information Quality in Structured User-Generated Content". Information Systems Research. 25 (4): 669–689. doi:10.1287/isre.2014.0537.
- ^ Hauser, David; Paolacci, Gabriele; Chandler, Jesse (15 April 2019), "Evidence and Solutions", Handbook of Research Methods in Consumer Psychology, doi:10.4324/9781351137713-17, ISBN 978-1-351-13771-3, S2CID 150882624, retrieved 12 January 2023
- ^ a b Moss, Aaron J; Rosenzweig, Cheskie; Jaffe, Shalom Noach; Gautam, Richa; Robinson, Jonathan; Litman, Leib (11 June 2021). Bots or inattentive humans? Identifying sources of low-quality data in online platforms. doi:10.31234/osf.io/wr8ds. S2CID 236288817.
- ^ Goerzen, Thomas; Kundisch, Dennis (11 August 2016). "Can the Crowd Substitute Experts in Evaluation of Creative Ideas? An Experimental Study Using Business Models". AMCIS 2016 Proceedings.
- ^ Burnap, Alex; Ren, Alex J.; Papazoglou, Giannis; Gerth, Richard; Gonzalez, Richard; Papalambros, Panos. When Crowdsourcing Fails: A Study of Expertise on Crowdsourced Design Evaluation (PDF). Archived from the original (PDF) on 29 October 2015. Retrieved 19 May 2015.
- ^ Kurve, Aditya; Miller, David J.; Kesidis, George (30 May 2014). "Multicategory Crowdsourcing Accounting for Variable Task Difficulty, Worker Skill, and Worker Intention". IEEE Kde (99).
- ^ Hirth; Hoßfeld; Tran-Gia (2011), Human Cloud as Emerging Internet Application – Anatomy of the Microworkers Crowdsourcing Platform (PDF)
- ^ Moss, Aaron (18 September 2018). "After the Bot Scare: Understanding What's Been Happening With Data Collection on MTurk and How to Stop It". CloudResearch. Retrieved 12 January 2023.
- ^ Ipeirotis, Panagiotis G. (2010). "Analyzing the Amazon Mechanical Turk Marketplace" (PDF). XRDS: Crossroads, the ACM Magazine for Students. 17 (2): 16–21. doi:10.1145/1869086.1869094. S2CID 6472586. SSRN 1688194. Retrieved 2 October 2018.
- ^ a b Hosaka, Tomoko A. (April 2008). "Facebook asks users to translate for free". NBC News.
- ^ Britt, Darice. "Crowdsourcing: The Debate Roars On". Archived from the original on 1 July 2014. Retrieved 4 December 2012.
- ^ Woods, Dan (28 September 2009). "The Myth of Crowdsourcing". Forbes. Retrieved 4 December 2012.
- ^ a b Aitamurto, Tanja; Leiponen, Aija. "The Promise of Idea Crowdsourcing: Benefits, Contexts, Limitations". Ideasproject.com. Retrieved 2 July 2015.
- ^ "International Translators Association Launched in Argentina". Latin American Herald Tribune. Archived from the original on 11 March 2021. Retrieved 23 November 2016.
- ^ Kleeman, Frank (2008). "Un(der)paid Innovators: The Commercial Utilization of Consumer Work through Crowdsourcing". Sti-studies.de. Retrieved 2 July 2015.
- ^ Jason (2011). "Crowdsourcing: A Million Heads is Better Than One". Crowdsourcing.org. Archived from the original on 3 July 2015. Retrieved 2 July 2015.
- ^ Dupree, Steven (2014). "Crowdfunding 101: Pros and Cons". Gsb.stanford.edu. Retrieved 2 July 2015.
- ^ "Fair Labor Standards Act Advisor". Archived from the original on 30 May 2000. Retrieved 28 February 2012.
- ^ Hara, Kotaro; Adams, Abigail; Milland, Kristy; Savage, Saiph; Callison-Burch, Chris; Bigham, Jeffrey P. (21 April 2018). "A Data-Driven Analysis of Workers' Earnings on Amazon Mechanical Turk". Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. New York, USA: ACM. pp. 1–14. arXiv:1712.05796. doi:10.1145/3173574.3174023. ISBN 978-1-4503-5620-6. S2CID 5040507.
- ^ Norcie, Greg (2011). "Ethical and practical considerations for compensation of crowdsourced research participants". CHI Workshop on Ethics Logs and VideoTape: Ethics in Large Scale Trials & User Generated Content. Retrieved 30 June 2015.
- ^ Busarovs, Aleksejs (2013). "Ethical Aspects of Crowdsourcing, or is it a Modern Form of Exploitation" (PDF). International Journal of Economics & Business Administration. 1 (1): 3–14. doi:10.35808/ijeba/1. Retrieved 26 November 2014.
- ^ Paolacci, G; Chandler, J; Ipeirotis, P.G. (2010). "Running experiments on Amazon Mechanical Turk". Judgment and Decision Making. 5 (5): 411–419. doi:10.1017/S1930297500002205. hdl:1765/31983. S2CID 14476283.
- ^ Graham, Mark; Hjorth, Isis; Lehdonvirta, Vili (1 May 2017). "Digital labour and development: impacts of global digital labour platforms and the gig economy on worker livelihoods". Transfer: European Review of Labour and Research. 23 (2): 135–162. doi:10.1177/1024258916687250. PMC 5518998. PMID 28781494.
- ^ The Crowdsourcing Scam (Dec. 2014), The Baffler, No. 26
- ^ Salehi; et al. (2015). We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers (PDF). Archived from the original (PDF) on 17 June 2015. Retrieved 16 June 2015.
- ^ Irani, Lilly C.; Silberman, M. Six (27 April 2013). "Turkopticon". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, USA: ACM. pp. 611–620. doi:10.1145/2470654.2470742. ISBN 978-1-4503-1899-0. S2CID 207203679.
- ^ Muldoon, J.; Graham, M.; Cant, C. (2024). Feeding the Machine: The Hidden Human Labour Powering AI. Canongate.
- ^ Shmueli, Boaz; Fell, Jan; Ray, Soumya; Ku, Lun-Wei (2021). "Beyond Fair Pay: Ethical Implications of NLP Crowdsourcing". Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. pp. 3758–3769. doi:10.18653/v1/2021.naacl-main.295. S2CID 233307331.
External links
Crowdsourcing at Wikibooks
Media related to Crowdsourcing at Wikimedia Commons
Crowdsourcing
Definition and Core Concepts
Formal Definition and Distinctions
Crowdsourcing is defined as the act of transferring a function traditionally performed by an employee or contractor to an undefined, generally large group of people via an open call, often leveraging internet platforms to aggregate contributions of ideas, labor, or resources.[11][7] The term was coined by journalist Jeff Howe in a 2006 Wired magazine article, combining "crowd" and "outsourcing" to describe a distributed problem-solving model that emerged with digital connectivity.[12] Core to the definition are four elements: an identifiable organization or sponsor issuing the call; a task amenable to distributed execution; an undefined pool of potential solvers drawn from the public; and a mechanism for aggregating and evaluating contributions, which may involve incentives like monetary rewards or recognition.[13]

Unlike traditional outsourcing, which contracts specific, predefined entities or firms for specialized work with negotiated terms, crowdsourcing solicits input from an anonymous, self-selecting multitude without prior selection, emphasizing scalability and diversity over the reliability of a fixed provider.[14][15] This distinction arises from causal differences in coordination: outsourcing relies on hierarchical contracts and accountability to a bounded group, whereas crowdsourcing exploits the statistical law of large numbers for emergent solutions, though it risks lower individual accountability and variable quality.[16]

Crowdsourcing further differs from open-source development, which typically involves voluntary, peer-driven collaboration on shared codebases by a self-organizing community of experts, often without a central sponsor directing specific tasks.[17] In crowdsourcing, the sponsor retains control over task definition and selection, potentially compensating participants selectively, whereas open source prioritizes communal ownership and iterative forking without monetary exchange as the primary motivator.[18] It also contrasts with user-generated content platforms, where contributions are unsolicited and platform-agnostic, as crowdsourcing structures participation around explicit, bounded problems to harness targeted collective output.[5] These boundaries highlight crowdsourcing's reliance on mediated openness for efficiency gains, grounded in empirical observations of platforms like Amazon Mechanical Turk, launched in 2005, which formalized micro-task distribution to global workers.[1]

Underlying Principles
Crowdsourcing operates on the principle that distributed groups of individuals, when properly structured, can generate superior solutions, predictions, or judgments compared to isolated experts or centralized authorities, a phenomenon rooted in the aggregation of diverse, independent inputs. This draws from the "wisdom of crowds" concept, empirically demonstrated in Francis Galton's 1906 observation at a county fair where 787 attendees guessed the dressed weight of an ox; the average estimate of 1,197 pounds deviated by roughly one pound, less than 0.1%, from the actual 1,198 pounds, illustrating how uncorrelated errors tend to cancel out in large samples.[19] The mechanism relies on statistical properties: individual biases or inaccuracies, if not systematically correlated, diminish through averaging, yielding a collective estimate with reduced variance akin to the law of large numbers applied to judgments.[20]

James Surowiecki formalized the conditions enabling this in his 2004 analysis, identifying four essential elements: diversity of opinion, which introduces varied perspectives to mitigate uniform blind spots; independence, preventing conformity or herding that amplifies errors; decentralization, allowing local knowledge to inform contributions without top-down distortion; and aggregation, via simple mechanisms like voting or averaging to synthesize inputs into coherent outputs. In crowdsourcing applications, platforms enforce these by issuing open calls to heterogeneous participants—often strangers with no prior coordination—to submit independent responses, then computationally aggregate them, as seen in prediction markets or idea contests where crowd forecasts have outperformed individual analysts by margins of 10-30% in domains like election outcomes or economic indicators.[21]

Causal realism underscores that success hinges on these conditions; violations, such as informational cascades where early opinions sway later ones, revert crowds to the quality of their most influential subset, as evidenced by experiments where deliberation without independence increases error rates by up to 20%.[20] Thus, effective crowdsourcing designs incorporate incentives for truthful revelation—monetary rewards calibrated to task complexity or reputational feedback—to sustain independence and participation, while filtering for diversity through broad recruitment rather than homogeneous networks. Empirical studies confirm that crowds under these principles solve complex problems, such as image labeling or optimization tasks, with accuracy rivaling specialized algorithms when scaled to thousands of contributors.[22]
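The error-cancellation argument can be made concrete with a small simulation (a minimal sketch, not drawn from any of the cited studies): each participant reports the true value plus an independent random error, and optionally a shared bias standing in for herding or informational cascades.

```python
import random

def crowd_average(true_value, n, indep_sd=150.0, shared_bias=0.0):
    """Mean of n guesses, each = truth + shared bias + independent noise."""
    guesses = [true_value + shared_bias + random.gauss(0, indep_sd) for _ in range(n)]
    return sum(guesses) / n

random.seed(7)
TRUE_WEIGHT = 1198  # pounds, the dressed weight in Galton's ox example

for n in (1, 10, 100, 1000):
    independent = crowd_average(TRUE_WEIGHT, n)
    correlated = crowd_average(TRUE_WEIGHT, n, shared_bias=80.0)
    print(f"n={n:4d}  error with independent noise: {abs(independent - TRUE_WEIGHT):6.1f} lb   "
          f"error with an 80 lb shared bias: {abs(correlated - TRUE_WEIGHT):6.1f} lb")
```

With independent errors the average converges on the truth roughly in proportion to 1/√n; the shared bias does not shrink no matter how large the crowd, which is the formal reason independence appears among Surowiecki's conditions.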
Historical Development
Pre-Modern Precursors
In ancient Greece, the agora functioned as a central public forum from the 6th century BC onward, where citizens gathered for announcements, debates, and the exchange of ideas on governance, trade, and community issues, enabling distributed input from a broad populace prior to formalized hierarchies dominating decision-making.[23] During China's Tang Dynasty (618–907 AD), joint-stock companies emerged as an early financing model, allowing multiple individuals to contribute capital to large-scale enterprises such as maritime expeditions or infrastructure projects, distributing risk and rewards across participants in a manner resembling proto-crowdfunding.[24] In 1567, King Philip II of Spain launched an open competition with a cash prize for the best design of a fortified city to counter Dutch revolts, soliciting architectural and defensive proposals from engineers and experts across his empire, which demonstrated the efficacy of monetary incentives in aggregating specialized knowledge from a dispersed group.[25] These instances relied on public dissemination of problems and rewards to motivate voluntary contributions; though limited by communication constraints and elite oversight, they prefigured crowdsourcing by leveraging collective capacities beyond centralized authority for practical solutions.[26]

19th-20th Century Examples
In the mid-19th century, the compilation of the Oxford English Dictionary represented a pioneering effort to crowdsource linguistic documentation. Initiated by the Philological Society in 1857, the project solicited volunteers worldwide to extract and submit quotation slips from books and other printed sources, illustrating historical word usage, meanings, and etymologies.[27] James Murray, appointed chief editor in 1879, systematized the influx of contributions, which ultimately exceeded five million slips from thousands of participants, including amateurs, scholars, and readers across social classes.[28] This distributed labor enabled the dictionary's incremental publication starting with fascicles in 1884, culminating in the complete 10-volume first edition in 1928, though delays arose from the volume of unverified submissions and editorial rigor.[29] Meteorological data collection in the 19th century also harnessed dispersed volunteer networks, prefiguring modern citizen science as a form of crowdsourcing for empirical observation. In the United States, the Smithsonian Institution under Secretary Joseph Henry coordinated a voluntary observer corps from the 1840s, with participants recording daily weather metrics like temperature, pressure, and precipitation at remote stations.[30] This expanded under the U.S. Army Signal Corps in 1870, which oversaw approximately 500 stations—many operated by unpaid civilians—yielding datasets for national weather maps and storm predictions until the Weather Bureau's formation in 1891.[30] Similar initiatives in Britain, supported by the Royal Society and local scientific societies, relied on amateur meteorologists to furnish observations, compensating for the limitations of centralized instrumentation and enabling broader spatial coverage for climate analysis.[31] Into the 20th century, prize competitions emerged as structured crowdsourcing for technological breakthroughs, exemplified by aviation incentives. The Orteig Prize, announced in 1919 by hotelier Raymond Orteig, offered $25,000 (equivalent to about $450,000 in 2023 dollars) for the first nonstop flight between New York City and Paris, attracting entrants who iterated on aircraft designs and navigation methods.[32] Charles Lindbergh claimed the award on May 21, 1927, after eight years of competition spurred advancements in monoplane construction and long-range fuel systems.[32] Concurrently, social research projects like Mass-Observation, founded in Britain in 1937 by anthropologists Tom Harrisson and Charles Madge alongside poet Humphrey Jennings, crowdsourced behavioral data through a panel of around 500 volunteer observers who maintained diaries and conducted unobtrusive public surveillance.[33] This yielded thousands of reports on everyday attitudes and habits until the organization's core activities waned in the early 1950s, providing raw material for sociological insights amid World War II rationing and morale studies.[34]Emergence in the Digital Age (2000s Onward)
The advent of widespread internet access and Web 2.0 technologies in the early 2000s facilitated the shift of crowdsourcing from niche applications to scalable digital platforms, enabling organizations to tap distributed networks for tasks ranging from content creation to problem-solving.[25] Early examples included Threadless, launched in 2000, which crowdsourced t-shirt designs by soliciting submissions from artists and using community votes to select designs for production and sale.[35] Similarly, iStockphoto, also founded in 2000, allowed amateur photographers to upload and sell stock images, disrupting traditional agencies by aggregating user-generated visual content.[35]

The term "crowdsourcing" was formally coined in June 2006 by journalist Jeff Howe in a Wired magazine article, defining it as the act of outsourcing tasks once performed by specialized employees to a large, undefined crowd over the internet, often for lower costs and innovative outcomes.[2] This conceptualization built on prior platforms like InnoCentive, established in 2001 as a spin-off from Eli Lilly, which posted scientific and technical challenges to a global network of solvers, awarding prizes for solutions to R&D problems that internal teams could not resolve.[35] Wikipedia, launched in January 2001, exemplified collaborative knowledge production by permitting anonymous volunteers to edit articles, amassing incremental contributions from millions of users into a repository exceeding 6 million English-language entries.[36]

Amazon Mechanical Turk (MTurk), publicly beta-launched on November 2, 2005, marked a pivotal development in microtask crowdsourcing, providing a marketplace for "human intelligence tasks" (HITs) such as image labeling, transcription, and surveys, completed by remote workers for micropayments, which enabled automation of processes requiring human judgment at lower cost than full-time hires.[37] By the late 2000s, these mechanisms expanded into crowdfunding, with Kickstarter's founding in 2009 introducing reward-based funding models where creators pitched projects to backers, who pledged small amounts in exchange for prototypes or perks, channeling over $8 billion in commitments to hundreds of thousands of initiatives by the 2020s.[38] Such platforms demonstrated crowdsourcing's efficiency in leveraging voluntary or incentivized participation, though they also highlighted challenges like quality control and worker exploitation in low-pay tasks.[39]

Theoretical Foundations
Economic Incentives and Participant Motivations
Economic incentives in crowdsourcing encompass monetary payments designed to elicit contributions from distributed participants, addressing challenges such as low coordination and free-riding inherent in decentralized systems. Microtask platforms like Amazon Mechanical Turk employ piece-rate compensation, where workers receive payments ranging from $0.01 to $0.10 per human intelligence task (HIT), yielding median hourly earnings of $3.01 for U.S.-based workers and $1.41 for those in India, based on analyses of platform data.[40] [41] These rates reflect requester-set pricing, which prioritizes cost efficiency but often results in effective wages below minimum standards in high-income countries.[41] In prize contests, such as those hosted on InnoCentive, incentives take the form of fixed bounties awarded to top solutions, with typical prizes averaging $20,000 and select challenges offering up to $100,000 or more for breakthroughs in areas like desalination or resilience technologies. [42] Such economic mechanisms primarily influence participation volume rather than output quality, as empirical experiments demonstrate that higher bonuses increase task completion rates but yield negligible improvements in accuracy or effort.[43] For instance, field studies on crowdsourcing platforms show that financial rewards mitigate dropout in low-skill tasks but fail to sustain high-effort contributions without complementary designs like performance thresholds or lotteries.[44] Non-monetary economic variants, including reputational credits convertible to future opportunities or self-selected rewards like vouchers, have been tested to enhance engagement; one multi-study analysis found ideators prefer flexible non-cash options when available, potentially boosting solution diversity over pure cash payouts.[45] Participant motivations in crowdsourcing extend beyond economics to include intrinsic drivers like task enjoyment, skill acquisition, and social recognition, alongside extrinsic factors such as altruism and community belonging. A meta-analysis of quantitative studies across platforms reveals that intrinsic motivations, particularly enjoyment, exhibit stronger correlations with sustained participation (effect sizes around 0.30-0.40) than purely financial incentives in voluntary or contest-based settings.[46] Gender and experience moderate these effects; for example, novices may prioritize monetary gains, while experts in ideation contests respond more to recognition and challenge complexity.[47] Empirical surveys of users on online platforms classify motivations into reward-oriented (e.g., cash or status) and requirement-oriented (e.g., problem-solving autonomy) categories, with the former dominating microtasks and the latter prevailing in open innovation where participants self-select high-value problems.[48] [44] Hybrid motivations often yield optimal outcomes, as pure economic incentives risk attracting low-quality contributors or encouraging strategic withholding, while intrinsic appeals foster long-term ecosystems. 
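As a concrete illustration of how piece rates translate into hourly earnings, the arithmetic below uses hypothetical task prices and timings (they are not the figures from the studies cited above); unpaid time spent finding and reading tasks is what typically pushes effective wages below the nominal rate.

```python
pay_per_task = 0.08              # dollars per HIT (hypothetical)
work_minutes_per_task = 1.2      # time spent on the task itself (hypothetical)
overhead_minutes_per_task = 0.4  # unpaid searching, reading instructions, rework

nominal_hourly = pay_per_task / work_minutes_per_task * 60
effective_hourly = pay_per_task / (work_minutes_per_task + overhead_minutes_per_task) * 60
print(f"nominal:   ${nominal_hourly:.2f}/hour")    # $4.00/hour
print(f"effective: ${effective_hourly:.2f}/hour")  # $3.00/hour
```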
Studies on contest platforms indicate that combining prizes with public acknowledgment increases solver diversity and solution appropriateness, though over-reliance on money can crowd out voluntary contributions in domains like citizen science.[49] Systematic reviews of motivational theories applied to crowdsourcing highlight the long-tail distribution of engagement, where a minority of highly motivated participants (driven by passion or reputation) generate disproportionate value, underscoring the limits of uniform economic incentives.[50]
Mechanisms of Collective Intelligence
Collective intelligence in crowdsourcing emerges when mechanisms systematically harness diverse individual inputs to produce judgments or solutions that surpass those of solitary experts or centralized decision-making. These mechanisms rely on foundational conditions outlined by James Surowiecki, including diversity of opinion—where participants bring varied perspectives to counteract uniform biases—independence of judgments to prevent informational cascades, decentralization to incorporate localized knowledge, and effective aggregation to synthesize inputs into coherent outputs. Failure in any condition, such as excessive interdependence, can lead to groupthink and diminished accuracy, as observed in scenarios where social influence overrides private information.[51]

Empirical evidence underscores these principles' efficacy under proper implementation. In Francis Galton's 1907 analysis of a livestock fair contest, 787 participants guessed the dressed weight of an ox; the crowd's mean estimate of 1,197 pounds deviated by just 1 pound from the true 1,198 pounds, illustrating how averaging independent estimates aggregates probabilistic accuracy despite individual errors.[52] Similarly, in controlled simulations of crowdsourcing as collective problem-solving, intelligence manifests through balanced collaboration: small groups (around 5 members) excel in easy tasks via high collectivism, while larger assemblies (near 50 participants) optimize for complex problems by mitigating free-riding through fitness-based selection, yielding higher overall capacity than purely individualistic or overly collective approaches.[53]

Aggregation techniques form the operational core, transforming raw contributions into reliable intelligence. For quantitative estimates, simple averaging or median calculations suffice when independence holds, as in prediction tasks; for categorical judgments, majority voting or probabilistic models like Dawid–Skene (which infer true labels from worker reliability estimates) enhance precision in noisy data environments.[54] In decentralized platforms, mechanisms such as iterative synthesis allow parallel idea generation followed by sequential refinement, fostering emergent quality; evaluative voting then filters outputs, as seen in architectural crowdsourcing where network-based systems reduced design deviation from optimal artifacts (e.g., collective distance metric dropping from 0.514 to 0.283 over 10 iterations with 6 contributors).[55] Prediction markets extend this by aggregating via incentive-aligned trading, where share prices reflect crowd consensus probabilities, often outperforming polls in forecasting events like elections.[56]

These mechanisms' success hinges on causal factors like participant incentives and task structure, with empirical studies showing that hybrid approaches—combining discussive elements (e.g., Q&A for clarification) with synthetic iteration—outperform solo efforts in creative domains, provided diversity is maintained to avoid convergence on suboptimal local optima.[55] In practice, platforms mitigate biases through anonymity or randomized ordering to preserve independence, though real-world deviations, such as homogeneous participant pools, can undermine outcomes, emphasizing the need for deliberate design over naive scaling.[53]
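The categorical-aggregation step can be sketched as follows (a simplified illustration in the spirit of, but not a full implementation of, the Dawid–Skene model; the items, workers, and labels are invented): take an unweighted majority vote as a provisional consensus, estimate each worker's reliability as agreement with that consensus, then re-vote with reliability weights.

```python
from collections import Counter, defaultdict

# labels[item] = list of (worker, label) pairs collected redundantly for that item
labels = {
    "img1": [("w1", "cat"), ("w2", "cat"), ("w3", "dog")],
    "img2": [("w1", "dog"), ("w2", "dog"), ("w3", "dog")],
    "img3": [("w1", "cat"), ("w2", "dog"), ("w3", "dog")],
}

def majority_vote(votes):
    """Most frequent label among (worker, label) pairs."""
    return Counter(label for _, label in votes).most_common(1)[0][0]

# Step 1: provisional consensus by unweighted majority vote.
consensus = {item: majority_vote(votes) for item, votes in labels.items()}

# Step 2: estimate each worker's reliability as agreement with the consensus.
agree, total = defaultdict(int), defaultdict(int)
for item, votes in labels.items():
    for worker, label in votes:
        total[worker] += 1
        agree[worker] += (label == consensus[item])
reliability = {w: agree[w] / total[w] for w in total}

# Step 3: re-vote, weighting each worker's label by estimated reliability.
weighted = {}
for item, votes in labels.items():
    scores = defaultdict(float)
    for worker, label in votes:
        scores[label] += reliability[worker]
    weighted[item] = max(scores, key=scores.get)

print(reliability)
print(weighted)
```

Iterating the last two steps until the consensus stabilizes yields an EM-style refinement; the full Dawid–Skene model additionally estimates a per-worker confusion matrix rather than a single reliability number.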
Comparative Advantages Over Traditional Hierarchies
Crowdsourcing leverages the collective intelligence of diverse participants, often yielding superior outcomes compared to the centralized decision-making in traditional hierarchies, where information bottlenecks and cognitive biases limit effectiveness. James Surowiecki's framework in The Wisdom of Crowds posits that under conditions of diversity of opinion, independence, decentralization, and effective aggregation, group judgments outperform individual experts or hierarchical elites, as demonstrated in empirical examples like market predictions and estimation tasks where crowds achieved errors as low as 1-2% versus experts' higher variances.[57][58] This advantage stems from crowdsourcing's ability to draw from a broader knowledge base, mitigating the "status-knowledge disconnect" prevalent in hierarchies where deference to authority suppresses novel insights.[58]

In terms of speed, crowdsourcing enables parallel processing of problems by distributing tasks across a global pool, contrasting with the serial workflows of hierarchical organizations that constrain innovation to internal layers of approval. Studies indicate that crowdsourcing platforms facilitate rapid idea generation and iteration, with organizations reporting faster problem resolution—often in weeks rather than months—due to real-time contributions from thousands of participants.[59][60] For instance, in innovation contests, crowd-sourced solutions emerge 2-5 times quicker than internal R&D cycles in firms reliant on top-down directives.[61]

Cost advantages arise from outcome-based incentives, such as prizes or micro-payments, which avoid the overhead of maintaining salaried hierarchies; empirical analyses show crowdsourcing reduces expenses by 50-90% for tasks like data labeling or design challenges while scaling to volumes unattainable internally.[17] This model accesses specialized skills on-demand without long-term commitments, particularly beneficial for knowledge-based industries where traditional hiring lags behind dynamic needs.[60]

Furthermore, crowdsourcing fosters organizational learning across individual, group, and firm levels by integrating external feedback loops, enhancing adaptability in ways hierarchies struggle with due to insular information flows. Quantitative evidence from local governments and firms reveals positive correlations between crowd participation mechanisms—like voting and creation—and improved learning outcomes, with effect sizes indicating 20-30% gains in knowledge acquisition over siloed approaches.[62] These benefits, however, depend on robust aggregation to filter noise, underscoring crowdsourcing's edge in harnessing distributed cognition absent in rigid command structures.[62]

Types and Mechanisms
Explicit Crowdsourcing Methods
Explicit crowdsourcing methods involve the intentional solicitation of contributions from a distributed group of participants who are aware of their role in addressing defined tasks or challenges, typically through structured platforms that facilitate task assignment, evaluation, and aggregation. These approaches contrast with implicit methods by requiring active, deliberate engagement, often motivated by financial incentives, prizes, recognition, or voluntary interest. Common implementations include microtask marketplaces, prize contests, and volunteer-based collaborations, enabling organizations to leverage collective effort for scalable outcomes in data processing, innovation, and research.[63]

Microtasking platforms represent a core explicit method, breaking complex work into discrete, low-skill units such as image annotation, transcription, or sentiment analysis, distributed to workers via online marketplaces. Amazon Mechanical Turk, launched on November 2, 2005, pioneered this model by providing requesters access to a global pool of participants for human intelligence tasks (HITs), with payments typically ranging from cents to dollars per task. By enabling rapid completion of repetitive yet judgment-requiring activities, MTurk has supported applications in machine learning data labeling and market research, though worker compensation averages below minimum wage in many cases due to competitive bidding.[37][64][65]

Prize contests form another explicit mechanism, where problem owners post challenges with monetary rewards for optimal solutions, attracting specialized solvers from diverse fields. InnoCentive, developed from Eli Lilly's internal R&D outsourcing experiments in the early 2000s and publicly operational since 2007, exemplifies this by hosting open calls for technical innovations, with awards often exceeding $100,000. The platform has facilitated over 2,500 solved challenges across industries like pharmaceuticals and materials science, achieving an 80% success rate by drawing on a network of more than 400,000 solvers as of 2025. Such contests promote efficient resource allocation, as payment occurs only upon success, though they may favor incremental over radical breakthroughs due to predefined criteria.[66][67]

Volunteer collaborations constitute a non-monetary explicit variant, relying on intrinsic motivations like scientific curiosity or community building to elicit contributions for knowledge-intensive tasks. Galaxy Zoo, a citizen science project launched in July 2007, engages participants in classifying galaxy morphologies from Sloan Digital Sky Survey images, amassing more than 125 million classifications by 2017 and enabling discoveries such as unusual galaxy types leading to more than 60 peer-reviewed papers. This method harnesses domain-specific expertise from non-professionals, yielding high-volume outputs at low cost, but requires robust quality controls like consensus voting to mitigate errors from untrained contributors.[68][69]
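To make the microtask workflow above concrete, the sketch below posts a redundantly assigned labeling task through the boto3 Mechanical Turk client against the requester sandbox, so no real payments occur; the external task URL, reward, and assignment counts are placeholder values, and a real deployment would also need AWS credentials, qualification requirements, and code to retrieve submitted assignments.

```python
import boto3

# Connect to the Mechanical Turk *sandbox* endpoint so no real payments are made.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion pointing at a hypothetical HTTPS page that renders the task.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/label-image?img=42</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Label the main object in an image",
    Description="Choose the category that best describes the pictured object.",
    Keywords="image, labeling, categorization",
    Reward="0.05",                   # dollars per assignment, passed as a string
    MaxAssignments=5,                # redundancy: five workers label the same image
    LifetimeInSeconds=24 * 3600,     # how long the HIT stays visible to workers
    AssignmentDurationInSeconds=300, # time allotted to each worker
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```

Setting MaxAssignments greater than one is what provides the redundancy later exploited for quality control by majority voting or reliability-weighted aggregation.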
Implicit and Hybrid Approaches
Implicit crowdsourcing harnesses contributions from participants unaware of their role in data aggregation or problem-solving, relying on passive behaviors such as app interactions, sensor readings, or social media engagements rather than deliberate tasks.[70] This method extracts value from incidental user actions, like location traces from smartphones or implicit feedback in games, to build datasets or models without explicit recruitment or incentives.[71] Unlike explicit crowdsourcing, it minimizes participant burden but requires robust backend algorithms to infer and validate signals from noisy, unstructured inputs.[72]

Key mechanisms include behavioral observation and automated labeling; for instance, in Wi-Fi indoor localization, implicit crowdsourcing collects radio fingerprints from pedestrians' devices during normal movement, labeling them via contextual data like floor changes detected by sensors, achieving maps with 80-90% accuracy in tested environments as of 2021.[73] Another application identifies abusive content in social networks by monitoring natural user blocks or reports as implicit signals, with a 2020 framework reporting detection rates up to 85% by aggregating these without user prompts.[72] Similarly, rumor detection leverages sharing patterns and credibility cues from user interactions, as demonstrated in a 2020 IEEE study on Twitter data where implicit metrics outperformed some explicit labeling baselines.[74]

Hybrid crowdsourcing blends implicit and explicit techniques, or integrates human crowds with algorithmic processes, to balance scale, accuracy, and cost.[75] This approach often uses implicit data for broad coverage and explicit input for verification, or employs crowds to refine machine outputs iteratively. For example, in network visualization for biological data, the 2021 Flud system combines crowd-sourced layout adjustments with energy-minimizing algorithms, reducing optimization time by 40-60% over pure computational methods in experiments on protein interaction graphs.[75] In geophysics, hybrid methods merge crowdsourced seismic recordings from smartphones with professional sensors, as reviewed in a 2018 analysis showing improved earthquake detection resolution by integrating voluntary explicit submissions with implicit device vibrations, covering gaps in traditional networks.[76] For weather estimation, the Atmos framework of 2013 uses participatory sensing where explicit user reports hybridize with implicit mobile sensor streams, yielding precipitation estimates within 10-20% error margins in urban tests.[77] These hybrids mitigate limitations like implicit data sparsity through targeted explicit interventions, enhancing overall reliability in dynamic environments.[78]

Specialized Variants (e.g., Crowdfunding, Prize Contests)
Crowdfunding constitutes a financial variant of crowdsourcing, whereby project initiators appeal to a dispersed online audience for small monetary pledges to realize ventures ranging from creative endeavors to startups, often in exchange for rewards or equity.[79] This mechanism diverges from general crowdsourcing by prioritizing capital aggregation over contributions of ideas, skills, or content, with campaigns typically featuring fixed deadlines and all-or-nothing funding models to mitigate partial fulfillment risks.[80] The approach gained traction post-2008 financial crisis as an alternative to traditional venture capital, with platforms like Kickstarter—launched in April 2009—enabling over 650,000 projects and accumulating approximately $7 billion in pledges by 2023.[81] Globally, the crowdfunding sector expanded to $20.3 billion in transaction volume by 2023, driven by reward-based, equity, and debt models, though success rates hover around 40-50% due to factors like market saturation and unproven viability.[82]

Prize contests represent another specialized crowdsourcing modality, deploying fixed monetary incentives to solicit solutions from broad participant pools for complex challenges, thereby harnessing competitive dynamics to accelerate breakthroughs unattainable via conventional R&D.[83] Participants invest resources upfront without guaranteed remuneration, with awards disbursed solely to those meeting rigorous, verifiable milestones, which incentivizes high-risk innovation while minimizing sponsor costs until success.[84] The XPRIZE Foundation, founded in 1996 by Peter Diamandis, pioneered modern iterations, issuing over $250 million in prize purses across 30 competitions by 2024, including the $10 million Ansari XPRIZE claimed in 2004 by SpaceShipOne for suborbital flight and the $100 million Carbon Removal XPRIZE awarded on April 23, 2025, to teams demonstrating gigaton-scale CO2 extraction.[85][86] Complementary examples include NASA's Centennial Challenges, initiated in 2005, which have distributed over $50 million for advancements in robotics and propulsion, and historical precedents like the 1714 Longitude Prize yielding John Harrison's marine chronometer for navigational accuracy.[87]

These variants extend crowdsourcing's core by aligning participant efforts with tangible outputs—funds in crowdfunding or prototypes in prizes—yet both face scalability limits from participant fatigue and selection biases favoring viral appeal over substantive merit. Empirical analyses indicate prize contests yield 10-30 times the investment in spurred advancements compared to grants, though outcomes depend on clear criteria and diverse entrant pools.[88] Crowdfunding, meanwhile, democratizes access but amplifies risks of fraud or unfulfilled promises, with regulatory frameworks like the U.S. JOBS Act of 2012 enabling equity models while imposing disclosure mandates.[89]
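The all-or-nothing settlement rule mentioned above is easy to state precisely; the sketch below (with an invented campaign and invented backer names) collects pledges only if their sum reaches the goal by the deadline, and otherwise charges nothing.

```python
from datetime import date

def settle_campaign(goal, deadline, pledges, today=None):
    """All-or-nothing rule: charge pledges only if the goal is met by the deadline."""
    today = today or date.today()
    total = sum(amount for _, amount in pledges)
    if today < deadline:
        return {"status": "open", "raised": total}
    if total >= goal:
        return {"status": "funded", "raised": total, "charged": dict(pledges)}
    return {"status": "failed", "raised": total, "charged": {}}

# Hypothetical campaign: $50,000 goal, three backers, deadline already passed.
pledges = [("alice", 20_000), ("bob", 15_000), ("carol", 10_000)]
print(settle_campaign(50_000, date(2024, 6, 1), pledges, today=date(2024, 6, 2)))
# -> {'status': 'failed', 'raised': 45000, 'charged': {}}
```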
Applications and Case Studies
Business and Product Innovation
Crowdsourcing has been applied in business and product innovation to source ideas, designs, and solutions from distributed networks of participants, often reducing internal R&D costs and accelerating development cycles. Companies post challenges or solicit submissions on platforms, evaluating contributions based on community feedback, expert review, or market potential. Empirical studies indicate that such approaches can yield higher innovation success rates by tapping diverse external expertise, though outcomes depend on effective incentive structures and selection mechanisms.[61] Procter & Gamble's Connect + Develop program, initiated in 2000, exemplifies open innovation through crowdsourcing by partnering with external entities including individuals, startups, and research institutions to co-develop products. The initiative has resulted in over 1,000 active collaboration agreements, more than doubling P&G's innovation success rate while reducing R&D spending as a percentage of sales from 4.8% to lower levels through decreased internal invention reliance. This shift sourced approximately 35% of innovations externally by the mid-2000s, enabling breakthroughs in consumer goods like Swiffer and Febreze variants via crowdsourced problem-solving.[90][91] LEGO Ideas, launched in 2008, allows fans to submit and vote on product concepts, with designs reaching 10,000 supporters advancing to review by LEGO's development team for potential commercialization. This platform has produced sets like the NASA Apollo Saturn V and Central Perk from Friends, contributing to LEGO's revenue growth to $9.5 billion in 2022, a 17% increase partly attributed to crowdsourced hits that reduced development timelines by up to fourfold compared to traditional processes. By 2023, over 49 ideas had qualified for review in a four-month span, demonstrating scalable idea validation through user engagement.[92][93] Platforms like InnoCentive facilitate product innovation by hosting prize-based challenges for technical solutions, achieving an 80% success rate across over 2,500 solved problems since 2000 and generating 200,000 innovations. In business contexts, this has supported advancements in materials and processes, with 70% of solutions often originating from solvers outside the seeker's field, enhancing novelty and cost-efficiency. Threadless, operational since 2000, crowdsources apparel designs via community scoring, printing top-voted submissions and awarding creators $2,000 or more, which has sustained a marketplace model by minimizing inventory risks through demand-driven production.[67][94][95]Scientific and Technical Research
Crowdsourcing in scientific research primarily leverages distributed human intelligence for tasks such as pattern recognition, data annotation, and iterative problem-solving, where automated algorithms struggle with ambiguity or novelty. Platforms enable non-experts to contribute via gamified interfaces or simple classification tools, processing vast datasets that would otherwise overwhelm individual researchers or labs. This approach has yielded empirical successes in fields like astronomy and biochemistry, with verifiable outputs including peer-reviewed structures and classifications validated against professional benchmarks.[69][96] In structural biology, the Foldit platform, developed in 2008 by researchers at the University of Washington, crowdsources protein folding puzzles through a competitive gaming interface. Players manipulate three-dimensional protein models to minimize energy states, drawing on intuitive spatial reasoning. A landmark achievement occurred in 2011 when Foldit participants generated accurate models of a monomeric retroviral protease from the Mason-Pfizer monkey virus, enabling molecular replacement and crystal structure determination—a problem unsolved by computational methods despite over 10 years of effort. The resulting structure, resolved at 1.6 Å resolution, revealed a novel fold distinct from dimeric homologs, aiding insights into retroviral maturation.[96] This success stemmed from players devising new algorithmic strategies during gameplay, which were later formalized into software improvements. Extending this, a 2019 study involved 146 Foldit designs encoded as synthetic genes; 56 expressed soluble, monomeric proteins in E. coli, adopting 20 distinct folds—including one unprecedented in nature—with high-resolution validations matching player predictions (Cα-RMSD 0.9–1.7 Å). These outcomes underscore crowdsourcing's capacity for de novo design, where human creativity addresses local strain issues overlooked by physics-based simulations.[97] Astronomy has seen extensive application through citizen science, notably Galaxy Zoo, launched in 2007 to classify galaxies from the Sloan Digital Sky Survey. Over 150,000 volunteers delivered more than 50 million classifications in the first year alone, with subsequent iterations like Galaxy Zoo 2 adding 60 million in 14 months; these match expert reliability and have fueled over 650 peer-reviewed publications. Key discoveries include "green pea" galaxies—compact, high-redshift objects indicating rapid star formation—and barred structures in distant galaxies, challenging models of cosmic evolution and securing follow-up observations from telescopes like Hubble and Chandra. The broader Zooniverse platform, encompassing Galaxy Zoo, facilitated the 2018 detection of a five-planet exoplanet system via the Exoplanet Explorers project, where volunteers analyzed Kepler light curves to identify transit signals missed by initial algorithms.[69][98] Such efforts demonstrate scalability, with crowds processing petabytes of imaging data to reveal serendipitous patterns, though outputs require statistical debiasing to mitigate volunteer inconsistencies.[69] In technical research domains like distributed computing and data validation, crowdsourcing supports hybrid human-machine workflows, as in Zooniverse's Milky Way Project, where annotations of infrared bubbles advanced star-formation models. 
Empirical metrics show crowds achieving 80-90% agreement with experts on visual tasks, accelerating hypothesis testing by orders of magnitude compared to solo efforts. However, success hinges on task decomposition and incentive alignment, with gamification boosting retention but not guaranteeing domain-generalizable insights.[99] These applications highlight causal advantages in harnessing collective intuition for ill-posed problems, though integration with computational verification remains essential for rigor.[97]

Public Policy and Governance
Governments have increasingly adopted crowdsourcing to solicit public input on policy design, resource allocation, and problem-solving, aiming to leverage collective wisdom for more responsive governance. In the United States, Challenge.gov, launched in 2010 pursuant to the America COMPETES Reauthorization Act, serves as a federal platform where agencies post challenges with monetary prizes to crowdsource solutions for public sector issues, such as disaster response innovations and regulatory improvements; by 2023, it had facilitated over 1,500 challenges with total prizes exceeding $500 million. Similarly, Taiwan's vTaiwan platform, initiated in 2014, employs tools like Pol.is for online deliberation on policy matters, notably contributing to the 2016 Uber regulations through consensus-building among 20,000 participants, which informed legislative drafts and enhanced perceived democratic legitimacy.[100] Notable experiments include Iceland's 2011-2013 constitutional revision, where a 950-member National Forum crowdsourced core principles, followed by a 25-member Constitutional Council incorporating online public submissions from over 39,000 visitors to draft a new document; the proposal garnered 67% approval in a 2012 advisory referendum but failed parliamentary ratification in 2013 amid political opposition and procedural disputes, highlighting implementation barriers despite high engagement.[101][102] Participatory budgeting, blending crowdsourcing with direct democracy, originated in Porto Alegre, Brazil, in 1989 and has expanded digitally in cities like Chicago and Warsaw, where residents propose and vote on budget allocations via apps; evaluations show boosts in participation rates—e.g., Warsaw's 2016-2020 cycles drew over 100,000 votes annually—but uneven outcomes, with funds often favoring visible infrastructure over systemic equity due to self-selection biases among participants.[103][104] During the COVID-19 pandemic, public administrations in Europe and North America used crowdsourcing for targeted responses, such as Italy's 2020 call for mask distribution ideas and the UK's NHS volunteer mobilization platform, which recruited 750,000 participants in days; these efforts yielded practical innovations but revealed limitations in scaling unverified inputs amid crises.[105] Empirical analyses indicate crowdsourcing enhances organizational learning and policy novelty in government settings, with studies across disciplines finding positive correlations to citizen empowerment and legitimacy when platforms ensure moderation, though effectiveness diminishes without mechanisms for representativeness and elite buy-in.[106][62] Failures, like Iceland's, underscore causal risks: crowdsourced outputs often lack binding enforcement, vulnerable to veto by entrenched interests, and may amplify vocal minorities over broader consensus.[107]Other Domains (e.g., Journalism, Healthcare)
In journalism, crowdsourcing facilitates public involvement in data gathering, verification, and investigative processes, often supplementing traditional reporting with distributed expertise. During crises, such as the 2010 Haiti earthquake, journalists integrated crowdsourced social media reports to map events and disseminate verified information, with analyses showing that professional intermediaries enhanced the reliability of volunteer-submitted data by filtering and contextualizing inputs. [108] Early experiments like Off the Bus in 2008 demonstrated viability, where citizen contributors broke national stories for mainstream outlets, though success depended on editorial oversight to mitigate inaccuracies inherent in unvetted submissions. [109] More recent applications include crowdsourced fact-checking, which empirical studies indicate can scale verification efforts effectively when structured with clear protocols, outperforming individual assessments in detecting misinformation across diverse content. [110] In healthcare, crowdsourcing supports medical research by harnessing non-expert input for tasks like annotation, innovation challenges, and real-world data aggregation, shifting from insular expert models to open collaboration. Systematic reviews identify key applications in diagnosis—via crowds annotating images for algorithmic training—surveillance through self-reported symptoms, and drug discovery, where platforms solicit molecular designs from global participants, yielding solutions comparable to specialized labs in cases like protein folding puzzles solved via gamified interfaces. [111] [112] For instance, crowdsourcing has accelerated target identification in pharmacology, with one 2016 initiative at Mount Sinai involving public annotation of genomic datasets to uncover novel drug candidates, demonstrating feasibility despite challenges in data quality control. [113] Quantitative evidence from reviews confirms modest but positive health impacts, such as improved outbreak detection via apps aggregating patient data, though outcomes vary with participant incentives and validation mechanisms to counter biases like self-selection in reporting. [6]Empirical Benefits and Impacts
Economic Efficiency and Innovation Gains
Crowdsourcing improves economic efficiency by distributing tasks to a large, distributed workforce, often at lower marginal costs than maintaining specialized internal teams. Platforms facilitate access to global talent without fixed employment overheads, enabling transaction cost reductions through efficient matching and on-demand participation. Empirical analyses of crowdsourcing marketplaces highlight strengths in labor accessibility and cost-effectiveness, as tasks are completed via competitive bidding or fixed prizes rather than salaried positions.[114] In prize-based systems like InnoCentive, seekers post R&D challenges with bounties that typically yield solutions at fractions of internal development expenses. A 2009 Forrester Consulting study of InnoCentive's model found an average 74% return on investment, driven by accelerated problem-solving and avoidance of sunk costs in unsuccessful internal trials. Similarly, government applications have reported up to 182% ROI with payback periods under two months, alongside multimillion-dollar productivity gains over multi-year horizons.[116][117] Crowdsourcing drives innovation gains by harnessing heterogeneous knowledge inputs, surpassing the limitations of siloed expertise. Diverse participant pools generate novel solutions through parallel ideation, with reviews confirming enhanced accuracy, scalability, and boundary-transcending outcomes in research tasks. Organizational studies demonstrate positive causal links to learning at individual, group, and firm levels, fostering feed-forward innovation processes. In product domains, such as Threadless's design contests, community-sourced ideas reduce time-to-market by validating demand via votes before production, yielding higher hit rates than traditional forecasting.[17][62][118]Scalability and Diversity Advantages
Crowdsourcing enables the distribution of complex tasks across vast participant pools, facilitating scalability beyond the constraints of traditional teams or organizations. Platforms such as Amazon Mechanical Turk allow for rapid engagement of global workers at low costs, with micro-tasks often compensated at rates as low as $0.01, enabling real-time processing of large datasets that would otherwise require prohibitive resources.[17] For example, the Galaxy Zoo project mobilized volunteers to classify nearly 900,000 galaxies, achieving research-scale outputs unattainable by small expert groups and demonstrating how crowds can handle voluminous data in fields like astronomy.[17] This scalability supports expansion or contraction of efforts based on demand, as seen in data annotation for machine learning, where crowds meet surging needs for labeled datasets that outpace internal capacities.[119] The global reach of crowdsourcing inherently incorporates participant diversity in demographics, expertise, and viewpoints, yielding advantages in innovation and comprehensive problem-solving. Diverse teams outperform homogeneous ones in covering multifaceted skills and perspectives, with algorithmic approaches ensuring maximal diversity while fulfilling task requirements, as validated through scalable experimentation.[120] Exposure to diverse knowledge in crowdsourced challenges directly enhances solution innovativeness, evidenced by a regression coefficient of β = 1.19 (p < 0.01) across 3,200 posts from 486 participants in 21 contests, where communicative participation further amplifies serial knowledge integration leading to breakthrough ideas.[121] Similarly, cognitive diversity among crowd reviewers boosts identification of societal impacts from algorithms, with groups of five diverse evaluators averaging 8.7 impact topics versus about 3 from one, underscoring diminishing returns beyond optimal diversity thresholds.[122] These scalability and diversity dynamics combine to drive empirical gains in accuracy and discovery, as diverse crowds have achieved up to 97.7% correctness in collective judgments with large contributor volumes, transcending geographic and institutional boundaries for applications like medical diagnostics.[17] In governmental settings, such approaches foster multi-level learning—individual, group, and organizational—through varied inputs, with structural equation modeling confirming positive effects across crowdsourcing modes like wisdom crowds and voting.[62]Verified Success Metrics and Examples
Verified Success Metrics and Examples
InnoCentive, a crowdsourcing platform for R&D challenges, has resolved over 2,500 problems with an 80% success rate, delivering more than 200,000 innovations and distributing $60 million in awards to solvers as of June 2025.[123] A Forrester Consulting study commissioned by InnoCentive in 2009 found that its challenge-driven approach yielded a 74% return on investment for participating organizations by accelerating research at lower cost than internal efforts.[116] The Rockefeller Foundation, for instance, posted 10 challenges between 2006 and 2009 and obtained solutions in 80% of cases through diverse solver contributions.[124]

In scientific applications, the Foldit online game has enabled non-expert participants to outperform computational algorithms in protein structure prediction and design. Top Foldit players solved challenging refinement problems requiring backbone rearrangements, reaching lower energy states than automated methods in benchmarks published in 2010.[125] By 2011, players had independently discovered symmetrization strategies and novel algorithms for tasks such as modeling the Mason-Pfizer monkey virus (M-PMV) retroviral protease, with successful player-derived recipes rapidly propagating across the community and dominating solutions.[126] A notable 2012 achievement involved the crowdsourced redesign of a microbial enzyme to degrade retroviral RNA, yielding a potential treatment avenue within weeks, far faster than expert-only approaches.[127]

Business-oriented crowdsourcing, such as Threadless's t-shirt design contests, demonstrates commercial viability through community voting that correlates with revenue. Analysis of Threadless data shows that crowd scores predict design sales, with high-voted submissions yielding skewed positive revenue distributions once produced.[128] At its peak, the platform selected about 150 designs annually for printing, sustaining operations by aligning user-generated content with market demand without a traditional design team.[129] Over the 13 years to 2013, Threadless distributed $7.12 million in prizes to contributors, reflecting scalable output from voluntary participation.[130]

| Platform | Key Metric | Achievement |
|---|---|---|
| InnoCentive | 80% challenge success rate | 2,500+ solutions, $60M awards (2025)[123] |
| Foldit | Superior algorithm discovery | Novel protein redesigns in weeks vs. years[126] |
| Threadless | Vote-revenue correlation | 150 annual designs, $7.12M payouts (to 2013)[129][130] |
Challenges and Criticisms
Quality Control and Output Reliability
Crowdsourced outputs frequently suffer from inconsistencies arising from heterogeneous worker abilities, varying effort levels, and misaligned incentives, such as rushing tasks for monetary rewards, which leads to spam or superficial responses. In microtask platforms like Amazon Mechanical Turk, worker error rates can exceed 20-30% for classification tasks in unsupervised settings, as heterogeneous skills amplify variance in responses.[131] Open-ended tasks exacerbate this: subjective interpretations yield multiple valid answers but low inter-worker agreement, often below 70%, due to contextual dependencies and the lack of standardized evaluation.

Quality assurance mechanisms address these problems through worker screening via qualification tests or "gold standard" tasks with known answers, which can filter out up to 40% of unreliable participants at the screening stage. Redundancy assigns the same task to 3-10 workers and aggregates their responses via majority voting or more advanced models such as Dawid-Skene, which jointly estimate per-worker reliability and ground-truth probabilities; these methods have improved accuracy from a 60% baseline to over 85% in binary labeling experiments on platforms like Mechanical Turk. Reputation systems further refine assignments by weighting past performance, with empirical tests showing sustained reliability gains in repeated tasks, though they falter against adversarial spamming.[131][132]

Despite these safeguards, reliability remains task-dependent: closed-ended queries can rival or exceed single-expert accuracy in aggregate (e.g., crowds outperforming individuals in skin lesion diagnosis via ensemble judgments), but open-ended outputs lag, with surveys noting persistent aggregation challenges for creative or interpretive work due to irreducible disagreement. Hybrid approaches that add peer review and expert validation improve quality, as in the Visual Genome annotations where crowd-expert loops yielded dense, verifiable datasets, yet such validation costs two to five times more than pure crowd pipelines. Empirical meta-analyses confirm that while redundancy provides statistical robustness for verifiable tasks, unaddressed biases, such as demographic skews in worker pools, can propagate systematic errors, underscoring the need for domain-specific tuning rather than generic reliance on platform claims.[131][133]
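A minimal sketch of the redundancy-and-aggregation step described above, in the spirit of Dawid-Skene for binary labels (the worker names, vote counts, and fixed iteration count here are illustrative; production implementations handle multi-class labels, priors, and convergence checks):

```python
from collections import defaultdict

def aggregate(labels, iterations=10):
    """labels: iterable of (worker_id, item_id, label) with label in {0, 1}.
    Alternates between estimating item labels and per-worker reliability,
    starting from an unweighted majority vote."""
    items = defaultdict(list)               # item_id -> [(worker_id, label), ...]
    for worker, item, label in labels:
        items[item].append((worker, label))

    # Initial estimate: plain majority vote per item.
    estimate = {i: round(sum(l for _, l in votes) / len(votes))
                for i, votes in items.items()}
    reliability = {}

    for _ in range(iterations):
        # Score each worker by agreement with the current estimates.
        hits, totals = defaultdict(int), defaultdict(int)
        for i, votes in items.items():
            for w, l in votes:
                totals[w] += 1
                hits[w] += (l == estimate[i])
        reliability = {w: hits[w] / totals[w] for w in totals}

        # Re-estimate each item with reliability-weighted votes.
        for i, votes in items.items():
            w1 = sum(reliability[w] for w, l in votes if l == 1)
            w0 = sum(reliability[w] for w, l in votes if l == 0)
            estimate[i] = 1 if w1 >= w0 else 0

    return estimate, reliability

# Three workers label two items; "spammer" always answers 1.
votes = [("a", "x", 1), ("b", "x", 1), ("spammer", "x", 1),
         ("a", "y", 0), ("b", "y", 0), ("spammer", "y", 1)]
print(aggregate(votes))
# -> ({'x': 1, 'y': 0}, {'a': 1.0, 'b': 1.0, 'spammer': 0.5})
```

The design intent is visible in the toy output: the worker who disagrees with the emerging consensus is down-weighted, so redundant labels correct for unreliable contributors without any gold-standard answers.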
Participation and Incentive Failures
Crowdsourcing initiatives frequently encounter low participation, with empirical analyses indicating that 90% of organizations soliciting external ideas receive fewer than one submission per month.[134] This scarcity arises from inadequate crowd mobilization: organizations often fail to adapt traditional hierarchical sourcing models to the decentralized nature of crowds, neglecting sequential engagement stages such as task definition, submission, evaluation, and feedback. High dropout rates exacerbate the issue; on platforms like Amazon Mechanical Turk, dropout in research tasks ranges from 20% to 30% even with monetary incentives and remedial measures such as prewarnings or appeals to conscience, compared with lower rates in controlled lab settings.[135] These dropouts produce incomplete data and wasted resources, and partial compensation for non-completers risks further incentivizing withdrawal without yielding usable outputs.[135]

Incentive structures often misalign contributor motivations with organizational goals, fostering free-riding in which participants exert minimal effort in the expectation that low-quality inputs will be accepted amid high submission volumes. Winner-take-all prize models, common in innovation contests, skew participation toward high-risk strategies, rendering second-place efforts valueless and discouraging broad involvement. Lack of feedback compounds this: 88% of crowdsourcing organizations provide none to contributors, eroding trust and repeat engagement.[134] In open platforms, free-riders who respond to selective incentives can improve overall quality by countering overly optimistic peer ratings, but left unchecked they dilute collective outputs.[136]

Empirical cases illustrate these failures. Quirky, a crowdsourced product-development firm, raised $185 million but collapsed in 2015 amid insufficient sustained participation and the limited appeal of crowd-generated ideas. Similarly, BP's post-Deepwater Horizon solicitation drew 100,000 ideas in 2010 but produced no actionable solutions, a failure attributed to poor incentive alignment and the rejection of crowd-favored submissions, which provoked backlash and disengagement.[134] In complex-task crowdsourcing, such as technical problem-solving, actor-specific misalignments between contributors seeking recognition and platforms prioritizing volume lead to fragmented efforts and outright initiative failures.[8]
Ethical Concerns and Labor Dynamics
Crowdsourcing platforms, particularly those built on microtasks such as data labeling and content moderation, have raised ethical concerns over worker exploitation due to systematically low compensation that often falls below living wages in high-cost regions. A meta-analysis of crowdworking remuneration found that microtasks typically generate an hourly wage under $6, significantly lower than comparable freelance rates, exacerbating precarity for participants who rely on such income.[137] This disparity stems from global labor arbitrage: tasks are outsourced to workers in low-wage economies while platforms headquartered in wealthier nations capture a disproportionate share of the value without providing benefits such as health insurance or overtime pay.[138] Critics argue that this model undermines traditional labor regulation by classifying workers as independent contractors, evading responsibility for minimum-wage enforcement or workplace safety.[139]

Labor dynamics in these ecosystems reflect power imbalances: platforms exert unilateral control through algorithms that assign tasks, evaluate outputs, and reject submissions without appeal, fostering worker alienation and dependency. On Amazon Mechanical Turk, for instance, automated systems reduce human effort to piece-rate payments, and requesters can impose subjective quality standards that lead to unpaid revisions or bans, further lowering effective earnings.[140] Workers, often drawn from demographics including students, immigrants, and residents of developing countries, exhibit high platform dependence because of barriers to entry on alternatives and the lack of portable reputation systems, mirroring monopolistic structures that limit mobility.[141] Empirical studies describe how such dynamics perpetuate racialized and gendered exploitation, with tasks disproportionately assigned to underrepresented groups under opaque criteria, though platforms maintain that these practices enable scalability at low cost.[142]

Additional ethical issues include inadequate informed consent and privacy risks, as workers may unknowingly handle sensitive material, such as moderating violent content, without psychological support or clear disclosure of a task's implications. Peer-reviewed analyses call for codes of conduct addressing intellectual property rights, since contributors relinquish ownership of outputs for minimal reward, potentially enabling uncompensated capture of innovation by corporations.[143] While proponents view crowdsourcing as democratizing access to work, worker surveys indicate persistent failures in fair treatment, including the proliferation of scams mimicking legitimate tasks, which had eroded trust and income stability by 2024.[144] Reforms such as transparent payment algorithms and minimum pay floors have been proposed in the academic literature, but adoption remains limited, sustaining debate over whether crowdsourcing constitutes a modern exploitation framework or a viable supplemental income source.[145][146]
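The sub-$6 hourly figures above follow directly from piece rates, time per task, and the share of work that is rejected and therefore unpaid; the numbers in this sketch are hypothetical, chosen only to illustrate the arithmetic:

```python
def effective_hourly_wage(piece_rate: float, seconds_per_task: float,
                          acceptance_rate: float = 1.0) -> float:
    """Hourly earnings from piece-rate microtasks; rejected work is unpaid."""
    tasks_per_hour = 3600 / seconds_per_task
    return piece_rate * tasks_per_hour * acceptance_rate

# Hypothetical: $0.08 per label, 60 seconds each, 5% of submissions rejected.
print(f"${effective_hourly_wage(0.08, 60, 0.95):.2f}/hour")  # -> $4.56/hour
```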
Regulatory and Structural Limitations
Crowdsourcing platforms face significant regulatory hurdles stemming from the application of existing labor, intellectual property, and data privacy laws that were not designed for distributed, on-demand workforces. In the United States, workers on platforms like Amazon Mechanical Turk are classified as independent contractors under the Fair Labor Standards Act, exempting requesters from providing minimum wages, overtime, or benefits, though this classification has prompted misclassification lawsuits alleging violations of wage protections. In 2017, for instance, crowdsourcing provider CrowdFlower settled a class-action suit for $585,507 over claims that workers were improperly denied employee status and fair compensation. Similar disputes persist: platforms leverage contractor status to minimize liabilities, but courts increasingly scrutinize the control exerted through algorithms and task specifications, potentially reclassifying workers as employees in jurisdictions with gig-economy precedents.[147][148][149]

Intellectual property adds complexity, since crowdsourced contributions often involve creative or inventive outputs without clear ownership chains. Contributors typically agree to broad licenses granting platforms perpetual rights, but this exposes organizers to infringement risk if submissions unknowingly replicate third-party IP, and disputes arise over moral rights or attribution in jurisdictions such as the EU. Unlike traditional employment, where work-for-hire doctrines assign ownership to employers, crowdsourcing lacks standardized contracts, and terms that fail to specify joint authorship or waivers adequately may be invalidated.[150][151][152]

Data privacy laws impose further constraints, particularly for tasks that handle personal information. Platforms must comply with the EU's General Data Protection Regulation (GDPR), which mandates explicit consent, data minimization, and breach notification, complicating anonymous task routing and exposing non-compliant operators to fines of up to 4% of global revenue. California's Consumer Privacy Act (CCPA) requires opt-out rights for data sales, challenging platforms that aggregate worker profiles for quality scoring. Crowdsourcing's decentralized nature amplifies the risk of de-anonymization or unauthorized data sharing, and studies highlight persistent gaps in worker privacy protections despite these mandates.[153][154]

Structurally, crowdsourcing faces inherent limits in coordination and scalability for complex endeavors, since ad-hoc aggregation of participants lacks the hierarchical oversight of firms, fostering free-riding and suboptimal task division. Research indicates that predefined workflows enhance coordination but stifle adaptation to emergent issues, and coordination overhead grows as crowds scale beyond simple microtasks. Scalability also falters in quality assurance, where untrained workers yield inconsistent outputs; in data annotation, for example, error rates rise without domain expertise, limiting viability for high-stakes applications such as AI training. These constraints stem from the flat organization of crowds, which undermines incentive alignment and knowledge integration compared with bounded teams, and often results in project failures for non-routine problems.[134][155][156][157]
Recent Developments and Future Outlook
Technological Integrations (AI, Blockchain)
Artificial intelligence has been integrated into crowdsourcing platforms to automate task allocation, enhance quality control, and filter unreliable contributions, addressing limitations of human-only systems. AI algorithms can analyze worker performance history and task requirements to match participants more effectively, reducing errors and improving efficiency in data annotation projects.[158] In disaster management, AI-enhanced crowdsourcing systems process real-time user-submitted data for faster emergency response, as examined in a 2025 systematic review of frameworks that combine machine learning with crowd inputs for predictive analytics.[159] Crowdsourcing also serves as a data source for training AI models: platforms distribute microtasks to global workers for labeling datasets, enabling scalable development of robust machine learning systems through diverse human inputs.[160]

Blockchain technology brings decentralization and transparency to crowdsourcing, mitigating problems of intermediary trust and payment disputes through smart contracts that automate rewards once tasks are verified. Platforms such as LaborX use blockchain to manage freelance task completion with cryptocurrency payouts, eliminating centralized gatekeepers and enabling borderless participation.[161] Frameworks like TFCrowd, proposed in 2021 and built on blockchain, aim to ensure trustworthiness by using consensus mechanisms to validate contributions and prevent free-riding, with subsequent adaptations incorporating zero-knowledge proofs for privacy-preserving task execution.[162] The zkCrowd platform, a hybrid blockchain system, balances transaction privacy with auditability in distributed crowdsourcing, supporting human intelligence tasks where data integrity is paramount.[163]

Combined AI and blockchain integrations amplify these benefits by pairing intelligent automation with immutable ledgers; for example, AI can pre-process crowd data before blockchain verification, enhancing security in decentralized networks.[161] In the World Bank's Real-Time Prices platform, launched before 2025, AI aggregates crowdsourced food-price data across low- and middle-income countries, with blockchain proposed as a further safeguard for tamper-proof logging in economic monitoring.[164] Peer-reviewed schemes from 2023 onward promote fairness by penalizing false reporting through cryptographic incentives, though scalability remains constrained by the computational overhead of on-chain validation.[165]
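The reward-automation pattern that such smart contracts implement can be sketched as follows (a toy illustration written in Python rather than an on-chain language such as Solidity; the class, thresholds, and verification rule are assumptions for exposition, not any platform's actual contract):

```python
class TaskEscrow:
    """Toy escrow: a requester locks a reward, a worker submits a result,
    and the reward is released automatically once enough independent
    verifiers approve it -- no intermediary holds the funds."""

    def __init__(self, requester: str, reward: int, approvals_needed: int = 3):
        self.requester = requester
        self.reward = reward
        self.approvals_needed = approvals_needed
        self.submissions = {}     # worker -> submitted result
        self.approvals = {}       # worker -> set of approving verifier ids
        self.paid = set()

    def submit(self, worker: str, result: str) -> None:
        self.submissions[worker] = result
        self.approvals.setdefault(worker, set())

    def verify(self, verifier: str, worker: str, approved: bool) -> str:
        if approved and worker in self.submissions:
            self.approvals[worker].add(verifier)
        if (len(self.approvals.get(worker, ())) >= self.approvals_needed
                and worker not in self.paid):
            self.paid.add(worker)
            return f"release {self.reward} to {worker}"  # would be an on-chain transfer
        return "pending"

escrow = TaskEscrow(requester="acme", reward=50)
escrow.submit("worker1", "labeled_dataset_v1")
for verifier in ("v1", "v2", "v3"):
    print(escrow.verify(verifier, "worker1", approved=True))
# -> pending, pending, release 50 to worker1
```

The design choice mirrors the blockchain frameworks described above: funds are committed up front and released by a rule that any party can audit, rather than at a platform operator's discretion.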
Market Growth and Quantitative Trends (2023-2025)
The global crowdsourcing market grew strongly from 2023 to 2025, reaching an estimated USD 50.8 billion in 2024, fueled by expanded digital infrastructure, remote-collaboration tools, and corporate adoption for tasks ranging from data annotation to innovation challenges.[166] Forecasts indicate a compound annual growth rate (CAGR) exceeding 36% from 2025 onward, reflecting surging demand amid the shift toward flexible, on-demand labor models.[166]

In the crowdsourced-testing segment, which is critical for quality assurance of software and applications, the market reached USD 3.18 billion in 2024, with projections of USD 3.52 billion in 2025, a 10.7% year-over-year increase and an anticipated CAGR of 12.2% through 2030.[167] This expansion tracks the rising complexity of mobile and web deployments, where distributed testers provide device coverage unattainable by in-house teams.[167] Crowdfunding, a major crowdsourcing application for raising capital, grew from USD 19.86 billion in 2023 to USD 24.05 billion in 2024 and is projected to reach USD 28.44 billion in 2025, a CAGR of approximately 19% over the period.[168][169] These figures reflect investor enthusiasm for equity, reward, and donation-based models, particularly for startups and social causes, though estimates vary across reports depending on whether blockchain-integrated platforms are included.[169]

Crowdsourcing software and platforms, which enable task distribution and management, were valued at USD 8.3 billion in 2023, with segment-specific CAGRs of 12-15% driving incremental revenue through 2025 amid AI integrations for task automation.[170] Microtask crowdsourcing, focused on granular data processing, expanded from USD 283 million in 2021 toward a forecast USD 515 million by 2025, a 16.1% CAGR that highlights efficiency gains in AI training datasets.[171] Collectively, these trends indicate a market maturing beyond hype, with revenue growth tied to documented cost reductions of up to 40% in testing cycles and to global participant pools numbering in the millions annually.[167]
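The growth rates quoted above follow the standard compound annual growth rate formula, CAGR = (end / start)^(1 / years) - 1, and the headline figures can be checked directly:

```python
def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate between two values, as a fraction."""
    return (end / start) ** (1 / years) - 1

# Crowdfunding: USD 19.86B (2023) -> USD 28.44B (projected 2025)
print(f"{cagr(19.86, 28.44, 2):.1%}")  # ~19.7%, consistent with the ~19% cited
# Crowdsourced testing: USD 3.18B (2024) -> USD 3.52B (2025)
print(f"{cagr(3.18, 3.52, 1):.1%}")    # ~10.7% year over year
```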
Emerging Risks and Opportunities
One emerging risk in crowdsourcing is the amplification of misinformation through community-driven moderation systems, where crowd-sourced annotations or notes can inadvertently propagate unverified claims despite mechanisms such as upvoting or flagging. A 2024 study of X's Community Notes found that unhelpful notes, those judged low-quality by crowd consensus, exhibited higher readability and neutrality, potentially increasing their visibility and influence compared with more accurate but more complex helpful notes.[172] Similarly, platforms shifting to crowdsourced fact-checking, such as Meta's 2025 pivot toward community moderation, risk greater exposure to false content without professional oversight, since non-expert crowds may prioritize consensus over empirical verification.[173] This vulnerability reflects crowds' susceptibility to groupthink and echo chambers, particularly in high-stakes domains such as health or elections, where collaborative groups outperformed individuals in detection but still faltered against sophisticated disinformation.[174]

Privacy and data security are another escalating concern, especially in crowdsourced data annotation for AI training, where tasks involving sensitive information are distributed to anonymous workers, heightening breach risks. A 2024 analysis found that exposing critical datasets to broad worker pools without robust controls can lead to unauthorized access or leaks, particularly where task publication bypasses stringent vetting.[175] Compliance with regulations such as GDPR is difficult in these distributed workflows; real-time monitoring systems have been proposed as mitigations but were not yet widely adopted by mid-2025.[176] In cybersecurity, crowdsourced vulnerability hunting introduces hybrid threats, as malicious actors can exploit open calls to probe systems under the guise of ethical testing.[177]

Opportunities arise from hybrid integrations with AI and blockchain, which enable more scalable and verifiable crowdsourcing models. AI-augmented systems, projected to streamline workflows by 2030, allow crowds to handle complex tasks such as synthetic-media verification, where human oversight complements machine learning to filter deepfakes more effectively than automation alone.[178] Blockchain supports decentralized incentive structures and reduces fraud through transparent ledgers of contributions, as seen in emerging platforms that have combined it with crowdsourcing for secure data provenance in AI datasets since 2023.[161] In cyber defense, privacy-protected sharing of crowdsourced threat intelligence has gained traction, with 2025 frameworks emphasizing the Traffic Light Protocol to enable rapid collective responses to attacks without full disclosure.[179] These advances could extend crowdsourcing into national-security applications, leveraging diverse global inputs for real-time mitigation of hybrid threats.[177]
References
- https://www.investopedia.com/terms/c/crowdsourcing.asp