By Shalom Jaffe, Rachel Hartman & Leib Litman, PhD
Writing academic papers is hard—you need to explain what motivated your study, what question you tried to answer, how you tested your ideas, what you found, and what it all means. Inevitably, you’ll send your paper out for review, wait months for the response, and receive comments that may or may not look kindly on the work you did.
Before you get to that point, however, you have to find a way to begin writing. Many people like to start by drafting the methods and results sections. Because these sections are relatively straightforward and provide a description of what was already done, it’s an easier way to notch a “win” before tackling the introduction and discussion.
But easier still isn’t easy! To help anyone starting a methods section, we’ve put together this blog highlighting the types of samples you can gather from CloudResearch and how you might accurately describe them within your paper. Hopefully, a quick read of this blog gets you ready to start writing.
You can access participants from three sources when using CloudResearch: Connect, Mechanical Turk (MTurk) and Prime Panels.
Connect is CloudResearch’s premier platform for online participant recruitment. Unlike our other products, participants on Connect are sourced directly by CloudResearch, without any third party, to ensure that only people who provide high-quality data participate in your tasks. To learn more about Connect, see our FAQ.
You can describe samples on Connect just as you would any other source of online participants. Make sure to include any settings you may have used to target your sample and information about the study’s length and payment. Here’s a fictional example:
This next example is from a paper by Moss et al. (2023), published in the Journal of Experimental Social Psychology:
For additional articles using Connect, see here, here, here, here, here, and here.
MTurk is a microtask platform that connects “requesters” (people who need tasks completed) with “workers” (people willing to do the tasks). Academics have been using MTurk for research since about 2010.
CloudResearch connects to MTurk through an API integration. With this integration, researchers use CloudResearch to set up and manage studies while still drawing participants directly from MTurk. The suite of tools CloudResearch has built on top of the Mechanical Turk platform for this purpose is called the MTurk Toolkit.
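If you are curious what this kind of API integration looks like under the hood, here is a minimal, purely illustrative sketch of posting a study (a "HIT") through Amazon's public MTurk API using the boto3 Python library. It is not CloudResearch's internal code, the survey URL is hypothetical, and the MTurk Toolkit handles these steps (plus its own sampling controls) for you through a web interface.

```python
# Illustrative only: posting a study ("HIT") via Amazon's public MTurk API with boto3.
# CloudResearch's MTurk Toolkit manages calls like these for you; the survey URL below is hypothetical.
import boto3

# The sandbox endpoint lets requesters test HITs without paying real workers.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion embeds an external survey (e.g., hosted on Qualtrics) in the MTurk task page.
external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/my-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

response = mturk.create_hit(
    Title="15-minute decision-making survey",
    Description="Answer questions about everyday decisions.",
    Keywords="survey, research, psychology",
    Reward="2.50",                       # payment per assignment, in USD
    MaxAssignments=100,                  # number of participants
    LifetimeInSeconds=3 * 24 * 60 * 60,  # how long the HIT stays available
    AssignmentDurationInSeconds=30 * 60, # time each worker has to finish
    Question=external_question,
)

print("HIT created:", response["HIT"]["HITId"])
```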
With CloudResearch’s MTurk Toolkit, there are two ways to sample from MTurk: your study may be open to all MTurk workers, or it may be limited to CloudResearch’s Approved Participants (described below).
How you describe your MTurk sample in your Methods section is important. As with any other sample, there are some details you should be sure to report and some you can safely omit. The examples below illustrate what an accurate sample description might look like.
This example is for a fictional study, but it contains all the key elements of a sample description: it reports participants’ demographic information, describes the sampling procedure and settings, and tells readers when the data were collected and how participants were compensated.
The second example is from a paper by Gratz et al. (2020) published in Suicide and Life‐Threatening Behavior.
Give some background about MTurk: Even though online participant recruitment is becoming the new norm, some reviewers are unfamiliar with the literature that documents the rise of online research. If you’re looking for sources to support your use of MTurk or approach to sampling, our book is a comprehensive guide to online sampling and data collection, with a focus on MTurk. You might also consult our papers describing CloudResearch’s MTurk Toolkit and how to sample naive people on MTurk.
For a review of common concerns and the evidence that does or doesn’t exist to support these concerns, see Hauser et al. (2019).
Mention if you used CloudResearch’s Approved Participants: If you use our Approved Participants to gather data, you should mention it, because this group of participants can only be accessed via CloudResearch. The Approved Participants group vastly improves data quality over open MTurk samples, consists largely of naïve participants, and has overall demographics similar to the broader MTurk population. See here for more details.
Don’t confuse CloudResearch with Mechanical Turk: CloudResearch and Amazon Mechanical Turk are independent platforms run by separate companies. As described above, CloudResearch’s MTurk Toolkit sits on top of MTurk and enables researchers to run flexible studies while drawing participants from MTurk.
Don’t confuse panels with MTurk: CloudResearch profiles participants on MTurk by gathering voluntary demographic data, and a group of people who meet specific demographic criteria is sometimes called “a panel.” A panel of MTurk participants, however, is not the same as sampling from Prime Panels (see below). Because the language can get confusing, it’s best to simply describe the specific demographic filters you used.
Prime Panels is a participant recruitment platform that aggregates several market research panels, all integrated with Sentry®, the highest quality control system in the industry. Prime Panels offers access to tens of millions of people worldwide and is especially useful for gathering samples that are more representative of the US population (similar to Qualtrics Panels, Dynata, or Lucid) or that are not available on microtask sites like MTurk. For example, with Prime Panels it is easy to gather data from participants matched to the US Census, participants within specific US regions, or participants in minority or hard-to-reach groups.
Here are two examples of how to accurately describe participants recruited from Prime Panels in a Methods section. The first example is, again, a fictional paper sampling adults across several age ranges.
The second example is from a paper by Kroshus et al. (2020) published in JAMA Pediatrics:
Give background on Prime Panels: If MTurk sounds like a novel way to collect data to some reviewers, market research panels will sound even newer. It is therefore important to provide some background about market research panels and how they are being used in academic research. Good sources for this information are the tenth chapter of our book and Chandler et al. (2019).
Don’t report the cost per participant as compensation: Compensation details are much more complicated for market research panels than for MTurk. You can find more information on how participants are compensated here. Reporting the cost per participant as the compensation participants receive is not accurate.
It’s always rewarding to see others find our resources useful. As a token of our appreciation, when you publish an article using CloudResearch samples and cite us, you’ll be eligible for a $10 lab credit toward your next study! To qualify for this offer:
When using our Connect platform:
Hartman, R., Moss, A. J., Jaffe, S. N., Rosenzweig, C., Litman, L., & Robinson, J. (2023). Introducing Connect by CloudResearch: Advancing Online Participant Recruitment in the Digital Age. https://doi.org/10.31234/osf.io/ksgyr. Retrieved [Date].
When using our MTurk Toolkit, you can cite any of our publications about our tools, but these two tend to be the most useful.
Litman, L., Robinson, J., & Abberbock, T. (2017). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2), 433-442. https://doi.org/10.3758/s13428-016-0727-z
Hauser, D. J., Moss, A. J., Rosenzweig, C., et al. (2022). Evaluating CloudResearch’s Approved Group as a solution for problematic data quality on MTurk. Behavior Research Methods. https://doi.org/10.3758/s13428-022-01999-x
When using Prime Panels:
Chandler, J., Rosenzweig, C., Moss, A. J., Robinson, J., & Litman, L. (2019). Online panels in social science research: Expanding sampling methods beyond Mechanical Turk. Behavior Research Methods, 51(5), 2022-2038. https://doi.org/10.3758/s13428-019-01273-7