How Invalid and Mischievous Survey Responses Bias Estimates Between Groups
In this talk, Dr. Cimpian discusses how mischievous responders, and invalid responses more generally, can perpetuate narratives of heightened risk for LGBQ youth rather than narratives of resilience in the face of obstacles. The talk reviews several recent and ongoing studies that use pre-registration and replication to test how invalid data affect LGBQ-heterosexual disparities on a wide range of outcomes.
Online data collection continues to offer great benefits to researchers, but there is a pressing need for validated methods to ensure high data quality across platforms. In this session, Chris Berry and Jeremy Kees provide an overview of data quality on MTurk and professional panel samples, Joseph Goodman discusses the way online data collection is perceived among marketing researchers, and Efrain Ribeiro offers a deep dive into the effects of sampling automation on respondent fraud.
Chris Berry
Assistant Professor of Marketing, Colorado State University
Jeremy Kees
The Richard J. and Barbara Naclerio Endowed Chair in Business, Villanova University
This talk focuses on differences in results, data quality, and the underlying mechanisms impacting data quality across multiple online sources.
Joseph Goodman
Associate Professor of Marketing, The Ohio State University’s Fisher College of Business
MTurk and Online Panel Research: Contemporary Developments from Marketing Academia
This talk discusses the effects of COVID-19 on the extent to which academic researchers use online panels, and on the workers participating on certain online panels.
Efrain Ribeiro
Advisor to Zinklar, CASE member, and Independent Online Sampling Consultant
The Unintended Consequences of Research Automation
This talk examines the effects of sampling automation, such as the use of routers and APIs, on respondent fraud and overuse in the context of consumer research.
Aaron Moss gives an in-depth tutorial on CloudResearch’s new platform, Connect, walking through its features and how best to use it to collect high-quality data.
Aaron Moss
Senior Research Scientist, CloudResearch
This session features the recipients of CloudResearch’s 2022 grant. Each of the speakers was awarded $2,500 to pursue innovative studies online. Aslı Ceren Çınar is generating videos using AI to study sexism and racism in politics, Farnoush Reshadi is analyzing open-ended attention checks with LIWC, Gilad Feldman is running a large-scale open science replication project, and Yefim Roth is comparing online and in-lab participants’ data quality.
Aslı Ceren Çınar
PhD Candidate in Political Science, The London School of Economics and Political Science (LSE)
Aslı Ceren Çınar uses computer-generated videos to assess how voters’ perceptions of candidates depend on candidate age, gender, race, attractiveness, and vocal pitch, with the aim of combating sexism and racism in politics.
Farnoush Reshadi
Assistant Professor of Marketing, Worcester Polytechnic Institute
Farnoush Reshadi works on creating open-ended attention checks that can be scored automatically using LIWC’s sentiment analysis.
Gilad Feldman
Assistant Professor, University of Hong Kong
Gilad Feldman is heading an ongoing replication project with a large team of early-career open-science researchers, working to create a more replicable and impactful science.
Yefim Roth
Lecturer, The University of Haifa
Yefim Roth compares the attentiveness of CloudResearch participants to the attentiveness of participants in the lab, helping further elucidate the differences and similarities between the two samples.
In this tutorial, Aaron Moss explains how to use CloudResearch’s Prime Panels, discussing how academic and market researchers can take advantage of the platform to reach various audiences, from representative samples to niche groups.
Aaron Moss
Senior Research Scientist, CloudResearch
The 2021 CloudResearch grant recipients present updates on the projects they have been working on for the past year. Michiel Spape discusses time perception, Art Marsden presents a database of AI-generated realistic face stimuli, and Nick Byrd demonstrates the Socrates Platform for facilitating reflective cognition.
Michiel Spape
Adjunct Professor in Cognitive Neuroscience, The University of Helsinki
Depression and Sense of Time: On the Critical Relation Between Mood, Motion, and Time
Michiel recently discovered that time perception can be reliably altered by imagining movement. In this talk, he presents evidence that this effect is enhanced in depression, which may have important implications for diagnosis and treatment.
Art Marsden
Social Psychology PhD Student, Syracuse University
Using AI to Create a Database of Realistic Face Stimuli
Art uses StyleGAN 2 to generate images of realistic-looking faces for a free database for researchers, producing different versions of each face to manipulate perceived race/ethnicity.
Nick Byrd
Assistant Professor & Intelligence Community Fellow, Stevens Institute of Technology
Experiments In Reflective Equilibrium Using The Socrates Platform
Nick shares initial results from Socrates, an online platform he and his colleagues developed to automatically facilitate both individual essay-based reflection and interactive chat-based reflection. The chat-based approach yields three to four times more net improvement in reflective test performance than individual essay-based reflection.
This session focuses primarily on technological advances that can facilitate online research and lead to more innovation and creativity in research methods and designs. Hiromichi Hagihara uses webcams for eye-tracking, Matt Lease introduces a novel measure of annotator agreement, Susan Persky uses virtual reality to enhance experimental control and generalizability, and Carlos Ochoa digs into people’s willingness to participate in in-the-moment surveys triggered by geolocation data.
Hiromichi Hagihara
Research Fellow, International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo
A Video Dataset for the Exploration of Factors Affecting Webcam-Based Automated Gaze Coding
Reduced experimental control in online experiments allows factors such as lighting or distance from the webcam to interfere with measurement. Hiromichi presents a video dataset that systematically varies factors that may affect automated gaze coding and discusses its potential to improve data quality.
Matt Lease
Professor, School of Information, University of Texas at Austin
A Better Way to Measure Annotator Agreement
Ensuring the quality of human-annotated data is an important precursor to analyzing the phenomena being labeled and to training ML models. In this talk, Matt presents a novel measure of annotator agreement that is widely applicable across diverse annotation tasks.
Susan Persky
Director of the Immersive Simulation Program and Head of Health Communication and Behavior Unit, The National Human Genome Research Institute, NIH
Virtual Reality as a Research Site
The growth of virtual reality as a consumer technology underpins its mounting promise as a research environment. This presentation considers key strengths, challenges, and future opportunities associated with conducting research in immersive virtual reality settings.
Carlos Ochoa
Researcher, The Research and Expertise Centre for Survey Methodology, Pompeu Fabra University
Willingness to Participate in In-The-Moment Surveys Triggered by Geolocation Data
Among the research possibilities offered by smartphones, collecting geolocation data holds a prominent position. Carlos discusses the results of a conjoint experiment assessing the willingness to participate in in-the-moment surveys triggered by geolocation data.
Learn about Sentry, the gold standard for data quality protection in online surveys. Sentry uses CloudResearch’s patented technology, combining advanced behavioral assessment with technological solutions to identify and remove low-quality participants before they can enter a survey.
Cheskie Rosenzweig
Senior Research and Product Scientist, CloudResearch
These talks focus primarily on niche samples in online research. Leah Hamilton discusses her use of MTurk to reach public assistance recipients, Spencer Baker focuses on recruiting religious samples through social media, Rachel Hartman presents a method for verifying age online, and Michael Maniaci talks about offering personalized feedback as an incentive.
Leah Hamilton
Associate Professor of Social Work, Appalachian State University
Using Amazon MTurk to Reach Public Assistance Recipients
Leah and colleagues survey current or former recipients of major means-tested assistance programs on MTurk, exploring the effects of these programs on financial planning and goal setting. They then compare these respondents with existing findings from welfare-state research to assess the opportunities and challenges of using MTurk to reach such individuals.
Spencer Baker
Graduate Researcher, The University of Tennessee at Chattanooga
Grassroots Sampling of Niche Online Religious Communities
How can social science researchers gather representative samples from diverse religious communities? This talk explores strategies for engaging online religious groups through social media, through the lens of an ongoing project in the psychology of religion.
Rachel Hartman
Research Intern and Conference Organizer, CloudResearch
Do You Know the Wooly Bully? Testing Cultural Knowledge to Verify Participant Age
Online participants sometimes misrepresent themselves, threatening the validity of research. The CloudResearch team created a method for verifying participants’ age: a test of cultural knowledge. The approach also holds promise for verifying other aspects of participant identity in online studies.
Michael Maniaci
Associate Professor of Psychology, Florida Atlantic University
Offering Personalized Feedback as a Participation Incentive
This talk discusses practical considerations related to providing individualized feedback about personality or survey responses as a recruitment incentive. Most participants express interest in personalized feedback, although experimental evidence suggests that offering feedback may not noticeably improve data quality.
Elena Brandt
Lead of the Social Science Initiative, Toloka
Sampling Beyond WEIRD: How to Collect Research Data from Non-Western Populations
Since the seminal 2010 paper by Henrich et al. revealed a heavy bias in psychological research toward WEIRD populations, social scientists have come to acknowledge the limitations of sampling only Western participants. Many are willing to go beyond WEIRD; however, international sampling still tends to involve high costs and long turnaround times. In this workshop, our co-sponsor Toloka presents a new tool to quickly, affordably, and responsibly collect data from 80 countries worldwide, reaching far beyond the West and tapping populations in Africa, Asia, Latin America, and Eastern Europe.