One of the most valuable assets of any company is its brand reputation. Understanding and protecting the health of that brand, from awareness to loyalty to preference, both over time and in real time, is critical.
One of the most important ways companies evaluate brand health is through brand tracking surveys. In fact, most brands have years of data and intentionally make little or no change to a brand tracker’s structure in order to get the cleanest “apples to apples” comparison of brand health over the years. Keeping as much as possible the same keeps the focus on what is actually changing over time.
What does change over time is who is responding to the brand tracking survey. These surveys often draw respondents from sample providers, either directly or indirectly through a market research vendor. Although the common assumption is that an established, well-regarded sample supplier will deliver quality respondents, companies in need of legitimate survey results are often unpleasantly surprised, or worse, never realize that fraudulent respondents, inattentive survey takers, or bots are taking their survey and degrading data quality.
Because brand tracker data is used to gauge and report on the health of the brand, internal initiatives are either launched or shelved based on the findings.
For example, if the data revealed that 90% of Hispanic respondents could not recognize the brand, a new initiative might be created to relaunch products to this demographic. Depending on the size of the company, this could be a small or a multi-million-dollar undertaking. Likewise, when the data does not reveal actions to be taken, none are.
But what if that data is bogus or, at the very least, deeply flawed? When companies look to brand trackers for a true measure of brand health, certainty about the quality of responses is critical for business impact.
To understand the gravity of the situation, let’s take a real-life example. On June 12, 2020, the CDC released a report titled “Knowledge and Practices Regarding Safe Household Cleaning and Disinfection for COVID-19 Prevention – United States, May 2020.” The results were alarming, claiming that “39% of Americans engaged in at least one high-risk behavior during the previous month.” These behaviors included using bleach to wash food items (19%), applying household cleaning products to the skin (18%), and drinking or gargling diluted bleach or another disinfectant (4%).
When our team replicated this study and applied serious quality controls to respondent qualification and in-survey behavior assessment, the results were dramatically different. While 39% of respondents in the CDC study indicated they had engaged in at least one high-risk behavior, such as drinking bleach, once quality controls were applied in our replication study, that number was cut in half.
We’ve come to trust data from the CDC, and yet this episode teaches a valuable lesson: no online survey is exempt from the need for rigorous quality control to ensure the responses accurately reflect the opinions and experiences of the intended study participants.
Problematic participants are often either inattentive, responding randomly, or attentive but providing inaccurate responses. When asked questions in a yes/no format, some respondents “yea-say,” tending to answer in the affirmative no matter what is asked. When asked if they eat concrete for breakfast, live in a very low-populated town in Indiana, or are currently employed as a petroleum engineer, they just say “yes,” because from their survey-taking experience they’ve decided “yes” is usually the right answer.
Without data quality safeguards, brand tracker results are in danger of being infiltrated by random noise and consistent “yea-saying.” A significant number of “yea-sayers” taking your brand tracker survey will skew results away from true purchase consideration, brand recall, and actual buying behavior.
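To make this concrete, here is a minimal sketch of one common safeguard: seeding the survey with “trap” items that almost no genuine respondent would affirm, then flagging anyone who agrees with several of them. The column names, data, and threshold below are hypothetical and illustrative, not a production-grade quality-control system.

```python
import pandas as pd

# Hypothetical respondent-level data: each trap column is 1 for "yes", 0 for "no".
# Trap items describe things almost no one can truthfully affirm
# (e.g., eating concrete for breakfast, working as a petroleum engineer).
TRAP_COLS = ["trap_concrete", "trap_small_town", "trap_petroleum_engineer"]

def flag_yea_sayers(df: pd.DataFrame, max_traps: int = 1) -> pd.DataFrame:
    """Flag respondents who affirm more than `max_traps` implausible items."""
    df = df.copy()
    df["traps_affirmed"] = df[TRAP_COLS].sum(axis=1)
    df["yea_sayer"] = df["traps_affirmed"] > max_traps
    return df

responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "trap_concrete": [0, 1, 0, 1],
    "trap_small_town": [0, 1, 0, 0],
    "trap_petroleum_engineer": [0, 1, 1, 1],
})

flagged = flag_yea_sayers(responses)
clean = flagged[~flagged["yea_sayer"]]  # analyze only unflagged respondents
print(f"Removed {flagged['yea_sayer'].sum()} of {len(flagged)} respondents")
```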
Whether the data feeds a CDC study for public health or a brand tracker for a private corporation, the stakes are the same. CloudResearch’s Co-founder & Chief Research Officer, Leib Litman, states, “Bogus respondents create illusory effects, opportunity for illusory correlations and can fail to identify real effects. Decisions based on data with bogus respondents lead to false conclusions and costly mistakes. Quality responses are the baseline to all good research.”
There are three distinct ways that bad sample could be ruining your brand tracker:
The trouble with illusory effects can be understood simply as discovering things that are not true.
Let’s create a hypothetical example to understand these issues and consequences.
Let’s say approximately one-half of the U.S. population buys pickles at least once a year. The brand you are investigating, Spicy, is new to the market, and as of last year has only about 2% market share according to sales and secondary data. Still, 25% of respondents in your survey report they buy Spicy brand pickles monthly. It’s unlikely 25% of consumers have even heard of Spicy yet, let alone buy them.
The problem of illusory effects for a brand tracker of Spicy pickles is that poor data quality is reporting a situation that does not actually exist. Common sense alone tells the brand that 25% of a representative sample of the population cannot truly be buying its pickles. The tracker is therefore not giving an accurate read on the increase or decrease the brand is actually experiencing in the market, making the data irrelevant.
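A quick back-of-the-envelope calculation shows how little contamination it takes to produce a number like 25%. The sketch below assumes, purely for illustration, that yea-sayers answer “yes” to every question; every figure in it is hypothetical.

```python
# How yea-sayers inflate a reported purchase rate (all numbers illustrative).
true_rate = 0.02        # true share of monthly Spicy buyers (~2% market share)
yea_sayer_share = 0.24  # hypothetical fraction of the sample that yea-says

# Yea-sayers are assumed to answer "yes" regardless of their real behavior.
observed = (1 - yea_sayer_share) * true_rate + yea_sayer_share * 1.0
print(f"Observed 'monthly buyer' rate: {observed:.0%}")  # ~26%, not 2%
```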
The trouble with illusory correlations can be understood simply as finding correlations between things that are not actually correlated, at least not in the way the data presents them.
Let’s continue with our Spicy pickles brand tracker. The tracker includes a yes/no question about whether people have seen your ads on public transportation vehicles, where you have invested 10% of your ad budget. In reviewing your data, you see a strong correlation between people who say they’ve seen your ads on public transportation and people who report that they’ve bought your pickles.
The problem of illusory correlations for a brand tracker is that these results may sway your team to invest significantly more in ads on public transportation. However, the correlation may be illusory, arising only because respondents were saying “yes” to many of the questions you asked when, in fact, they don’t even take public transportation. This misinformation can waste ad dollars and skew future survey results.
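This mechanism is easy to demonstrate with simulated data. In the hypothetical sketch below, seeing the ad and buying the pickles are statistically independent among genuine respondents; a correlation appears in the full sample only because a block of yea-sayers answers “yes” to both questions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_real, n_yea = 800, 200  # hypothetical: 800 genuine respondents, 200 yea-sayers

# Genuine respondents: "saw transit ad" and "bought Spicy" are independent.
saw_ad_real = rng.random(n_real) < 0.10
bought_real = rng.random(n_real) < 0.02

# Yea-sayers answer "yes" to both questions, regardless of reality.
saw_ad = np.concatenate([saw_ad_real, np.ones(n_yea, dtype=bool)])
bought = np.concatenate([bought_real, np.ones(n_yea, dtype=bool)])

print(f"Correlation, full sample:  {np.corrcoef(saw_ad, bought)[0, 1]:.2f}")
print(f"Correlation, genuine only: {np.corrcoef(saw_ad_real, bought_real)[0, 1]:.2f}")
```

Run as-is, the contaminated sample shows a strong positive correlation while the genuine respondents show essentially none.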
The trouble with missing real effects can be understood simply as not seeing things that are actually true of your brand.
One thing anyone would love to learn from a brand tracker is which identifiable market segment is exhibiting the most purchasing behavior. Our Spicy pickles brand tracker might be able to show that males between the ages of 18 and 30 are buying twice as many Spicy brand pickles as any other demographic. Unfortunately, with so much “noise” and yea-saying across the sample, this demographic doesn’t look much different from the others, and that finding never rises to the top of the data for analysis. Failing to exclude obviously distracted respondents adds unnecessary noise to the data.
For a brand tracker, even if respondents are not fraudulent or bots, the simple fact that they may be rushing through the survey or agreeing with every question keeps the data from providing clarity about true brand perceptions, purchasing logic, and actual buying habits. The company then can’t capitalize on the information by targeting this group of younger consumers in its next ad campaign. Losing the opportunity to identify what could advance the brand is squarely a brand tracker loss.
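The reverse problem, a real effect drowned in noise, can be simulated the same way. In this hypothetical sketch, young males really do buy at twice the rate of everyone else, but mixing random responders into the sample compresses the gap until the segments look similar.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical respondents per segment

# True behavior: young males buy at twice the rate of everyone else.
young_males = rng.random(n) < 0.20
everyone_else = rng.random(n) < 0.10

def add_noise(group, noise_share, rng):
    """Replace a share of a group with random responders (coin-flip answers)."""
    noisy = group.copy()
    n_noise = int(len(group) * noise_share)
    noisy[:n_noise] = rng.random(n_noise) < 0.5
    return noisy

for noise in (0.0, 0.4):
    ym = add_noise(young_males, noise, rng)
    ee = add_noise(everyone_else, noise, rng)
    print(f"noise={noise:.0%}: young males {ym.mean():.0%} vs. others {ee.mean():.0%}")
```

With no noise the two-to-one gap is obvious; with 40% random responders the gap narrows sharply and the ratio collapses, making the real difference easy to dismiss.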
First, you need to make sure your sample provider has effective sampling methods in place to vet respondents. If a survey is constructed to talk to veterinarians, you need a guarantee that the only people entering the survey are, in fact, veterinarians. Second, you need to know that there are quality controls on the behavior of the survey taker.
Consumer insights professionals, brand managers and research project managers typically have urgent meetings, data visualization projects to complete and a team to manage. But a seasoned professional will take the extra step and ask to look at the survey’s raw data when things are just not adding up. When this spot-check uncovers incredibly vague or nonsensical answers to many of the open-ends, it’s time to look even deeper.
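That kind of spot-check can be partly automated. The sketch below uses made-up responses and a deliberately simple rule: flag open-ends that are very short or duplicated verbatim. Real screening goes much further, but even this often surfaces a problem.

```python
import re
from collections import Counter

# Hypothetical open-ended responses pulled from raw survey data.
open_ends = [
    "I like the crunch and the garlic flavor.",
    "good",
    "good",
    "asdf jk",
    "Nice product, very nice quality.",
]

def flag_low_quality(texts, min_words=3):
    """Flag open-ends that are too short or duplicated verbatim."""
    counts = Counter(t.strip().lower() for t in texts)
    flags = []
    for t in texts:
        words = re.findall(r"[A-Za-z]{2,}", t)
        flags.append(len(words) < min_words or counts[t.strip().lower()] > 1)
    return flags

for text, bad in zip(open_ends, flag_low_quality(open_ends)):
    print(f"{'FLAG' if bad else 'ok  '} {text!r}")
```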
This type of data debacle is an insights director’s nightmare, but it happens all too often, and panel problems seem to be growing.
Having the highest level of confidence in the data informing brand health measurements is key to business success. Whether locally, regionally or globally, brands rely on highly engaged respondents to inform their next business step. Savvy insights professionals know how to ensure their sample quality is not compromised by bad data, but instead delivers the true voice of the consumer for greater business impact.