There is nothing more crucial to the success of the insights industry than respondent engagement and satisfaction. In the face of abundant evidence linking unhappy respondents with poor data quality, the industry has not done enough to fix terrible participant experiences. Poor survey design and long, non-mobile-friendly surveys are only part of the problem; modern field techniques are also to blame. We now have data that allow us to spot and mitigate bad experiences in real time. It’s time we start doing it.

A very high bar

Consumers have more control over their experiences than ever, and this empowerment permeates every interaction. With virtually unlimited access to technology, content and information, consumers expect excellent experiences. Companies that fail to live up to these expectations quickly lose people’s attention.

Researchers discovered this years ago as online survey participation began to wane. We improved incentives and their fulfillment, but it became apparent to many that the problem was the research experience itself and the questionnaire in particular. Attention to the questionnaire rose as mobile device use proliferated. Yet it took ages for that attention to build into momentum, and few would argue even now that we have come far enough. We debate format as if it were religious doctrine (Should it be 15 minutes or 20 or 10? What about grids and open-ends?) and largely ignore the impact of modern field practices like router dumping on the respondent experience. Now, with many in the industry openly skeptical (or downright critical) of sample quality, the mandate for improvement is indisputable. Happily, thanks to developments in technology and data collection, we have the tools at our disposal to do just this.

Respondents tell us everything we need to know

We have the ability – in real time – to collect and analyze a vast number of data points that reveal how consumers feel about their research experience. Sampling platforms capture typical performance indicators like dropout and over-quota rates, as well as study parameters like incidence rate and length of interview. We have respondent demographics and profile data, and we know at what time of day people are answering. We can even capture their ratings of the experience. In short, we have all the data we need to assess whether an experience is good or bad, at both descriptive and predictive levels. Moreover, we can and should let the data drive our conclusions rather than impose simplistic rules. Put differently, if the many data points tell us that, say, a long survey is good or a short survey is bad, then this is what we should believe. But we should not stop at the “insight.” We should take action.
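To make this concrete, here is a minimal sketch of how such signals might roll up into a single experience score. Everything in it – the field names, the weights, the 15-minute pivot – is an illustrative assumption rather than a description of any platform’s actual model; a real system would fit these weights from historical field data.

```python
from dataclasses import dataclass

@dataclass
class FieldSignals:
    """Illustrative real-time indicators for one live study (names are assumptions)."""
    dropout_rate: float       # share of entrants abandoning mid-survey (0-1)
    overquota_rate: float     # share of entrants hitting a full quota (0-1)
    loi_minutes: float        # median length of interview
    respondent_rating: float  # mean post-survey rating on a 1-5 scale

def experience_score(s: FieldSignals) -> float:
    """Combine indicators into a 0-100 score; higher is better.

    The weights below are hypothetical hand-picked values for
    illustration only.
    """
    score = 100.0
    score -= 60 * s.dropout_rate            # dropouts are the strongest signal
    score -= 25 * s.overquota_rate          # wasted entries frustrate respondents
    score -= max(0.0, s.loi_minutes - 15)   # penalize each minute past 15
    score += 5 * (s.respondent_rating - 3)  # reward above-average ratings
    return max(0.0, min(100.0, score))

# Example: heavy dropout, frequent over-quotas, a 25-minute interview
print(experience_score(FieldSignals(0.45, 0.20, 25.0, 2.1)))  # 53.5
```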

Deciding to stop bad experiences

Equipped with these data, we can and should automate the research process to stop bad experiences in their tracks.

Sound drastic? The reasons in favor are overwhelming.

From a simple rhetorical standpoint, we need to do something! This is an issue that cuts to the core of our industry, one we have discussed – formally in conferences and informally with colleagues – for years. That we are still talking about it means we have not taken sufficiently bold steps to solve the problem; the causes haven’t changed. We cannot afford to forfeit our credibility by not acting, especially now that the tools are available.

From a practical standpoint, using both passive and active data to improve the respondent experience is entirely consistent with what every marketer and developer is seeking to do. Thus, by not doing it (or by imagining that our half-steps are sufficient), we put ourselves at a tactical disadvantage. Understanding the respondent journey is more than a noble pursuit: it is a highly prized skill set. Researchers need to acquire it. Or, perhaps most appropriately, researchers need to remember that this skill is an inherent part of the discipline. We should scrutinize our own work as critically and objectively as we scrutinize our clients’.

Finally, and perhaps most compellingly, our data suggest that the biggest causes of bad experiences are not actually issues related to research design. Rather, they are technical or operational problems. Bad redirects, programming issues, quotas that are full but not closed, poorly implemented automation and router dumping are the principal culprits. We regularly put people into places where we know a bad experience awaits.

And yes, there are research design issues as well. The case against bad questionnaires has been made so often that it has become cliché. It shouldn’t be; it should propel urgent action. We have mountains of evidence that bad designs yield bad data. Using the same data points mentioned above, we can identify quality problems very early in a project. The same automation can also trigger instant alerts to human teams when problems arise.
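Continuing the hypothetical sketch above, that automation might be as simple as a periodic check that pauses a study and alerts the field team when its score drops below a threshold. The pause_study and alert_team hooks, and the threshold itself, are stand-ins for whatever a given sampling platform actually exposes.

```python
# Hypothetical policy: pause any study whose experience score falls
# below a threshold and alert the field team. Reuses FieldSignals and
# experience_score from the earlier sketch.
SCORE_THRESHOLD = 40.0

def pause_study(study_id: str) -> None:
    # Stand-in for a real platform call that takes the study out of field.
    print(f"[action] pausing study {study_id} pending review")

def alert_team(study_id: str, score: float) -> None:
    # Stand-in for a real notification hook (email, Slack, ticket, etc.).
    print(f"[alert] study {study_id} scored {score:.1f}; check redirects, quotas, router")

def review_study(study_id: str, signals: FieldSignals) -> None:
    score = experience_score(signals)
    if score < SCORE_THRESHOLD:
        pause_study(study_id)        # stop the bad experience in its tracks
        alert_team(study_id, score)  # give humans time to fix the cause
```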

Let us also acknowledge that there are some very good surveys out there. If we are stopping the bad experiences, then we can just as easily promote the good ones. Studies that have low dropout rates and high ratings can go to the front of the line.
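Reusing the same hypothetical score, promoting good experiences can be as simple as ordering live studies so the best-scoring ones receive respondents first:

```python
# Hypothetical routing order: send the next respondent to the
# best-scoring live study rather than a first-come-first-served queue.
def route_order(studies: dict[str, FieldSignals]) -> list[str]:
    return sorted(studies, key=lambda sid: experience_score(studies[sid]), reverse=True)

live = {
    "A": FieldSignals(0.05, 0.02, 10.0, 4.5),  # clean, short, well rated
    "B": FieldSignals(0.45, 0.20, 25.0, 2.1),  # the problem study from above
}
print(route_order(live))  # ['A', 'B'] -- good experiences go to the front
```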

It is in everyone’s interest to do this

Now, you may say to yourself that stopping a study is not a legitimate course of action; a supplier is obliged to finish the client’s project. This is short-sighted.

Given that most problems are technical and operational in nature, the field is effectively compromised anyway. A supplier that spots these issues is actually providing a huge service by helping its clients solve problems while there is still time to act. The alternative is a worst-case scenario, in which clients don’t realize they had a problem until it’s too late to react and their data are already compromised.

For suppliers, there is no question that this type of approach, as part of a broader strategy of full automation and quality controls, changes the unit economics of participant recruitment. Put differently, it is possible to offer lower prices and faster turnarounds while IMPROVING respondent validity and engagement and maintaining the integrity of the study.

Conclusion

Respondents provide the vital data our clients need to make important business decisions. The indisputable consequence of a punishing field experience is bad data. It is time we started acting like everyone else competing for people’s valuable time: by letting respondents take control of their own experiences. It’s easier than it sounds: we have indicators that allow them to tell us – implicitly or explicitly – when their survey experience is bad, and we can effectively let them pull that survey out of field. We can and should give them some measure of control.

By not doing this, the industry will continue to pollute the pool of respondents. By doing this at sufficient scale – our data clearly show – automated solutions clean up our industry’s biggest issues, yielding greater engagement, better data and improved economics.

We all have a part to play in respondent engagement, sample companies included. By making a change that gives respondents control, we will see remarkably better field metrics and deliver higher-quality data.


Mathijs de Jong is co-founder and CEO of P2Sample and Points2Shop.com. His work is inspired by the need to fix antiquated industry panel practices with technology and data-driven solutions that create more engaged respondents and, ultimately, deliver higher-quality research results. As a developer, Mathijs drives sampling innovation from the inside out, from the creation of proprietary profiling and routing technologies to the development of P2S’s respondent and survey scoring systems, driving both high data quality and unparalleled respondent satisfaction. www.p2sample.com