Tracking Surveys: Leveraging Their Explanatory Power While Avoiding Potential Pitfalls
Theoretical Background and Research Questions/Hypothesis:
For many health communication initiatives, pre-post tracking surveys of the program’s target audience are a vital component of the evaluation framework. Yet many practitioners lack the opportunity, time, or expertise to step back and interrogate best practices in survey design or interpretive analysis, despite a wealth of “research on research” literature. This presentation offers observations and recommendations based on Ad Council researchers’ long experience with tracking surveys, focusing on factors such as instrument design, survey mode, sampling procedures, and data analysis and interpretation, whether using survey data alone or integrating it with other metrics.

The core objective of any reputable health communication initiative is behavior change, and how social marketers measure behavior change is a key consideration when designing an evaluation plan. Government-compiled statistics on public health trends—such as impaired-driving fatalities or diabetes rates—can be a valuable indicator, but it is often difficult to tie these large-scale trends to the effects of a particular social marketing intervention. Media analytics measure campaign exposure but reveal nothing about the behavioral effects of that exposure. Digital analytics ably measure the number and type of short-term digital engagements individuals have with campaigns, but they are poorly suited to measuring sustained “offline” behavioral impact. And none of these data sources measures intermediate effects, such as awareness or attitudes, particularly well. This is where tracking surveys come in, for better or for worse.
Many social marketers conduct a tracking survey among the target population: they establish benchmark measures prior to an initiative’s launch, return to the field at various points after launch to gauge statistically significant shifts in key indicators, and then endeavor to determine whether those shifts are due, at least in part, to the intervention.
Methods:
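To make the pre-post comparison concrete, the shift in a key indicator between a benchmark wave and a post-launch wave is often assessed with a two-proportion z-test. The sketch below is purely illustrative; the respondent counts are invented and do not come from the Ad Council study.

```python
# Hypothetical sketch: testing whether a key indicator (e.g., a self-reported
# behavior) shifted significantly between a pre-launch benchmark wave and a
# post-launch wave, using a two-proportion z-test. All figures are invented.
from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Return (z, two-sided p) for the difference between two sample proportions."""
    p_pool = (x1 + x2) / (n1 + n2)                 # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (x2 / n2 - x1 / n1) / se
    # Two-sided p-value via the normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Benchmark wave: 420 of 1,500 respondents report the target behavior (28%);
# post-launch wave: 495 of 1,500 (33%).
z, p = two_proportion_z_test(420, 1500, 495, 1500)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real shift, not sampling noise
```

In practice such a test is only the starting point; weighting, wave timing, and question-order effects all complicate the inference.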
This presentation draws on the author's experience leading the Ad Council's omnibus continuous campaign tracking survey, which tracks awareness, attitudes, and behaviors relating to 24 national US social marketing campaigns on an ongoing basis. It is a complex, massive study, with more than 490,000 survey responses collected since 2014. Additional background details on the study methodology are available upon request.
Results:
Illustrative examples (charts, tables) drawn from study reports will demonstrate the benefits of such a robust research approach and provide context for basic best practices in survey methodology, mode, sampling protocols, and questionnaire design. The presentation will also describe how bias can infect survey research, both through respondents' social-desirability bias while participating in surveys and through analysts' own biases when interpreting results.
Conclusions:
Tracking surveys can provide enormously valuable insight into how an intervention is “working.” But surveys have their drawbacks: they are increasingly expensive and challenging to implement, they are not particularly sensitive in registering effects when the target audience is very large, and they can be rife with social bias. When well planned and well integrated into a broader evaluation framework, however, they can be invaluable not only for measuring but also for optimizing a health communication initiative.
Implications for research and/or practice:
See Conclusions section.
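One practical implication of the sensitivity caveat in the Conclusions can be quantified: the smallest pre-post shift a survey can reliably detect shrinks only with the square root of the sample size, so the small per-person shifts typical of very large, diffuse audiences are hard to register. The sketch below is a standard minimum-detectable-effect approximation with invented, illustrative figures, not numbers from the Ad Council study.

```python
# Hypothetical sketch: approximate minimum detectable effect (MDE) for a
# pre/post comparison of proportions, at alpha = 0.05 (two-sided) with ~80%
# power, assuming equal wave sizes and baseline variance. Figures are invented.
from math import sqrt

def mde_two_proportions(p_baseline, n_per_wave, z_alpha=1.96, z_power=0.84):
    """Return the smallest detectable shift (as a proportion) between two waves."""
    se = sqrt(2 * p_baseline * (1 - p_baseline) / n_per_wave)
    return (z_alpha + z_power) * se

# With a 30% baseline, larger waves detect progressively smaller shifts,
# but only at a square-root rate:
for n in (500, 1500, 5000):
    print(f"n = {n}: MDE ~ {mde_two_proportions(0.30, n) * 100:.1f} pts")
```

Under these illustrative assumptions, a 1,500-respondent wave can only reliably detect a shift of roughly five percentage points, which is one reason modest campaign effects diluted across a very large audience may go unregistered.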