A pragmatic survey-based approach to message testing: learnings from WellSpring and PowerUp with Plants

Jennifer Dinh, MPH1, Maren Henderson, MPP2, Jeanette Ziegenfuss, PhD2, Meghan JaKa, PhD2, Laura Jacobson, MPH2, Thomas Kottke, MD, MSPH3, Hikaru Peterson, PhD4, Katy Ellefson, RD, CD3, Stephanie Kovarik, RDN, LD3, Andrea Anderson, MPH3 and Marna Canterbury, MS, RD3, (1)HealthPartners Institute, Bloomington, MN, (2)HealthPartners Institute, (3)HealthPartners, Bloomington, MN, (4)University of Minnesota, St Paul, MN

Theoretical Background and research questions/hypothesis: Well-crafted health messages are useless if they do not engage the audience. For two different health and well-being initiatives, our integrated health system’s community health team wanted to understand which headlines piqued individuals’ interest and led them to want to learn more about the topic. Grounded in the Transtheoretical Model (i.e., stages of change), we designed pragmatic survey-based evaluations to test message performance for these two initiatives.

Methods: For each initiative, we fielded web and/or paper surveys containing up to 24 drafted messages nested within up to 4 constructs, administered to both convenience and random samples. Messages were designed to resonate with people in the contemplation, preparation, and action stages of change. Using this pretesting approach, we asked respondents, for each message, to “imagine that the following statements are headlines. How unlikely or likely are you to watch, listen, or read more about the information that might follow each?” on a four-point scale from “very unlikely” to “very likely”. If individuals selected “very unlikely”, we asked them to explain why. Message order was randomized within each construct-specific matrix. During analysis, each response option was assigned a numeric value. For each message, we calculated the mean of all responses and used one-sample t-tests to determine which messages performed better or worse than the scale midpoint. Message scores were then averaged within each construct to identify which constructs performed best.
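
Because the scoring procedure is central to this approach, a minimal sketch of it may be useful. The Python below is illustrative only, not the authors' analysis code: the 1–4 value coding, the 2.5 midpoint, the toy data, and all column names are assumptions for demonstration.

```python
# Minimal sketch of the message-scoring analysis described above.
# Assumptions (not stated in the abstract): responses are coded
# 1 ("very unlikely") through 4 ("very likely"), so the neutral
# midpoint is 2.5; the data layout and names are hypothetical.
import pandas as pd
from scipy import stats

MIDPOINT = 2.5  # neutral value under the assumed 1-4 coding

# Long-format toy data: one row per (respondent, message) rating.
responses = pd.DataFrame({
    "construct": ["A"] * 6 + ["B"] * 6,
    "message":   ["m1", "m1", "m1", "m2", "m2", "m2",
                  "m3", "m3", "m3", "m4", "m4", "m4"],
    "value":     [4, 3, 4, 2, 1, 2, 4, 3, 4, 3, 2, 2],
})

# Per-message mean and one-sample t-test against the midpoint.
for (construct, message), grp in responses.groupby(["construct", "message"]):
    t, p = stats.ttest_1samp(grp["value"], MIDPOINT)
    mean = grp["value"].mean()
    verdict = "better" if mean > MIDPOINT else "worse"
    print(f"{construct}/{message}: mean={mean:.2f}, t={t:.2f}, "
          f"p={p:.3f} ({verdict} than midpoint)")

# Construct-level score: the average of its message means.
construct_scores = (
    responses.groupby(["construct", "message"])["value"].mean()
    .groupby("construct").mean()
)
print(construct_scores)
```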

Results: For one initiative, all messages performed better than the neutral midpoint (the null hypothesis value). For the other, seven messages performed better than the midpoint and one performed worse. There was little variation in construct scores in either initiative.

Conclusions: We used a pragmatic approach to test and identify preferred (and non-preferred) messages via self-report. Many messages performed similarly, however, so future work using this approach should aim to elicit more variation (e.g., by offering more response options). Findings from these surveys were used to create a “pocket guide” with well-being tips for one initiative, and to develop an educational website and a second round of message testing for the other initiative.

Implications for research and/or practice: This is a pragmatic, low-cost approach to message testing when objective measurement (e.g., tracking ad clicks) is not an option. It allowed us to capture relatively quick feedback from a local population that mirrors the population of focus for our messaging campaigns.