4 Tips on Great Survey Design
Whether they pop up while you're perusing an e-commerce site or land in your inbox after your bumpy flight in from Chicago, surveys are used in many different industries to gauge customer satisfaction and glean insight into user motivations. They are a useful tool in the kit of any user experience designer, or anyone involved in improving the usability of a product.
Surveys seem deceptively easy to create, but the reality is that there is an entire industry and an academic field based on survey design.
So where do we begin?
We spoke to Jean Fox, a research psychologist at the Bureau of Labor Statistics (BLS), and came up with this list of four basic things to remember when designing your survey:
1. Beware “Homegrown” Surveys
Although it may be tempting to believe you can cook up a simple “yes” or “no” survey in five minutes, it’s best to stick with tried-and-true formats. Without a survey that has been through rigorous testing to make sure it isn’t biased, you’ll have a hard time trusting your results. Are your results telling you something meaningful about the topic you are testing, or are they mostly an artifact of how you designed the survey? You’ll never know. It’s difficult to ask a good question, and even more difficult to predict how a participant will interpret that question (here are some things to know before you design and test a survey).
A good example of a survey format that has stood the test of time since its introduction in 1986 is the System Usability Scale (SUS). This post-test survey is composed of a 10-item questionnaire with five response options and has been cited in over 1,200 studies and counting. By taking advantage of existing research and formats such as the SUS, you can ensure that your surveys are methodologically sound.
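For a sense of how SUS results are typically reported: responses to the ten items are conventionally converted into a single 0–100 score. The short Python sketch below shows that standard scoring arithmetic (the function name and the example responses are ours, purely for illustration).

```python
def sus_score(responses):
    """Convert ten SUS responses (1-5 Likert values) to the conventional 0-100 score.

    Odd-numbered items are positively worded, so each contributes (value - 1);
    even-numbered items are negatively worded, so each contributes (5 - value).
    The summed contributions (0-40) are multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses, each between 1 and 5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # 0-based index: even i = odd-numbered item
    return total * 2.5

# Example: a fairly positive set of responses
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```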
2. Decide Whether to Use Post-Task vs. Post-Test Surveys
As the names imply, post-task surveys are completed immediately after a participant finishes a task, while post-test surveys are given at the end of the entire test. Participants may give more detailed descriptions or feedback in a post-task survey because the task is still fresh in their minds. A post-test survey will capture more general impressions and is more likely to include open-ended questions. In deciding whether to use one or both in your testing, consider your time constraints and what kind of feedback is most important to you.
3. Understand What Makes a Survey Item Unipolar or Bipolar
Different concepts lend themselves to different survey designs. There are two types of scales you should be aware of, bipolar and unipolar, and you’ll see both out in the world.
A bipolar scale is anchored on either side by opposing concepts, for example “extremely satisfied” to “extremely dissatisfied.”
A unipolar scale represents varying degrees of a concept, from “none” to “a lot” of that concept. There are no natural opposites for this type of scale. If you are measuring how important something is, the scale might be ordered as “not at all important” to “very important.”
Depending on the topic, one approach is likely to be more appropriate than the other. The type of scale you use will influence the terms you use for the scale points.
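As an illustration, here is one common way the two scale types might be labeled for a five-point response set; the exact wording below is just an example we chose, not a prescribed standard.

```python
# Illustrative five-point response labels for each scale type.
bipolar_satisfaction = [
    "Extremely dissatisfied",
    "Somewhat dissatisfied",
    "Neither satisfied nor dissatisfied",  # bipolar scales pivot around a neutral midpoint
    "Somewhat satisfied",
    "Extremely satisfied",
]

unipolar_importance = [
    "Not at all important",
    "Slightly important",
    "Moderately important",
    "Very important",
    "Extremely important",  # unipolar scales run from none of the concept to a lot of it
]
```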
4. User Test Your User Surveys!
Also known in the field of survey methodology as “cognitive interviewing,” testing your survey questions with users before the survey is released into the wild can help you evaluate the design. You want to check that participants are responding in expected ways and that you are able to measure what you set out to measure. Unless you are using a validated survey, cognitive interviewing should be part of your survey design process.

Georgia Gallavin is in her last semester at The New School in New York City, earning an MA in Media Studies. She interned with the User Experience Program at GSA this summer.