May 8, 2020
Along with the development of each new technological platform comes a series of questions designed to understand its ultimate impact on users’ wellbeing or performance. It’s like clockwork.
Does watching too much television rot your child’s brain? How much is too much when it comes to video games? Is our time spent on social media impacting our mental health?
These are all important questions, but how they are asked shapes the conclusions we can draw. It is well established that the most commonly used methods in this area of research, user self-reports and survey questions, are prone to error. Now, new research from collaborators at Georgia Tech, Facebook, and the University of Michigan has shed light on the nature of that error: whether users over- or underestimate their usage, which people and which questions are most error-prone, and more.
Error in the data, said School of Interactive Computing Ph.D. student Sindhu Ernala, can distort any inferences drawn from it.
“We know survey questions have several well-documented biases,” Ernala said. “People may not remember correctly. They can’t keep track of their time. They remember recent things more accurately than those further in the past. All of this matters because error in measurement might impact the downstream inferences we make. Accurate assessment of social media use is critical because of the everyday impact it has on people’s lives.”
Indeed, Ernala and her collaborators found that these biases held up across many surveys. In a paper accepted to the 2020 ACM Conference on Human Factors in Computing Systems (CHI), they selected 10 of the most common survey questions from prior literature that investigate time spent on Facebook. The questions varied in format: open-ended or multiple choice, and asking about frequency of visits or total time spent. They then put all 10 questions to 50,000 randomly sampled users in 15 countries around the world.
With the self-reported data in hand, they compared it against Facebook’s actual server logs to see how it stacked up. People most often overestimated the time they spent on the platform and underestimated the number of times they visited. The error was even larger among 18- to 24-year-olds, an age range that university research commonly samples.
“This is important, because a lot of our research is done with these age samples,” Ernala said.
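For a concrete feel for this kind of comparison, here is a minimal sketch of measuring the gap between a self-report and the corresponding server log. The field names, sample values, and log-ratio error metric are illustrative assumptions, not the paper’s exact analysis.

```python
import math

def log_ratio_error(self_reported: float, logged: float) -> float:
    """Positive values mean overestimation; negative mean underestimation."""
    return math.log(self_reported / logged)

# Hypothetical participants: minutes/day self-reported vs. server-logged.
participants = [
    {"self_reported_minutes": 120, "logged_minutes": 85},
    {"self_reported_minutes": 30,  "logged_minutes": 45},
]

for p in participants:
    err = log_ratio_error(p["self_reported_minutes"], p["logged_minutes"])
    direction = "over" if err > 0 else "under"
    print(f"{direction}estimated by a factor of {math.exp(abs(err)):.2f}x")
```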
With this information in mind, the researchers made a handful of recommendations to improve the data and, in turn, the research built on it:
- If you are investigating time spent, consider time-tracking applications as an alternative to self-reported measures. These include tools like Apple’s Screen Time feature or Facebook’s “Your Time on Facebook.”
- If a survey is still the right tool, as it often is, favor the question phrasings found to have the lowest error, or use multiple-choice rather than open-ended formats (a toy comparison follows this list).
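To make the second recommendation concrete, the sketch below ranks question phrasings by their mean absolute error against logged behavior. The phrasing labels and numbers are invented for illustration; in the study, each phrasing’s responses were validated against Facebook’s server logs.

```python
from statistics import mean

# (self_report, logged) pairs per question phrasing, in minutes/day.
responses = {
    "open_ended_minutes": [(120, 85), (30, 45), (200, 140)],
    "multiple_choice_buckets": [(90, 85), (45, 45), (150, 140)],
}

def mean_abs_error(pairs):
    return mean(abs(reported - logged) for reported, logged in pairs)

best = min(responses, key=lambda q: mean_abs_error(responses[q]))
print(best)  # -> multiple_choice_buckets, the lower-error phrasing here
```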
The researchers caution against using time-spent self-reports directly; instead, they suggest interpreting each report as a noisy estimate of where a user falls on a distribution of usage. When determining wellbeing outcomes, how users actually spend their time on the platform matters more than the raw total.
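As a rough illustration of the “noisy estimate” reading, the sketch below converts a reported figure into a within-sample percentile rank instead of taking the minutes at face value. The data and function are hypothetical, not the researchers’ implementation.

```python
def percentile_rank(value: float, sample: list[float]) -> float:
    """Percentage of the sample at or below `value`."""
    return 100.0 * sum(1 for v in sample if v <= value) / len(sample)

# Illustrative self-reports, in minutes/day.
self_reports = [15, 30, 45, 60, 90, 120, 180, 240]

# A report of 120 minutes places this user at the 75th percentile,
# i.e., a relatively heavy user within this sample.
print(percentile_rank(120, self_reports))  # -> 75.0
```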
“Social platforms change and user habits change over time,” Ernala said. “The questions now might not be the best questions five or 10 years from now. This is fluid, and we need to continue to look at this to make sure our past and future research is well-informed.”
She and her collaborators hope to contribute to this ongoing process by providing validated measures that can be reused across studies, while acknowledging that those measures may need to change as platforms and user habits evolve.