Response latency has been widely used as a cognitive method to investigate how respondents answer survey questions, and also as a method to identify problems with survey questions (e.g., flawed or poorly worded questions) or errors associated with respondents’ behaviors (e.g., faking). However, little attention has been paid to using response latency to investigate respondents’ motivations, in particular to detect suboptimal responding behaviors. Responding with reduced cognitive effort is referred to as satisficing. The present study used response latency to identify satisficers and also used a forced-time survey structure to encourage participants to respond optimally.
In addition, the study was designed to test the assumption that the construct validity (factor structure and model fit) of an instrument would improve if the researcher could encourage participants to respond optimally by discouraging them from satisficing. It was also designed to test a second assumption that construct validity would improve if the responses from satisficers were removed from the data set. To test the first assumption, a sample of 1,180 undergraduate students (sophomores, juniors, and seniors) was assigned to one of two groups: a not-forced-time group and a forced-time group. In the not-forced-time group, satisficing behavior was identified by capturing the response latency for each item. In the forced-time group, respondents were only able to progress at a rate that would encourage optimal responding to the items. The factor structures and model fit of data from both groups were analyzed and compared. For the second assumption, the researcher identified and removed satisficing responses from the not-forced-time group. The factor structures and model fit of the data set with and without satisficing responses were compared.
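The latency-based identification of satisficers described above can be sketched in code. This is a minimal illustration, not the study's actual procedure: the per-item latency floor (1.5 seconds here), the function name, and the example data are all assumptions chosen for demonstration.

```python
# Hypothetical sketch: flagging likely satisficers from per-item
# response latencies, assuming respondents whose typical (median)
# latency falls below a minimum plausible reading/answering time
# are responding with reduced effort. The 1.5 s floor is illustrative.

from statistics import median

MIN_LATENCY_SEC = 1.5  # assumed floor for a considered response (not from the study)

def flag_satisficers(latencies_by_respondent, min_latency=MIN_LATENCY_SEC):
    """Return IDs of respondents whose median per-item latency is below the floor."""
    flagged = []
    for respondent_id, latencies in latencies_by_respondent.items():
        if median(latencies) < min_latency:
            flagged.append(respondent_id)
    return flagged

# Example with made-up latencies (seconds per item):
data = {
    "r001": [3.2, 4.1, 2.8, 3.5],  # deliberate responding
    "r002": [0.6, 0.8, 0.5, 0.7],  # rapid clicking, likely satisficing
}
print(flag_satisficers(data))  # ["r002"]
```

Flagged respondents could then be excluded before re-estimating the factor model, mirroring the comparison of data sets with and without satisficing responses.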
The findings indicated no substantial difference in the factor structures and model fit of the attitude scales between the two groups. The forced-time program, which was designed to encourage participant effort, did not appear to enhance the quality of the survey data. The results also indicated that the forced-time program had no meaningful effect on the completion rate (i.e., breakoff). However, there was evidence of improvement in construct validity after satisficers were removed from the not-forced-time group. In other words, minimizing measurement errors caused by respondents’ satisficing behavior enhanced survey data quality. Based on these findings, response latency can be used as an indicator to detect lack of participant effort (satisficing) in a web survey and, to some extent, to enhance the quality of the survey data.