Senior Educational Consultant
June 17, 2013
By Steve Benton
From time to time instructors raise concerns about what they call “survey fatigue.” They fear that college students are asked to complete too many surveys and may be reluctant to respond to requests to rate another course. Moreover, some worry that the 47-item IDEA Diagnostic Form (DF) is too lengthy—that students will not finish the entire instrument.
We understand these concerns, which is why we regularly track student response rates. The percent of students who respond to the invitation to complete IDEA student ratings is high for both the DF and Short Form (SF). In the 2012 research database, the response rates were nearly identical for the paper format: 81% for the DF (86,319 classes) and 80% for the SF (32,981 classes). For the online version, the DF had a slightly higher response rate (67%) than the SF (62%). So, contrary to expectations, students are somewhat more likely to respond online to the longer DF than to the SF.
But what effects do survey length and format have on student non-response rates to individual items? Does the percent of students not responding to a given item increase as the survey lengthens? In other words, do they become “fatigued” as they go through the instrument? Moreover, are non-response rates to items that come at the end of the lengthier DF much higher than those at the end of the SF? Finally, does administering the surveys online versus on paper make a difference?
To answer these questions, we looked at the percent of missing values for each item on the DF and SF in approximately 3 million individual student ratings. The good news is that the percent of non-respondents to each item, on average, is about 1% across both the DF and SF. That means that in a class of 100 students, on average no more than about 1 student would fail to respond to a given item.
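The per-item calculation described above can be sketched in a few lines of Python. This is an illustrative example only, with made-up data and a hypothetical function name, not the actual IDEA analysis: for each item position, count the share of students who left that item blank.

```python
# Hypothetical sketch of a per-item non-response calculation.
# Each student's responses are a list, with None marking a blank item.
# Data and item counts are illustrative, not from the IDEA database.

def item_nonresponse_rates(responses, n_items):
    """Return the percent of students who left each item position blank."""
    n_students = len(responses)
    rates = []
    for i in range(n_items):
        missing = sum(1 for r in responses if r[i] is None)
        rates.append(100.0 * missing / n_students)
    return rates

# Example: 4 students rating 3 items; one student skipped item 3,
# another skipped items 2 and 3.
data = [
    [5, 4, 3],
    [4, 5, None],
    [3, 3, 4],
    [5, None, None],
]
print(item_nonresponse_rates(data, 3))  # -> [0.0, 25.0, 50.0]
```

Plotting these rates against item position is what reveals whether non-response drifts upward toward the end of the form, which is the "fatigue" pattern the analysis looks for.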
However, there is a very slight trend toward lower response as the item number increases. Specifically, the percent of non-response rises slightly, to about 2%, on the last few items of the DF. But those last few items are for the most part experimental and do not affect the overall instructor or course evaluations.
Across both online and paper formats, the number of unanswered items is quite small throughout both questionnaires. So even though non-response is slightly higher toward the end of the DF, it is not likely to be of any practical importance: out of 100 students, on average no more than 2 choose not to respond to the last few items.
The bottom line is that users of IDEA need not be overly concerned that “survey fatigue” adversely affects student ratings.