By Nils Reisen on 04.06.2020 | 4-minute read
The following argument is often made: if the survey is not anonymous, employees won’t dare tell the truth and very important information is lost. Is this true? When we developed Pulse, we wanted to get to the bottom of this question empirically and ran a little experiment.
To answer this question, we conducted a test survey on employee engagement in a large company. For our experiment, we selected pairs of very similar teams from various areas of the company. Within each pair, one team was randomly assigned to one of two groups, 1) “anonymous feedback” or 2) “open feedback,” and the other team was assigned to the remaining group. In total, roughly 270 employees in 28 teams were invited to take the survey.
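As a side note, this matched-pair randomization is easy to sketch in code. The snippet below is purely illustrative: the team names and the use of Python’s random module are our assumptions, not a description of how the assignment was actually done.

```python
import random

# Hypothetical pairs of similar teams; each pair is split at random
# between the two conditions, as described above.
team_pairs = [("Sales A", "Sales B"), ("IT A", "IT B"), ("HR A", "HR B")]

assignments = {}
for team_x, team_y in team_pairs:
    # Shuffle the pair so each team has a 50% chance of either condition.
    anonymous_team, open_team = random.sample((team_x, team_y), 2)
    assignments[anonymous_team] = "anonymous feedback"
    assignments[open_team] = "open feedback"

print(assignments)
```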
Group 1: anonymous feedback
In this group, the feedback was completely anonymous, as in the previous employee survey, and the employees were informed of this before the survey was launched.
Group 2: open feedback
The participants in this group were informed that their feedback would be openly visible within the team and that the results would be sent to them after the survey was completed.
The results were sent to each team individually: participants’ ratings of the individual questions were reported as team scores*, and the comments were displayed together with the names of the team members who wrote them.
Before the survey, both groups were informed which feedback would be visible in which form. They did not, of course, know that there were two different groups. Apart from the visibility of the results, there was no difference between the groups. All participants answered 26 questions on a scale from 1 (disagree) to 10 (fully agree), and the first question had two comment fields: “I like that” and “I wish that.” This is where it got exciting: were there differences in the answer tendencies between the two groups?
The ratings were more critical when the feedback was anonymous
There was indeed a difference: participants in the “anonymous feedback” group tended to give more critical answers, and their ratings were, on average, half a point lower on the 10-point scale than those in the “open feedback” group. Social desirability, and the knowledge that team members or supervisors could see their evaluations, seemed to influence the employees’ responses in the “open feedback” group.
Does this mean that anonymous surveys are better?
Our answer is a clear “no.” Since the effect of social desirability is less pronounced in anonymous surveys, these ratings may reflect the actual assessment of the situation more accurately. On the other hand, the evaluations might be overly critical if employees answer strategically and want to build up pressure through critical ratings.
Also, in our work, we have often observed a tendency to “let off steam” in anonymous surveys: anonymity encourages people to vent their frustration and to paint exaggerated pictures. Further studies are needed to determine which of these factors is responsible for the more critical evaluations, and to what extent.
In the end, however, the most important factor is how the results are actually used. The observed differences are too small to warrant a different conclusion. Regardless of whether a question was rated 6.5 or 6 on average, there is potential for improvement, which makes a deeper examination of the topic necessary. Based on the ratings alone, it is very difficult to identify the drivers of dissatisfaction. Only with open feedback and in-depth discussion of the points raised is it possible to understand the causes behind the ratings, draw the right conclusions, and thus set the right levers for improvement in motion. We have already described this in detail elsewhere.
In our experience, a constructive and open feedback culture must first develop, and this takes time. We ran this test before the very first Pulse survey. Since then, the feedback culture in this company has developed considerably (details will follow in another article). So it may well be that the same test would turn out differently today.
*The score is calculated as follows: each rating is assigned to one of three groups, “Improve” (1 to 6), “Neutral” (7 and 8), or “Keep it up” (9 and 10). The score is the percentage of participants in the “Keep it up” group minus the percentage of participants in the “Improve” group (the results are shown without “%”). The lowest possible score is −100, the highest +100.
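To make the footnote concrete, here is a minimal Python sketch of the score calculation. The function name and the example ratings are ours, and the article does not specify any rounding or display conventions beyond dropping the “%” sign.

```python
def team_score(ratings):
    """Team score from individual ratings on the 1-10 scale, per the footnote.

    "Improve":    ratings 1-6
    "Neutral":    ratings 7-8
    "Keep it up": ratings 9-10

    Score = percentage "Keep it up" minus percentage "Improve" (-100 to +100).
    """
    if not ratings:
        raise ValueError("at least one rating is required")
    keep_it_up = sum(1 for r in ratings if r >= 9)
    improve = sum(1 for r in ratings if r <= 6)
    return 100 * (keep_it_up - improve) / len(ratings)

# Example: in a team of four, two ratings fall in "Keep it up" (50%),
# one in "Improve" (25%), and one in "Neutral" -> score 50 - 25 = 25.
print(team_score([9, 10, 5, 7]))  # 25.0
```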