A normal employee and an anonymous employee give feedback

By Nils Reisen on 04.06.2020 | 4-minute read

The following argument is often made: if a survey is not anonymous, employees won’t dare to tell the truth, and important information is lost. Is this true? When we developed Pulse, we wanted to get to the bottom of this question empirically, so we ran a small experiment.

To answer this question, we conducted a test survey on employee engagement in a large company. For our experiment, we selected pairs of very similar teams from various areas of the company. From each pair, one team was randomly assigned to one of two groups: 1) “anonymous feedback” or 2) “open feedback”; the other team was assigned to the remaining group. In total, roughly 270 employees in 28 teams were invited to the survey.
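As a rough illustration, the pair-wise random assignment could look like this in code (a minimal Python sketch; the team names are made up, and the article does not describe the actual procedure used):

```python
import random

# Hypothetical pairs of very similar teams (names are illustrative only).
team_pairs = [("Sales A", "Sales B"), ("IT A", "IT B"), ("HR A", "HR B")]

assignments = {}
for pair in team_pairs:
    # Randomly pick which team of the pair gives anonymous feedback;
    # its counterpart automatically lands in the open-feedback group.
    anonymous_team, open_team = random.sample(pair, 2)
    assignments[anonymous_team] = "anonymous feedback"
    assignments[open_team] = "open feedback"

print(assignments)
```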

Group 1: anonymous feedback

In this group, the feedback was completely anonymous, as in the previous employee survey. The employees received the following information before the survey was launched:

Screenshot text: “Your data will be treated confidentially. The survey and analyses are carried out by an independent market research institute, which provides us with the data only in anonymised form.”

Info page shown to the “anonymous feedback” group (actual presentation in the experiment differed visually; company information removed)

Group 2: open feedback

The participants in this group were informed that their feedback would be openly visible within the team and that the results would be sent to them after the survey was completed.

Screenshot text: “With Pulse we want to learn from each other. The best way to do this is for everyone to see your feedback: your ratings are displayed as team scores, your comments with your team membership. Members of your team will also see your comments with your name and photo.”
Info page shown to the “open feedback” group (actual presentation in the experiment differed visually; company information removed)


The results were sent to each team individually. Participants’ ratings of the individual questions were reported as team scores*. In addition, the comments were displayed with the names of the respective team members.

View of a Pulse comment

Comment presentation in the “open feedback” group (fictitious data; actual presentation in the experiment differed visually and omitted the photo)

Before the survey, both groups were informed which feedback would be visible in which form. But, of course, they did not know that there were two different groups. Apart from the visibility of the results, there was no difference between the groups. All participants answered 26 questions on a scale from 1 (disagree) to 10 (fully agree). For the first question, there were two comment fields: “I like that” and “I wish that.” This is where it got exciting: did the response tendencies differ between the two groups?

The ratings were more critical if the feedback was anonymous

There was indeed a difference. Participants in the “anonymous feedback” group tended to give more critical answers: on average, their ratings were half a point lower on the 10-point scale than in the “open feedback” group. Social desirability and the knowledge that team members or supervisors could see their ratings seemed to influence the employees’ responses in the “open feedback” group.

Does this mean that anonymous surveys are better?

Our answer is a clear “no.” Since the effect of social desirability is less pronounced in anonymous surveys, those ratings may reflect the actual assessment of the situation more accurately. On the other hand, the ratings might be overly critical if employees answer strategically, using critical ratings to build up pressure.

Also, in our work, we have often observed a tendency to “let off steam” in anonymous surveys. In other words, anonymity encourages people to vent their frustration and paint exaggerated pictures. Further studies are needed to determine which of these factors is responsible for the more critical ratings, and to what extent.

In the end, however, the most important factor is how the results are actually used. The observed differences are too small to favour one approach over the other. Regardless of whether a question was rated 6.5 or 6 on average, there is potential for improvement, which makes a deeper examination of the topic necessary. Based on the ratings alone, it is very difficult to identify the drivers of dissatisfaction. Only through open feedback and in-depth discussions of the points raised is it possible to understand the causes behind the ratings, to draw the right conclusions, and thus to set the right levers for improvement in motion. We have already described this in detail elsewhere.

In our experience, a constructive and open feedback culture must first develop, and that takes time. We ran this test before the very first Pulse survey. Since then, the feedback culture in this company has developed considerably (details to follow in another article). So it may well be that the same test would turn out differently today.

*The score is calculated as follows: the ratings for each question are assigned to one of three groups: “Improve” (1 to 6), “Neutral” (7 and 8) and “Keep it up” (9 and 10). The score is the percentage of participants in the “Keep it up” group minus the percentage of participants in the “Improve” group (the results are shown without the “%” sign). The lowest possible score is −100, the highest possible +100.
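For illustration, here is a minimal Python sketch of this score calculation (the function name and the rounding of fractional percentages are our own assumptions; the article does not specify them):

```python
def pulse_score(ratings):
    """Compute a team score from ratings on a 1-10 scale.

    Ratings 1-6 count as "Improve", 7-8 as "Neutral" and
    9-10 as "Keep it up". The score is the percentage of
    "Keep it up" ratings minus the percentage of "Improve"
    ratings, so it always falls between -100 and +100.
    """
    if not ratings:
        raise ValueError("at least one rating is required")
    keep_it_up = sum(1 for r in ratings if r >= 9)
    improve = sum(1 for r in ratings if r <= 6)
    return round(100 * (keep_it_up - improve) / len(ratings))

# Example: 1 of 6 ratings is "Keep it up", 3 of 6 are "Improve"
print(pulse_score([4, 6, 7, 8, 9, 5]))  # 17% - 50% -> -33
```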
