Stop doing horrible 360s.

Most 360s that I have encountered in organizations are poorly designed. They ask the reviewer to provide objective, holistic feedback on an individual.

They ask things like, “Is this person a great coach and mentor for their team?” and ask the reviewer to rate the person on a scale of 1 to 5, with 1 meaning never and 5 meaning always.

There are a few issues here:

  1. Being a coach and a mentor are two very different things. It is possible that a person is good at one and not good at the other.
  2. What would it mean to “always” coach and mentor? If that is even possible, it wouldn’t always be appropriate.
  3. It also assumes that the reviewer has a superhuman ability to know precisely how this individual is with their team at all times.
  4. There is also a more technical challenge: Likert-style responses are treated as scalar when they are categorical. The distance between always and often is not the same as the distance between sometimes and seldom; each number represents a distinct category and should be treated as such. Yet when these diagnostics are reported, the responses are averaged into a score, which is not appropriate. For example, one person saying you always coach and another saying you never coach is not the same thing as two respondents saying you sometimes coach, even though the average would make you think it is. You are showing up differently for two people, and you can’t average between their experiences (see the sketch after this list).

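To make that last point concrete, here is a minimal sketch (using made-up responses, not data from any real 360) of two sets of answers that produce identical averages but describe very different experiences:

```python
from statistics import mean

# Hypothetical responses on a 1 (never) to 5 (always) scale.
split_experience = [1, 5]    # one respondent says never, another says always
shared_experience = [3, 3]   # both respondents say sometimes

print(mean(split_experience))   # 3.0
print(mean(shared_experience))  # 3.0 -> the average hides the difference
```
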
While the first issue can be addressed by splitting the question into two (one for coaching and one for mentoring), that only worsens the second issue (how can you always be coaching AND mentoring) and doesn’t alleviate the third issue.

The third issue can be fixed by reframing the question to relate to the individual’s experience.

If you took all of the above advice, the question would become, “This person coaches me when it is appropriate to do so.”

However, this would still leave the final issue of how these questions are analyzed and the limitations of a Likert scale.

This can be solved by showing the results in buckets rather than as averages: three respondents answered 5, and two answered 2.
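
As a hedged illustration of that reporting style, the sketch below (again with hypothetical responses) groups answers into per-category counts instead of collapsing them into a mean:

```python
from collections import Counter

# Labels for each point on the 1 (never) to 5 (always) scale.
LABELS = {1: "never", 2: "seldom", 3: "sometimes", 4: "often", 5: "always"}

def bucket_report(responses):
    """Count how many respondents chose each category instead of averaging."""
    counts = Counter(responses)
    return {LABELS[score]: counts[score] for score in sorted(counts)}

# Five hypothetical answers to a single question.
print(bucket_report([5, 5, 5, 2, 2]))
# {'seldom': 2, 'always': 3}
```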

Even with all this, there are issues. The subject won’t know the context that caused the answers. They may learn that two team members don’t feel they are coaching when appropriate, but it is hard to make it actionable without context. There are also ethical concerns that we will get into below.

You may think that professional 360 products like the current coaching favorite, the Leadership Circle Profile (LCP), will solve all these issues. Sadly, they won’t, as the LCP is similarly flawed (as well as having some problems of its own). To learn more, and to see what we can do about it, let’s dig deeper.

Why do 360s?

360s are a valuable tool to help individuals understand what is working and what isn’t. They can help an individual gain other perspectives on their work. This is what I would consider a development 360. The purpose is to help an individual grow and understand how their work and behavior impact others.

Sadly, 360s are often used for other purposes. It is not uncommon to require a 360 for promotions or as a precursor to putting someone on a performance plan, and there are serious ethical challenges with these approaches.

Promotion 360

Your manager is up for promotion, and you are asked to complete a 360 review on your experience working for them. All feedback will be anonymous.

What do you do? If your manager is promoted, that will create more opportunities for you. If your manager isn’t, they may be bitter to hear that negative feedback from their team held them back, and while your feedback is anonymous, you fear repercussions.

Performance Review 360

Your team member is struggling, but you like them and have a close relationship. You are asked to provide feedback and know that your responses could mean that they end up on a performance plan or let go.

You want to help your teammate improve but don’t want to make things tough for them. You know that being too critical in your feedback could cause a rift in your relationship.

The Performance Review 360 is often even worse in that the questions are not designed to learn more about how an individual performs but to confirm the existing opinions of those running the 360. You can often spot the signs of this when the review is heavily weighted towards specific competencies and the questions look for critical rather than positive or neutral feedback.

In both cases, the respondent is put in a tough situation. While the promise of anonymity should make it easier to ignore your concerns, it often doesn’t.

Even when the extremes of these two cases are removed, you create this ethical dilemma when you ask one set of people to judge another. As we will see, there is a better way. But before we get there, let’s discuss the Leadership Circle Profile.

Leadership Circle Profile

The Leadership Circle Profile (or LCP) was created by Bill Adams and Bob Anderson and was explained in their book Mastering Leadership. It is, like most assessments aimed at coaches, an expensive product. Coaches pay thousands of dollars to be certified in administering the 360 and then pay hundreds of dollars for each 360 they run.

The LCP is based on Robert Kegan’s adult development theory. This theory can be helpful in specific contexts and when used in certain ways, but many have pointed out its dangers (including comparisons to eugenics). I won’t go into the details of that critique here, but plenty of information is available from Dave Snowden, Nora Bateson, and others.

Setting aside the theoretical foundation and the fact that this survey only looks at 2 of the 4 Kegan levels applicable to adults, the LCP is as flawed as most other 360 formats.

It consists of 124 questions using a Likert scale of 1-5, where 1 means never and 5 means always. However, they have made the interesting decision to allow half measures (1.5, 2.5…), meaning that each question has nine options, plus the ability to say not applicable.

From this, the survey produces a lovely circle chart that shows the user’s self-rating measured against the average response from other respondents. In addition, there is a deeper walkthrough of the results produced.

This should sound pretty similar to that homegrown 360 I mentioned previously. It follows the same general model, and each question focuses on one attribute, for example, “This leader is a calming influence in difficult situations” or “This leader dictates rather than influences what others do.”

If you have read this far, you know these questions are not great.

The issues with the questions are similar to what we have seen before:

  1. It asks the respondent to answer without context and to average their experiences.
  2. It asks for a universal perspective rather than a personal impression. A respondent can’t know whether others experience the subject as calming, as people may respond to the subject differently.
  3. It assumes that always doing something is the best action. Should you always be calm in difficult situations? It depends on what the context requires. Sometimes people need to feel calm; other times, an invigorating presence is better. Assuming that one approach works in all situations is a flawed notion. Is influencing always better than dictating? Most of the time, yes, but in a true crisis, dictating is often the better approach.
  4. It is also clear from each question what a good and a bad response is, which means that the ethical questions of casting judgment on others also apply.
  5. Finally, when transposed to the circle, the Likert format has the same challenge of treating categorical data as scalar.

This doesn’t mean that these styles of 360s are useless. They can provide high-level directional feedback that can be useful. However, it raises the question of whether these 360s are worth the hype and cost.

OK, 360s are broken, but what can we do?

There are options. Ideally, you want to capture context-rich narratives about the subject that avoid judging the individual.

If you must do a Likert-based survey (a sketch of such a question follows this list):

  1. Ensure each question only asks about one behavior.
  2. Ask the question about the individual’s experience, not about a general characteristic of the subject.
  3. Have the question focus on whether the behavior is appropriate in context.
  4. Collect comments to provide extra context for each answer.

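Putting those four points together, a question record might look something like the hypothetical sketch below; the field names and wording are illustrative, not taken from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class SurveyQuestion:
    # One behavior, phrased from the respondent's own experience and
    # anchored to whether the behavior was appropriate in context.
    prompt: str = "This person coaches me when it is appropriate to do so."
    # The Likert labels stay as categories; nothing here invites averaging.
    scale: dict = field(default_factory=lambda: {
        1: "never", 2: "seldom", 3: "sometimes", 4: "often", 5: "always"
    })
    # A free-text field so every answer carries the context behind it.
    comment_prompt: str = "What recent situation shaped your answer?"

question = SurveyQuestion()
print(question.prompt)
```
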
Generally, open-ended questions are a decent approach if you must do a survey. Make sure questions ask for both the positive and the negative.

  1. Asking what a person should start doing, stop doing, and keep doing is a simple formulation that can provide alternative perspectives.
  2. Ask about positive and challenging experiences that the respondent has had with the subject.

An even better approach is an interview-based process, where you can ask each respondent what it is like working with the subject and then drill in and ask for more details, focusing on stories and actual events and avoiding summaries and judgments.

Another option is to use a narrative collection tool like SenseMaker® to have respondents share stories of the subject and signify those stories. I have experimented with this approach, and it shows a lot of promise compared to standard 360s, but doing it well requires ongoing effort from respondents, which can make it challenging.

Just because an approach is not perfect doesn’t mean it can’t be valuable, but understanding the shortcomings of any assessment is critical to getting the most out of it.