In selling (marketing, sales, customer experience) and business in general, data, measures, metrics, and KPIs are important and, sometimes, helpful. It’s become fashionable to quote Goodhart’s Law, which states, “When a measure becomes a target, it ceases to be a good measure.”
Well… yes… but what does that mean?
When you think about it, Goodhart’s statement is less about measures and more about human behavior.
Let’s start at the beginning.
A measure is simply that. It’s a number or indicator of some sort. For example, a measure may be “I made 50 calls,” “I sent 100 emails,” or “I had 50 ‘likes.’”
A measure doesn’t imply goodness or badness; it is just a number or indicator.
A metric starts putting context around a measure.
A metric might put a time element around a measure, for example, “I make 50 calls a day.”
Like a measure, a metric doesn’t imply goodness or badness. It just gives us a better understanding of a measure.
For various reasons (some good and necessary, some bad but perhaps also necessary), we put some sort of value around measures and metrics.
For example, as human beings, we tend to think bigger is better. So we might look at, “Dave made 50 calls a day, and Joe made 100 calls a day.” We would tend to judge that Joe did much better than Dave. And that judgement would be fair, if all we care about is calls.
Goals are an “end to which we direct effort.” So we, through various processes, develop goals for our metrics. In doing this, we start establishing goodness and badness: achieving the goal is good, not achieving it is bad. So if we set a goal of 100 calls per day, Joe is clearly doing well and Dave is not.
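To make the measure, metric, goal progression concrete, here’s a minimal sketch in Python. It reuses the illustrative numbers above (Dave at 50 calls a day, Joe at 100); the five-day week and the way the goal check is written are my own assumptions, purely for illustration.

```python
# Minimal sketch of measure -> metric -> goal (illustrative numbers only).

# Measure: a raw count, with no judgement attached.
calls_this_week = {"Dave": 250, "Joe": 500}

# Metric: the same count placed in context, here calls per working day.
WORKING_DAYS = 5
calls_per_day = {rep: total / WORKING_DAYS for rep, total in calls_this_week.items()}

# Goal: a target that introduces "good" and "bad".
GOAL_CALLS_PER_DAY = 100

for rep, rate in calls_per_day.items():
    status = "meeting goal" if rate >= GOAL_CALLS_PER_DAY else "below goal"
    print(f"{rep}: {rate:.0f} calls/day ({status})")
```

The measure and the metric stay neutral; only the goal line introduces a judgement about Dave and Joe.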
In establishing these goals, we want people to figure out how to achieve them. We want to train them, giving them the skills to achieve the goals; we want to give them tools; and we want to coach them to achieve the goals.
When people don’t achieve the goals, ideally we want to understand why and what needs to be done to correct things. Sadly, too often we don’t do this; we just point out the obvious, expecting improvement. It might look like this: “Dave, your call performance sucks. Fix it or I’ll find someone else.”
Goals are good. They give us direction, help us understand what we need to achieve, and enable us to measure progress. They drive our behaviors, sometimes positively, sometimes in unintended ways. I’ll get back to this later.
But now we start seeing some challenges with our measures and metrics. For example, what is a “call”? Is it a dial or attempt? Does it include a voicemail? Is it a conversation? Is it some type of conversation? For example, a conversation that ends in “F**k you! Take me off your lists” is clearly different from “Tell me more.”
Implicit in our metric is an outcome, but too often we don’t clearly define that outcome. Too often our metric focuses on effort, not impact. When we put a metric in place, we have to be very clear about what it means and about which outcomes from our activities actually satisfy it.
Now here’s where human nature and behaviors come back into play. People have a tendency to do things in the simplest way possible to achieve their goals. Stated differently, they game the system.
And, in reality, we want them to game the system. It is their conscious or unconscious way of simplifying things, of becoming more efficient. But we want them to game the system in a way that produces the outcomes we want.
As an example, a call center client put in place a metric around the number of conversations a day. They set a goal for these conversations, counting only conversations that lasted 90 seconds or more. After a couple of weeks, we noticed one individual was consistently overachieving his goal. Since the calls were recorded, we listened to them, thinking we could learn what he was doing and train others to do the same thing. He kept every call to prospects and customers to roughly 90 seconds, creating an excuse: “Can I call you right back?” Where we thought he was having calls with separate prospects (which was our intent), he was actually having longer conversations with very few people, interrupting them at the 90-second mark. (There was a huge amount we learned from him, not just about gaming the metric, but about how he engaged customers in great conversations.)
In his “gaming” of the system, we discovered there were unintended consequences to how we defined the metric. He was, in fact, doing what we were measuring and hitting his goal; we had just made a mistake in defining the metric.
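Here’s a minimal sketch of what that 90-second rule actually rewarded, using a hypothetical call log (the prospect names and durations are invented). The point is the gap between what was counted (call segments of 90+ seconds) and what was intended (conversations with distinct prospects).

```python
# Hypothetical call log: (prospect, duration in seconds).
calls = [
    ("Prospect A", 95), ("Prospect A", 92), ("Prospect A", 98),  # one long talk, split at ~90s
    ("Prospect B", 94),
    ("Prospect C", 40),  # too short to count
]

MIN_SECONDS = 90

# What the metric counted: any call segment lasting 90+ seconds.
segments_counted = sum(1 for _, secs in calls if secs >= MIN_SECONDS)

# What was intended: meaningful conversations with distinct prospects.
prospects_reached = len({prospect for prospect, secs in calls if secs >= MIN_SECONDS})

print(f"Conversations counted by the metric: {segments_counted}")   # 4
print(f"Distinct prospects actually reached: {prospects_reached}")  # 2
```

Both numbers come from the same activity; only the second reflects the outcome we were really after.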
Early in my selling career, my manager was concerned about salespeople spending too much time in the office. He wanted salespeople to be out of the office with customers. So he instituted a fine of $10 if we were in the office for anything other than a few approved reasons (like bringing in an order). His purpose was to get us to spend time with customers, but often we couldn’t. So we started going to the movies when we couldn’t get meetings (that was when you could get into a movie for $10). We figured, if it was going to cost us $10, we might as well have fun.
We see this gaming process every day, not through maliciousness on the part of the people doing the gaming, but through lack of clarity in the metrics and the outcomes we want people to achieve.
Number of calls? Check, I dialed 50 numbers, didn’t talk to anybody, but….
Number of emails? Check, I sent 500 emails, none were opened, no responses, but….
3X pipeline? Check, I’ve loaded my pipeline with crap….
We start seeing the challenges, even with a single metric and the goal associated with that metric.
Lesson 1: We have to be clear about the metric, focusing on outcomes and impact rather than effort.
Lesson 2: We know every metric will be gamed. We have to make sure that gaming achieves the outcome we want. Stated differently, we need to be aware of unintended consequences.
But business and our jobs are not that simple. We don’t do just one thing. We do a lot of things. As a result, there are lots of metrics we put in place to track what we are doing, and lots of goals to establish performance standards, ideally to help us achieve the outcomes we desire.
Sometimes, we overwhelm our people with too many metrics and goals. They don’t know how they relate, which they should pay attention to, or where and how they should be focusing their efforts. So they game the system, either doing just enough, doing what they’ve always done (thinking this, too, will pass), or just giving up.
Sometimes, the metrics are in conflict with each other, confusing people about where they should focus or how to resolve the conflict.
Lesson 3: We want to have the fewest metrics and goals possible. We want them to be clear, unambiguous, and to make sense when looked at collectively. Stated differently: not too many, not too few, just the right number.
We tend to establish our metrics and goals in the wrong way. We focus on our own realm of responsibility, optimizing them for our jobs. For example, marketing may have a whole set of metrics/goals that optimizes their performance. Sales has metrics/goals optimized for theirs, as does customer experience.
Yet, when taken together, they are often in conflict.
Even within our functions, we optimize around certain aspects of the job and not the whole job. For example, we may optimize around prospecting, while neglecting our abilities to manage and close qualified deals.
Alternatively, we do something that seems to make a huge amount of sense: we start at the beginning. Using this logic, we would start with customer visibility and attraction, moving into qualifying, doing deals, and so forth.
(For those systems theory people, you recognize we are dealing with the relationships of complex systems and subsystems.)
While counterintuitive, we actually have to start our metric and goal setting processes at the end, then work backwards. We have to establish our end goals and metrics, then work backwards, understanding what creates that, then what creates that, and so forth, all the while keeping it as simple as possible.
Related to this, we have to make sure our metrics and goals are directly tied to the strategies, metrics, and goals of our company. So our starting point is not our department or function, but working backwards from our company’s strategies, goals, and metrics.
Lesson 4: In establishing metrics and goals, start at the end and work backwards.
Corollary 1: If our metrics and goals aren’t aligned or connected to our company strategies, metrics, and goals, we probably have the wrong metrics and goals. (Note: I’ve written about OKRs; they are one of the most powerful ways of establishing this alignment.)
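To show what “start at the end and work backwards” can look like, here’s a minimal sketch. All of the numbers (revenue goal, deal size, conversion rates) are invented for illustration; your own funnel math will differ.

```python
# Working backwards from the end goal (all figures are illustrative assumptions).
revenue_goal = 1_000_000      # what the business needs from this territory
avg_deal_size = 50_000
win_rate = 0.25               # qualified opportunity -> closed deal
qualify_rate = 0.10           # meaningful conversation -> qualified opportunity
conversation_rate = 0.05      # call attempt -> meaningful conversation

deals_needed = revenue_goal / avg_deal_size                  # 20
opportunities_needed = deals_needed / win_rate               # 80
conversations_needed = opportunities_needed / qualify_rate   # 800
calls_needed = conversations_needed / conversation_rate      # 16,000

print(f"Deals needed:         {deals_needed:.0f}")
print(f"Opportunities needed: {opportunities_needed:.0f}")
print(f"Conversations needed: {conversations_needed:.0f}")
print(f"Call attempts needed: {calls_needed:.0f}")
```

Each activity metric earns its place only because it traces back to the end goal; if it doesn’t, it’s probably one of the wrong metrics Corollary 1 warns about.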
As you see, this can be very complex, and we often use this complexity as an excuse that keeps us from doing the right things. We will never be perfect with what we do, but we have to recognize we must constantly learn and adapt, based on our experience. Are we focusing on the right things? Do we understand the behaviors they drive, and are they the right behaviors? Have things changed, demanding a different goal? Or a different metric and associated goal? In reality, this is a case where “just good enough” is, well, just good enough.
Lesson 5: We will never establish the perfect set of metrics and goals. We are better off looking for “just good enough.”
Corollary 2: No metric or goal is “forever.” Goals will constantly change, for any number of reasons. Metrics will also change, based on our priorities.
Now that we’ve gone through all of this about establishing metrics and goals, and understanding human behavior and our tendency to simplify by gaming the process, perhaps one of the biggest mistakes we make is not in the establishment of metrics and goals, but in failing to pay attention to what the results are telling us.
Let’s go back to our favorite metric, the number of calls we make every day. Let’s assume we’ve defined the metric in terms of the outcomes we expect, and we’ve established a goal that is meaningful in terms of our overall quota objectives and so forth.
Now we see we aren’t achieving the goal. Too often, the tendency is to focus on the goal itself, and if we aren’t achieving it, the answer is easy: “Do more!” In reality, the failure to achieve a goal is just an indicator; think of it as a red flag. It tells us something isn’t working as we anticipated, but it doesn’t tell us what is wrong or why. That is the tough work we have to do to drive performance. It may be an overall organizational issue, where we have to look at what we are doing organizationally, how we change, and how we improve. It may be an individual performance issue, where managers must work with each person to understand what’s impacting their ability to hit their goals and how to help improve performance.
Put crassly, all these metrics and goals are absolutely useless unless we pay attention to what they are telling us, drilling down to understand what is happening, why, and what we need to do about it.
Lesson 6: All of our metrics and goals are useless unless we pay attention to the data, understand it, and take corrective action!
Finally, in wrapping up, I’ll drop an idea, perhaps coming back to it in a future post. Think about how our algorithms impact what we see, and how people learn from these algorithms and start to game them. Algorithms are basically complex combinations of measures, metrics, and goals. We tend to think the algorithm is about the data, and it is, but as human beings we respond to the data we see, gaming it to our advantage. For example, as the LinkedIn algorithms have changed, we have seen complete shifts in content as people game the process.
The issue with this is profound: we’ve seen huge impacts driven by relatively few people who game the algorithms. But since we tend to “trust” the data and algorithms, we risk falling victim to people’s abilities to game these systems.
More later… perhaps.