There are countless data reporting tools available to sellers and managers. Each provides its own take on performance dashboards. Many provide deep insights into performance: your pipeline is anemic, your opens are down, you aren't hitting your meeting goals, your win rates are declining. Much of this is helpful, but sometimes there is too much data and we are overwhelmed. What do we pay attention to? What do we address first? What change will give us the greatest performance leverage?
One of the biggest issues I encounter in speaking with managers is understanding, "What's causing these results?" We have the data, but we don't understand, at least not deeply, what created the results. For example, we see that our outreach is not producing the results we expect. Opens are down, phone conversations are down, meetings are down. But the data seldom tells us why this is happening. Are we reaching out to the right people? Are we engaging them on issues that are important and relevant to them? Why aren't they responding? And what do we need to change in how we execute these outreaches? Do our people have the right skills and tools to execute them well? How do we improve the results of our outreach?
Likewise, we look at our pipelines: they are anemic, with declining win rates, declining average deal sizes, lengthening sales cycles. What do we do? What's causing these things? What do we prioritize? What change gives us the greatest leverage, the greatest impact in the shortest period? How do we make these things happen?
The numbers and data can only tell us so much. They can alert us to performance issues. They may help us isolate certain areas of performance challenges. But the data doesn't tell us why these things are happening, what we must change to correct them, what risks there may be in making the changes, or what the time to results might be.
And usually, there is no single solution. For example, there are a variety of fixes to anemic pipelines: more prospecting, improved win rates, larger deal sizes, shorter sales cycles, and so forth. Which alternative do we choose? Which is most likely to be successful? Which has the shortest time to results?
And then we look at the capabilities of each individual. While at a high level the data may not show much difference, the ability to drive the necessary changes will differ based on each person's capabilities. An example I often use is, "Pete and Dave have $5M quotas, $10M pipelines, similar average deal sizes and sales cycles. Pete has a 40% win rate, Dave has a 20% win rate. What do you do?" About 85% of the time, the audience responds, "They need to prospect, they need 3X pipelines!" But that is the wrong answer for each.
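To make that concrete, here is a minimal sketch of the "sales math" in this example. It assumes the common coverage rule of thumb that required pipeline equals quota divided by win rate; that formula is my assumption for illustration, not something prescribed in the example itself.

```python
# A minimal sketch of the "sales math" in the Pete and Dave example.
# Assumption (for illustration): required pipeline = quota / win rate,
# a common coverage rule of thumb.

def required_pipeline(quota: float, win_rate: float) -> float:
    """Pipeline needed to hit quota if opportunities close at win_rate."""
    return quota / win_rate

sellers = [
    # name, quota, current pipeline, win rate -- figures from the example
    ("Pete", 5_000_000, 10_000_000, 0.40),
    ("Dave", 5_000_000, 10_000_000, 0.20),
]

for name, quota, pipeline, win_rate in sellers:
    needed = required_pipeline(quota, win_rate)
    print(
        f"{name}: needs ${needed / 1e6:.1f}M pipeline "
        f"({needed / quota:.1f}X coverage), currently has "
        f"{pipeline / quota:.1f}X"
    )
```

Under that assumption, Pete needs 2.5X coverage and already has 2X, so a modest prospecting bump closes his gap. Dave needs 5X, and telling him to simply prospect more at a 20% win rate is a treadmill; his higher-leverage fix is the win rate itself. A blanket "3X pipeline" prescription over-prospects Pete and still leaves Dave short.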
The issue we face in looking at the numbers and doing "sales math" is getting beneath the numbers. We have to understand what's causing these results. As leaders, we have to get out from behind our screens and reports to understand what's actually happening, what's driving these numbers. How effectively are people executing their outreach? Are they doing the right things in the right way, or do they need to change? How effective are they in executing their deal strategies or account/territory plans? Are they connecting with customers in ways that are meaningful to them and drive engagement?
As we dive into the numbers, we may find individual performance issues. How do we coach people to correct them? Do they need training or skills development? Do they need to use the processes and tools more effectively? If there are multiple ways to address the performance issue, and there always are, which alternative is most likely to produce the best results in the shortest time?
If we see systemic issues impacting the entire organization, what are they? What do we need to change in what people are doing? Or do we need to change our strategies to address shifts in our markets and customer sets?
The numbers act as "red flags" or alerts. They tell us something is happening, and they may help us isolate or better define the problem, but they don't provide insight into what is causing it; they don't show us what is actually happening and how people are executing. Until we dive into these issues and understand what's really happening, we don't know how to drive performance improvement.
In the end, too often, the numbers just show us that the math works.