Win rate drops. Pipeline coverage falls below target. Activity numbers slip. The standard managerial response is some version of “we need to improve our win rates” or “we need more pipeline” or “we need more calls this week.” The number becomes the problem to be solved. In one-on-ones or team meetings, the discussion is about the numbers and how to get them back up. Coaching, if it happens at all, is just pressure around the metrics on their dashboards.
None of this is leadership; it is reactive management.
Despite our obsession with dashboards and metrics, the most important thing about them is completely missed. A metric is a measurement of an outcome. The outcome was produced by something: by behaviors, by decisions, by conditions in the market, by changes in the customer, by skill gaps, by process problems, by actions individuals take or fail to take. The numbers tell you about the outcomes: you hit your call goal, you have 3X pipeline coverage. Something may have changed about these metrics, but none of the numbers tell you why things changed.
Managing to the number is managing to the wrong end of the causal chain. By the time the number has moved, the underlying causes of the change may have occurred weeks or months earlier. Pipeline metrics are bad? That is the result of months of people not doing the right things. Win rates are down? That is the result of months of poor deal management. Focusing on hitting the numbers doesn’t address the underlying issues that caused the numbers to fall.
Pressuring people about the number doesn’t address the underlying issues that created the number. This pressure just adds anxiety to a system already failing.
We see this in all forms. Managing the dashboard instead of the work. Managing the CRM instead of the person. Demanding the score instead of understanding what’s creating it. In each case, managers are distant from what’s actually happening. They treat the numbers as a representation of reality, when they’re nowhere close.
Metrics are particularly seductive because they feel rigorous. A number on a dashboard looks like understanding. It is not understanding. It is a prompt for understanding, and most managers stop at the prompt.
Real engagement with metrics looks different. A win rate drop is a question, a red flag, not an answer. The question is what’s changed. Did the customers’ buying process change? Did the competition change? Do our solutions no longer meet customer needs? Did the salesperson develop and execute a strong deal strategy? Did the customer fit our ICP? Is the market changing?
Each of these is a different problem requiring different corrective action. Focusing only on the number tells you nothing about which issue or issues underlie the drop. To understand that, you have to get beneath the number and understand what caused it.
Investigating the number, by getting close to the deals, the people managing them, and the customers, is the work of management. It is slower and harder; it requires understanding and demands judgment. Only by doing this do we get answers to why we are not hitting the numbers.
I recently wrote about how we have to observe and assess “softer” behaviors (for example, curiosity, accountability, and discipline). We have to do the same with the traditional harder metrics we’ve been using for decades.
But just as we lost the capacity for assessment when we stripped trust out of the system, we lost the capacity for metric interpretation when we replaced manager engagement with dashboard review. Just as judgment was engineered out of decisions, judgment has been engineered out of how we interpret what the numbers are telling us.
The dashboard tells you what. Getting underneath the numbers tells you why. Without why, every assessment or judgment is a guess.
This is where AI changes everything, and not in the direction most leaders think. The pitch for AI-powered dashboards is that they give us more. More signals, more granularity, more cuts of the data, more anomaly detection, more correlations surfaced automatically, more recommendations attached to every movement.
And the pitch is true. AI dashboards do show us more, often with greater accuracy than anything we had before. The problem is that more is not better. More is worse.
When a manager had three numbers to look at, those three numbers might at least be familiar enough to provoke real questions. When a manager has thirty numbers, refreshed continuously, with alerts and AI-generated narratives explaining what each one means, the manager doesn’t engage more deeply.
The manager engages less. The volume of information becomes a substitute for understanding any of it. There is always another metric to look at, another data point to react to, another recommendation to consider.
The manager spends the day moving through the dashboard, feels productive because there is so much to address, and never gets close to any actual deal or customer or person on the team. The richness of the dashboard becomes the perfect excuse for staying away from the work.
The deeper problem is what AI does to the thinking itself. When the dashboard not only surfaces the metric but also tells you why it moved and what to do about it, the cognitive work that used to belong to the manager disappears. Managers no longer produce a hypothesis; the system produces one. There is no thinking about the underlying cause; the system explains it. There is no decision about what to investigate; the system recommends the action.
What looks like augmentation is actually atrophy. Each time a manager accepts the system’s explanation rather than developing their own, the muscle of metric interpretation gets weaker. Each time a recommendation is executed rather than questioned, the capacity for independent judgment shrinks. Over time, you produce managers who can navigate the dashboard but who could not tell you, without it, what is happening in their business or why. The technology has not made them smarter. It has made them dependent, in ways they often cannot see, because the system keeps producing answers that look correct.
The dumbing down cascades. Sellers learn to manage to the same dashboards their managers use. Coaches coach to whatever the system flags. Leaders make strategic decisions based on AI summaries of AI analyses of dashboards no one has actually questioned. The organization gets very good at responding to what the system shows it, and very bad at understanding what the system might be missing or distorting. AI does not have to be wrong to be dangerous. It just has to be confident, and the confidence is built into the interface.
What this requires is a different posture, toward both our metrics and the AI that surrounds them. Stop asking “how do we improve this metric.” Start asking “what is this metric telling us, and what would I have to investigate to find out.” Treat every significant movement as the beginning of a question, not the end of an analysis. When AI offers an explanation, treat it as one hypothesis among several. When AI offers a recommendation, ask whether you would have arrived there independently, and if not, why not. Get close enough to the work to develop your own view, then use AI to challenge it rather than to replace it.
This is harder than driving the number, and harder still than accepting what the AI tells you about the number. It requires managers to be present in the work, to know their people, to talk to customers, to look at the actual deals. It requires the willingness to find out that the problem is not where the metric pointed, or where the AI pointed either. It requires judgment, which we have spent years engineering out, and trust, which we have spent years stripping away. The arrival of intelligent dashboards does not solve any of this. It deepens it, by making the absence of judgment harder to detect.
The metric was never the work. The metric was a signal that something underneath needed attention. The number is not the answer. The AI’s explanation of the number is not the answer either. They are both the moment when management starts, not the moment it ends.
Afterword: This is a great AI-generated discussion of this article. They did make a mistake, claiming that I had written a book about dashboards. Perhaps all the articles I’ve written on them would add up to a book, but I haven’t written it yet. Other than that, it’s outstanding!
