In virtually every conversation with a sales executive, at some point we get into a discussion about forecasting. Usually it starts with the executive mumbling something like, “Damn forecasts are worthless, I might as well flip a coin…” Being very cue sensitive, I usually reply with something like, “Forecasting system not working for you?”
Then I just sit back and listen. No one has accurate forecasts. Companies put strange processes in place to increase the accuracy of the forecast. I see strange sequences of forecasts: the “This may happen” forecast moves to the “I hope this will happen” forecast, to the “I’m sure this will happen because you’re telling me it needs to happen” forecast.
Some companies adopt interesting names. “Firm forecast”: I wonder what the not-firm forecast is. Yes, I’ve seen some organizations that have a “soft forecast.” Some organizations have “blood forecasts”; I wonder if blood forecasts are preceded by “pinky swear forecasts.” I get the image of the Yakuza in the movie “Black Rain” (yeah, it’s old). He has failed to meet his objectives; he sits at the end of the table, wraps his finger tightly in a handkerchief, and cuts it off. Perhaps I should start counting the fingers of the sales managers I meet? I wonder if this technique would improve forecast quality.
There are lots of problems with forecasts. It is always a challenge to develop an accurate forecast, but here are some thoughts that should help you improve the accuracy.
1. What are we forecasting? A forecast isn’t “will we close this order some time?” A forecast is: “We expect to get an order for this amount by this date.” It has to have specificity in both time and value. We measure our forecasting accuracy based on our ability to accurately predict that time and value. So even if we get the $1M order, if we get it 180 days after it was originally forecast, we have a business management and forecast integrity problem.
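To make this concrete, here is a minimal sketch of scoring a single forecast on both dimensions, value and timing. The deal data and function name are hypothetical, not from the post:

```python
from datetime import date

def forecast_error(forecast_amount, forecast_date, actual_amount, actual_date):
    """Return (value_error, slip_days) for one forecasted deal.

    A forecast is only "accurate" if BOTH errors are near zero.
    """
    value_error = actual_amount - forecast_amount
    slip_days = (actual_date - forecast_date).days
    return value_error, slip_days

# A $1M deal that closed at full value but 180 days late:
# we "won," yet the forecast still failed on timing.
value_err, slip = forecast_error(
    forecast_amount=1_000_000, forecast_date=date(2023, 1, 31),
    actual_amount=1_000_000, actual_date=date(2023, 7, 30),
)
print(value_err, slip)  # 0 180
```

The point of tracking the two errors separately is that a zero value error can hide a large timing error, which is exactly the integrity problem described above.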
2. A forecast is about a deal, not a number. I talk to too many managers who are forecasting to a number: “I have to forecast $10M this month!” They then look at their pipelines, saying, “I’ve got $20M of deals going into closing, surely I can make $10M out of those.” The forecast is about a deal: “We believe we will close this deal for this amount by this date or time frame.” The forecast number becomes the aggregate of those deals closing in the timeframe. If you fall short of your target number, then you have to look, deal by deal, asking: “What can we do to close this deal by this date?” It requires a specific action plan and very close alignment with the customer’s buying process. You close a gap in the forecast by identifying specific deals and the actions you will take to close them.
3. Bad forecast integrity indicates deeper problems with pipeline integrity and your selling process. The forecast is a natural outcome of the sales process. At some point in our selling process, we have confirmed where the customer is in their buying process, when they will make a decision (pinky swear), and their attitudes regarding our solution versus the alternatives. Based on where we and the customer are in the process, we can forecast the deal. We won’t hit it every time, since unexpected things happen, but the process is pretty simple and can be very accurate.
So if we start missing forecasts consistently, it raises deeper concerns. What’s the quality of our pipeline? Is it high quality, or do we have garbage that is misleading us? How well are we executing the sales process? A forecasted deal that slips… and slips… and slips shows a problem with the selling process. Either you aren’t using it, it isn’t aligned with the customer’s buying process, or it’s a bad process.
Bad forecasts bring the quality of the entire pipeline, your sales process, and your deal strategies into question. Everything is up for grabs, not just the forecast. We have no idea what the problem is, and no idea what to do to fix it. Is it a skills problem? Is it a competitiveness problem? Are we seeing shifts in the markets? It is no longer an issue of making the forecast; it’s an issue of the organization’s overall ability to consistently achieve its goals. We don’t know what to attack to improve the business.
4. When you have consistently bad forecasts from an individual, it’s probably a skills problem. They probably don’t understand or aren’t using the sales process, they may be having problems with certain parts of the sales process, or they may be chasing the wrong deals. It’s actually pretty easy to start identifying this by looking at their pipelines and deals.
5. When your organization has a consistently bad forecast, you have a systemic set of problems that you need to understand and fix. You don’t fix these problems by putting more rigor or steps into your forecasting process: “We start with pinky swears, then move to blood commits, then move on to beheadings?” Putting these phases into your forecasting process addresses the symptoms, not the root problems. Focus on the quality of your pipeline and clean out the garbage. Focus on the sales process: is it right, is it aligned with customers’ buying processes, are your people executing it with integrity?
6. Leverage the reports in your CRM system to better understand the quality of your forecasts and begin to isolate problems. You can easily look at slips/misses (I used to have a “slip factor” I could apply to each of my regional vice presidents). Pipeline quality reports should correlate directly with the quality of your forecasts. Stuck deals, average age in phase, total sales cycle length, pipeline velocity, and win rates are all indicators that impact forecast quality. Opportunity reporting gives you great clues: consistently changing “estimated close dates,” lack of progression, and similar patterns are directly tied to forecast quality.
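As a rough illustration of this point, a “slip factor” and related signals can be computed from opportunity records exported from a CRM. The record layout, field names, and figures below are hypothetical, just to show the arithmetic:

```python
from datetime import date
from statistics import mean

# Hypothetical opportunity records: each revision of the
# "estimated close date" is logged in order.
deals = [
    {"name": "Acme", "close_dates": [date(2023, 3, 31), date(2023, 5, 15), date(2023, 6, 30)], "won": True},
    {"name": "Globex", "close_dates": [date(2023, 4, 30)], "won": True},
    {"name": "Initech", "close_dates": [date(2023, 2, 28), date(2023, 4, 30), date(2023, 6, 30), date(2023, 9, 30)], "won": False},
]

def slip_days(deal):
    """Total days the forecast close date slipped across all revisions."""
    first, last = deal["close_dates"][0], deal["close_dates"][-1]
    return (last - first).days

avg_slip = mean(slip_days(d) for d in deals)            # average "slip factor"
win_rate = sum(d["won"] for d in deals) / len(deals)    # fraction of deals won
revisions = mean(len(d["close_dates"]) - 1 for d in deals)  # close-date changes per deal
```

A deal like the hypothetical “Initech” above, with three close-date revisions and 214 days of slip before being lost, is exactly the kind of record that flags a selling-process problem rather than a forecasting problem.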
There are other things you can leverage to improve the quality of your forecasting. In some cases, data analytics tied to your deals can help. Assessing “odds to win” and “timing” using tools focused on the customer’s decision-making and buying process can improve the quality of the forecast. I cover a lot of these in my white paper, “Beyond The Crystal Ball, Issues In Sales Forecasting.” Just email me for a free copy; send your full name and email address to dabrock@excellenc.com. You may also be interested in one of my past articles on the same topic: The Most Used Useless Metric In Sales.
Bad forecasts are a problem. Consistently bad forecasts indicate you have a deeper problem—and it’s not about forecasting!
Håkan Bernhardsson says
Well put!
Another aspect of forecasts is that some markets are extremely volatile, so it is not so much about working on forecast quality as it is about managing the risk of forecast inaccuracy.
I see that companies tend to focus too much on increasing the quality of the forecast instead of putting risk management in place. Once you have risk management in place, you can see what the cost is and make a plan to reduce it by increasing forecast quality in targeted areas, as you state in point 6.
When it comes to pipeline management, it does not help to simply criticize the numbers. Managers need to find out what is going wrong in the pipeline and coach sales reps to improve in specific areas. It is also management’s task to read the pipeline and support the sales team by pinpointing deals that can be closed to meet the forecast, and to prioritize them over, for example, making cold calls, well aware that this may pose a risk to future forecasts.
David Brock says
Hakan: Thanks for the great comments. You are absolutely on target. We have to work both ends of the issue. We have to increase forecast accuracy, but sales forecasts will always have inaccuracies, and beyond a point the marginal effort to improve accuracy is not worthwhile. Having said this, as I discuss in the article, too often people look at the accuracy of the number and not the underlying drivers that impact overall sales performance.
The risk mitigation element is another key factor. This is probably not the sales person’s responsibility, but rather a matter of sales management working with the rest of the organization. There are a number of other tools that feed into the forecast. In many areas, analysis of past buying can do a tremendous amount to improve the accuracy of the forecast.
Too many managers don’t look at the other sources of information and the risk mitigation element of the forecast. Thanks for reminding us of this. Regards, Dave
Peter Button says
Well done David, your post is evidently based on lots of experience!
I suspect that once or twice you may have had to make some changes to forecasts that have been submitted to you before you brief your senior colleagues…
I am wrestling with this issue of forecasting with a couple of teams that we’re working with.
I would be very interested in your experience of the value of league tables of forecast accuracy within teams. It seems self-evident that feedback and coaching based on comparisons between forecasts and actuals are critical for learning. Have you found that it helps to make forecast accuracy visible to the colleagues of sales people?
Thanks for the quality of your posts!
David Brock says
Peter, thanks for the flattery and thoughtful comment. I really appreciate both. I’ll try to address your questions; perhaps I should write another post, because they are great issues.
1. Forecasts will never be totally accurate. Too often, management makes the mistake of trying to achieve total accuracy, but the marginal effort of doing so doesn’t create value for anyone. Forecasting is part science/discipline, part art. There are sources of information other than the sales organization that can help improve the quality of forecasts. For example, if we sell oil, we can apply rich analytics and trend analysis that can be very helpful. (Oil is just an example; I’ve even seen rich analytics applied to sales person “accuracy” in very powerful ways.)
2. I think managers miss a tremendous diagnostic opportunity, for both the organization and individuals, in the information that forecast accuracy provides. There is so much focus on getting the right number, but too little on what is causing the inaccuracies and what they mean for overall organizational performance. For example, very often the key issue is not whether the deal comes in, but when it comes in. Forecast slip is a tremendous problem that tends not to be addressed at a root-cause level. Forecast slip indicates serious problems in the execution and alignment of the selling/buying process. It inevitably reduces win rates (or shows that we could win more than we are).
I’ll stop here, you’ve given me great ammunition for another post! Thanks so much.
Ryan Neu says
First off, David, amazing post and very insightful (I am an inside sales rep on a team of 150+).
Peter, regarding your comment, I personally think that the biggest driver of accuracy is frequency. Many forecasting tools are built for management, but in the end it is us, the sales reps, who are responsible for keeping them up to date. So why are forecasting tools made for management but required of reps?
Once forecasting is solved for the REP it becomes exponentially more valuable and accurate for the MANAGER. The best way to do this is to add value at the rep level. Once forecasting becomes a sales pipeline management tool at the rep level, the rep WANTS to use it vs. being told to do so.
The frequency of the updates leads to real-time, accurate forecasts. As David pointed out, it’s nearly impossible to get a 100% accurate forecast, but our company has found this process to help tremendously.
**we built a tool internally to accomplish this. It is a free forecasting/pipeline mgmt tool so feel free to check it out. http://www.atquota.com**
David Brock says
Ryan: Thanks for the thoughtful comment and insight. You make a number of great comments. I think one of the biggest challenges is that sales people don’t understand the purpose and importance of forecasts. It just tends to be something management imposes, often to beat them up.
Also, accuracy comes from having a great sales process in place and leveraging it well. Absent that, regardless of how frequent, the forecast is just a guess.
You are fortunate in that you have a huge number of great tools that everyone in the organization, management included, uses well! Thanks so much for the great comment. Regards, Dave
Christian Maurer says
Hi David,
I also have come across “Blood Forecasts”. In my case it always had to do with the grade of commitment (i.e. I would sign with my own blood that I am going to make this number).
I also like your definition, focusing on when an order can be booked and for what amount. I would put the emphasis on the word “order.” I am always amazed how many sales leaders still ask their people to forecast revenue. Especially in complex sales situations, there are too many variables outside the sales person’s control (e.g., supply chain constraints or revenue recognition rules). In such situations the sales person cannot avoid forecast errors; other constituencies in the company will have to make efforts as well.
I could not agree more with you that forecasting is assessing each deal. Too many managers still believe that a number can be managed. It obviously can’t. The forecast number is an output (lagging indicator). What can be managed is the process leading to this number.
I am not so sure I share your view on forecast accuracy. Accuracy depends on demand fluctuation, which cannot be controlled. What you can control, though, are forecast errors. Especially those that create a systematic bias (constant under- or over-forecasting) should be eliminated. The supply chain especially suffers from such errors, as they can cause obsolete stock or material shortages, resulting in missed delivery deadlines.
Considering that also makes clear why benchmarks on forecast accuracy are not very useful. They are only meaningful for exactly identical demand patterns.
A much better question to ask is whether the forecast process adds value to the company. To answer this question, forecast results are compared to those of a random walk approach (simply carrying the last actual forward).
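This random-walk benchmark (often called “forecast value added” analysis) can be sketched like this; all the numbers below are made up purely for illustration:

```python
# Compare the sales forecast's error against a naive "random walk"
# forecast that simply repeats the previous period's actual.
def mean_abs_error(forecasts, actuals):
    """Mean absolute error between paired forecasts and actuals."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

actuals = [100, 120, 110, 130, 125]    # hypothetical monthly bookings
forecasts = [105, 115, 115, 125, 130]  # the sales team's forecasts
naive = actuals[:-1]                   # last period's actual as the forecast

mae_forecast = mean_abs_error(forecasts[1:], actuals[1:])
mae_naive = mean_abs_error(naive, actuals[1:])

# The forecast process adds value only if it beats the naive benchmark.
adds_value = mae_forecast < mae_naive
```

If the team’s forecast cannot beat the naive benchmark, the forecasting process is consuming effort without adding information.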
Overall though, I am glad you drew attention to the forecasting issue as you did. It always feels good to see others promoting similar ideas to increase sales management effectiveness.
David Brock says
Christian: Thanks for taking the time to add these great ideas to the post! I like your distinction between forecast accuracy and forecast errors. I tend to think the reduction of errors is the more important issue. Having said that, accuracy matters too; for example, we must start buying parts, scheduling resources, etc. I don’t think forecast accuracy is solely a sales person’s issue. Eliminating forecast errors caused by sales, however, is. Thanks for the great comments.
(It’s always great to see you here!)