
The Most Used – Useless Metric In Sales!

by David Brock on July 16th, 2010

For years I’ve been amazed by the number of very smart sales people and leaders who have a blind spot in forecasting.  One of the top issues CEOs, CFOs, and even Chief Sales Officers have is forecast accuracy.  One of the most used forecasting methodologies is based on a “weighted revenue” approach.

This approach takes the sum of all opportunities in the pipeline, multiplying the revenue for each opportunity by a probability factor.  Statistically this makes sense; it’s called expected revenue.  For example, if you have a $100,000 sale and a 70% probability, the expected value of the sale is $70K — we know this from statistics.  So what’s the problem?
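In code, the weighted-revenue arithmetic is one line; a minimal sketch using the $100K / 70% figures from the example (the function name is mine, for illustration only):

```python
# Expected ("weighted") revenue: deal amount times assigned win probability.
def expected_revenue(amount, probability):
    return amount * probability

# The $100,000 deal at a 70% probability from the example:
print(expected_revenue(100_000, 0.70))  # 70000.0
```

The math itself is sound; as the rest of the post argues, the problem is where the probability number comes from.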

The root of the problem is really in the way most organizations assess the probability factor.  For most organizations, the probability assigned is based on where you are in the sales process.  For example, if you have qualified the opportunity, you might have a 25% probability; after you have completed discovery, 50%; after you have submitted a proposal, 75%; and after you have closed, 100%.  This makes sense: as you go through the sales process, presumably you are improving your chances of winning the business.  All perfectly logical—and all perfectly meaningless.  Yet virtually every CRM system and virtually every forecasting methodology relies on this approach.

Now you say, “Dave, who are you to say this is meaningless?  After all, it’s been the cornerstone of our forecasting systems for years.  How could all these smart people be so wrong?”  Well, I really don’t know the answer to that, but it is still perfectly meaningless.  The way we assign the probability factor is completely wrong.  At best, we are measuring progress through the sales cycle (though I tend to doubt even this), but we aren’t measuring the likelihood of a customer making a decision to choose our solution over the competition.

Consider these arguments:

Let’s assume we are competing against two other companies.  Each company uses the same CRM system, and none of us has modified the “out of the box” defaults (choose your favorite CRM system: Oracle CRM, SAP CRM, MS Dynamics, whatever).  All assign probabilities based on progress through the sales process.  Let’s assume we’re all completing our proposals to the customers, and our CRM systems tell us that we are now at a 75% probability of winning this $1 million deal.  Each of us (we and our 2 competitors) is committing to our managers, and they are committing to their managers, the expected value of the deal at $750,000—we are all saying that we have a 75% probability of winning.  Now I have to admit I struggled through freshman statistics, but I did learn that 3 competitors can’t each forecast a 75% probability of winning a single event.  I learned that, if it were evenly weighted, each would have a 33.3% chance of winning.  I learned the sum of all the probabilities could never be over 1, or 100%.
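The inconsistency is easy to demonstrate directly.  A sketch of the hypothetical above, with all three vendors forecasting the same deal:

```python
# Hypothetical: three vendors each forecast the same $1M deal at 75%.
deal_value = 1_000_000
vendor_forecasts = [0.75, 0.75, 0.75]  # one entry per competing vendor

total_probability = sum(vendor_forecasts)
total_committed = sum(deal_value * p for p in vendor_forecasts)

print(total_probability)  # 2.25 -- but probabilities for one event can never sum past 1.0
print(total_committed)    # 2250000.0 of "expected" revenue against a single $1M deal
```

Across the three companies, $2.25M of expected revenue has been committed against a deal that will pay out at most $1M.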

Many sales executives call me asking for help in improving sales performance.  We talk about lots of things, and I generally ask to look at their sales process and pipelines.  I see they are using the same methodology (let’s stick with our 25-50-75-100% example–you can substitute your own numbers if you want).  At some point in our conversation they say, “We aren’t winning enough business.”  I generally ask, “What’s your win rate—what percentage of the proposals you submit do you win?”  They are always embarrassed; they always say it’s too low.  I push for an answer, and they give me a number.  It might be 60%, it might be 50%, or it might be 40%.  Generally, I believe them, even if it is just a “feeling” they have.  The point, however, is that they are using a 75% win rate in the way they forecast to the business.  If their system says the weighting factor for all proposals presented is 75%, implicitly they are saying, “We win 75% of the opportunities we submit proposals on.”
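The mismatch is easy to quantify.  A sketch with an invented track record (the 45% figure is illustrative, in the range the executives in the story admit to):

```python
# Compare the CRM's stage-based weighting against the actual track record.
stage_probability = 0.75   # what the CRM assigns at the proposal stage
proposals_submitted = 100  # illustrative history
proposals_won = 45

actual_win_rate = proposals_won / proposals_submitted
print(actual_win_rate)                      # 0.45
print(stage_probability / actual_win_rate)  # ~1.67: proposal-stage revenue overstated by two thirds
```

With a real win rate of 45%, every dollar of proposal-stage weighted revenue in that forecast is overstated by roughly two thirds.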

Let me use one final example to show how meaningless this approach is, while providing clues to a more relevant alternative.  Let’s say we are pursuing two different sales opportunities.  For both, we have completed proposals, both for $100K.  One is with a long-time customer.  This customer likes us a lot; we are bidding a capacity upgrade—one he really needs, one he has budget for and had planned on spending the money for.  There are no problems; the customer has said he wants to do business with us.  The second is with a prospect we have never done business with—well, we did years ago, until we made him so unhappy that he chose our competitor in the last deal.  He reluctantly agreed to let us propose, though he far favors the competition.  Our solution doesn’t provide the performance levels of the competition, and we are 25% more expensive than the competitor he is currently doing business with.  Our forecasting system would require us to forecast both of these at the same probability—75%—or expected revenue of $75K for each opportunity.  On paper, both look exactly the same, and exactly as likely to produce business.

But in reality, we know the probability of winning the first deal is significantly higher than that of the second.  We have a justified solution that fits the customer requirements better than any other, and a customer that is very biased toward doing business with us.  In the second case, we have everything going against us—at least from the customer’s perspective.  Yet our “weighting” process causes us to treat these as the same, presenting the same likelihood of winning the business to management and others.  This forecast, because it comes so late in the cycle (after all, we are 75% of the way through), sets all sorts of wheels in motion.  Procurement may be starting to buy parts for both sales, manufacturing might be scheduling the products into its manufacturing cycle, and the CEO is presenting these to shareholders as pieces of business we are likely to book—OK, I went off the deep end and exaggerated a little, but you get the point.

In reality, most sales people would probably say we are going to win the first deal and are highly unlikely to win the second.  Most sales people would “assign” a very high probability to the first deal and a low one to the second.  Their rationale would take into account things like the solution fit, the urgency of the customer, the business justification, the past relationship with the customer, and several other factors—things that are relevant to how customers make decisions, not how far we are through the sales process.

This is the fatal flaw with the way we assign probabilities and do weighting currently.  We make an assignment, virtually independent of any consideration of what the customer thinks of our competition and us.  Instead, we use an artificial measure of what activities we have completed–regardless of whether they have had a positive or negative impact on the customer’s perception of us.

It’s no wonder our forecasting is so poor: we are using criteria that are irrelevant to the customer and the decision they are making.  Yet virtually every organization I have encountered has used this approach in its forecasting, and virtually every CRM system, out of the box, encourages the same strategic error in the way it is set up.  The approach and the metric are not only meaningless, they are misleading.

It’s time we change the approach to assigning probabilities and weighting our forecasts.  Weighted forecasting can be very good, but it has to be built on valid assumptions and strong foundations.  What if we changed the way we look at assigning probabilities to consider things like the urgency of the customer’s need, our solution fit, the business justification, our current and past relationships with the account, the process the customer will use to make a buying decision, and several other things?  If we considered these, we’d have a much more accurate view of what our business will be—opportunity by opportunity and across the entire pipeline.  We’d have a set of numbers our management team could have more confidence in, and that we believe we can deliver.
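As a sketch only: the factor names, weights, and scores below are invented for illustration and are not the method from the white paper.  The point is simply that a customer-centric score separates the two $100K deals that stage-based weighting treats as identical:

```python
# Hypothetical customer-centric scoring: rate each factor 0-1, weight it,
# and use the result in place of a sales-stage default probability.
# All names and weights are illustrative assumptions, not the author's method.
FACTORS = {
    "urgency_of_need":        0.25,
    "solution_fit":           0.25,
    "business_justification": 0.20,
    "relationship":           0.20,
    "buying_process_clarity": 0.10,
}

def win_probability(scores):
    """Weighted average of 0-1 factor scores (weights sum to 1)."""
    return sum(FACTORS[name] * scores[name] for name in FACTORS)

# The two proposals from the example; stage-based weighting calls both 75%.
loyal_customer     = {"urgency_of_need": 0.9, "solution_fit": 0.9,
                      "business_justification": 0.9, "relationship": 1.0,
                      "buying_process_clarity": 0.8}
reluctant_prospect = {"urgency_of_need": 0.3, "solution_fit": 0.2,
                      "business_justification": 0.3, "relationship": 0.1,
                      "buying_process_clarity": 0.4}

print(win_probability(loyal_customer))      # roughly 0.91
print(win_probability(reluctant_prospect))  # roughly 0.25
```

With any scoring along these lines, the two deals no longer look identical on paper, and the pipeline total reflects what the sales team actually believes.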

It’s an easy shift that can have a profound impact.  I’ve touched on only a few items here, but the real secrets are in a short white paper and worksheet I’ve written.  Send me an email—unfortunately, you can’t put on the last “e” for excellence.  Ask me, “How can I increase my odds of winning?”  I’d be delighted to send you the white paper.


  1. mark mccarthy permalink

    This was brilliant. Frustrated for years with this oh so soft forecasting belief. Thanks David

    • Mark: Thanks for the comment! I continue to be amazed with the number of executives that continue to rely on this as the cornerstone to the forecasting process. Additionally, it’s crazy that many of the tools vendors (CRM folks) perpetuate this by embedding it in their systems. Regards, Dave

  2. Hi David,

    I’m in customer service, not sales and really appreciated this post.

    Basically, we’re measuring based on our business processes when we should be looking at this from the customer perspective.

    As in so many other things, this makes perfect sense and would definitely lead to more accurate forecasts.


    • Eric, thanks for the great comment—maybe it takes a “non-sales” guy to present it with such clarity. We ought to be looking at things from the customer’s perspective, not just our progress through our process! Thanks for the comment! Keep joining the conversation. Regards, Dave

  3. This is spot on. As a past sales leader for a technology company (and not a numbers guru, btw), the stage in the pipeline we focused on was the demo stage—which for us was in the LAST third of the process. We knew an early, eager demo was a deal killer. We also knew that if we made it to the later demo stage, our % of getting the deal tracked at 85%. Those deals are the only forecasting numbers we used. And then we added intuition to each deal, based on what we knew to be the degree of certainty or doubt.

    Pipeline was an aggregate for the state of the business—how healthy was new business opportunity overall, and how healthy was each salesperson’s new business opportunity.

    • Marilyn: Thanks for the comment. Actually your example is great; in several situations with our clients we have seen similar things—a very high correlation of winning deals with a specific activity (or set of activities). This comes from a real understanding of why you win and lose, why your customers buy, etc. Thanks for the great example and adding to the conversation!

  4. David Olson permalink

    The finance side of this is huge. I believe that in many organizations the finance departments regularly and arbitrarily discount sales forecasts. I would add, rightfully so, because the track record of meeting those projections is so abysmal.

    Even more frightening, the executives override the discount from finance because they have to believe in something relative to sales growth, and it is so unpopular to appear to allow finance to run the show. Besides, sales folks are “fun” and finance folks are “stodgy”. Sales folks sell the “facts” and finance folks generally just state the “facts”.

    This is not a good recipe for owners, customers or employees!

    I look forward to reading the White Paper and have requested it directly. Thanks for another great post. Dave

    • David, thanks for the comment. The impact of this flawed logic on the organization can be tremendous! As I point out, at some point these create expectations and actions within the organization–committing resources, making investments, etc. What amazes me is that few seem to have recognized how bad this thinking is and persist in the old ways.

  5. This is excellent analysis David. It puts a whole different perspective on “doing the same things and expecting different results”—a kind of using of flawed assumptions.
    Kudos my friend. Dan Collins

    • Thanks Dan, it’s amazing though, that so often we keep doing the same things over and over and expect it to change. Regards, Dave

  6. Great points, as ever. Especially agree on flaws inherent in using “an artificial measure of what activities we have completed–regardless of whether they have had a positive or negative impact on the customer’s perception of us.”

    IMO, knowing, across each step of the sales process, whether or not the customer derived value from their conversations with a rep (as evidenced by whether or not they went on to consume details they requested in response to questions they asked or issues they raised) solves part of the problem you identify. For instance, while there’s no guarantee someone will buy from you if they read the terms and conditions they’ve requested, there’s a 100% chance they won’t buy from you if they don’t read the offered T&Cs. Trust this adds some value.

    • Thanks, as always, for the comment John. Measuring the likelihood of a sale based on activities completed is really pretty meaningless. I may disagree, though: just because a customer has consumed the details you have provided is not necessarily an indicator that your likelihood of winning the business has increased—it seems to me it just means they have read what you have provided. There needs to be a confirmation step, i.e., “they’ve read what you have provided and have agreed it is superior to any other alternative and fully meets their needs, requirements, etc.”

      Thanks for joining the conversation. Regards, Dave

  7. Scott Carey permalink

    Excellent article. The conclusions are right on target and address what I consider to be the art of forecasting. The folks best equipped to forecast the probability are those closest to the sale, using many of the new metrics suggested.

  8. Dave,

    Thanks for attacking and getting some sunlight on what might be the dumbest sales assumption of all. I’m tired of arguing with clients about the sales stage having little or nothing to do with the odds to close. From now on, I’ll just send them a link to this post.


    • Don’t forget to send them links to your related posts as well! They were right on. Thanks, as always Todd!

  9. Terry permalink

    I was astounded when my former company expanded its use of the CRM system to utilize this metric in the project pipeline. The math made absolutely no sense to me at all. What made even less sense was the budget and sales forecasting based on those metrics. In that industry, it wasn’t uncommon to have 99% confidence (real, not imagined) that you had won the business based on interaction with the customer, only to lose it at the last minute due to an intangible or unforeseen move by a competitor.

    I was often questioned why, when I had a project in the system past the proposal stage, I had it listed at 25% or 33% instead of over 75%. My answer was simple: “This is the actual probability of us getting this job.” In most cases, given the economy, we didn’t lose the business; the project was either cancelled or put on indefinite hold.

    According to those metrics you could have tens of millions in forecasted revenue, but ask any one of your sales people and their realistic estimation would be significantly less. I personally believe that the type of metrics noted in the article give an inflated figure and a false sense of security to upper management. If major business decisions are made based on this metric your business is in serious trouble.

    • Terry, thanks for the comment. The problem really isn’t with the CRM system. Most CRM systems allow you to override the probability, others have rich analytic techniques for developing it. The problem is the “blind application” of these useless techniques–both by sales management and by vendors not advising you how to do it correctly. Thanks for taking time to add to the discussion. Your comments are right on!

  10. Phil Morgan permalink

    You make some valid points about the way simplistic forecasting based on sales process stage can be grossly inaccurate. However, the problem does not lie in the statistical approach but in the weak sales processes used by many organizations.

    Constant re-qualification of opportunities and sales process stages that are linked to increasing levels of buying commitment from the client ensure that there is a better statistical link between the stage of the sales process and the likelihood of winning a deal.

    In the example you give of two similar opportunities, the second deal should not be on the forecast, solid qualification of the opportunity should have arrived at a “no bid” conclusion and the sales exec should be focusing his/her effort on deals they have a better chance of winning.

    • Philip: You are absolutely right regarding the sales process (the white paper has this as one of the cornerstones not only of great selling but of forecast accuracy). Too many organizations have no process, bad or outdated processes, or don’t rigorously use the process they have in place. (Shame on sales management!) Strong, well-executed sales processes not only increase the odds of winning but also reduce variability in the forecasting process.

      You may also want to search on sales process in this blog. It is one of my “soap boxes”; I can see we both share it. Thanks for the great comment. Keep joining the discussion here. Regards, Dave

  11. This is spot on. Forecasting (accurately or inaccurately, frankly) is such a pervasive problem for most organizations, and you have hit on some key root causes of inaccuracy. While CRMs were to be the Holy Grail for forecasting, the reality is proving otherwise. Great points, and I look forward to reading your white paper.

    • Leisa: Thanks for joining the discussion. I think poorly implemented CRM tools are the real issue. If we were to point fingers, it would be at both the vendors and sales management. The vendors provide standard templates, sales processes, and probabilities that range from meaningless to misleading—carryovers from the same flawed thinking outlined in the post. They would best serve customers by leaving these blank and advising customers on how to build their own accurate process and leverage probabilities more appropriately. The second is management that implements these tools blindly—I think this is a bigger problem. We need to really adapt the CRM systems to reflect our own processes and best practices. Too many organizations don’t do a great job of this.

      The CRM vendors, on the other hand, are doing some powerful things in terms of incorporating great analytics, that when properly implemented offer sales management great insight into the business and improve forecast accuracy.

      So we are caught between a vendor rock and a hard place — as we have been with these tools for some time. Great potential that is not fulfilled. It’s an opportunity for vendors to create great value for their customers!

      You may be interested in a post I wrote yesterday, continuing the discussion on Forecasts as “informed guesses.” Thanks for joining the discussion. I hope you return and continue to comment! Regards, Dave

  12. Brian Seide permalink

    A primary reason why executive management in most companies continues to give significant weight to the insignificant process of “weighted revenue,” as you explain it Dave, is that very few of them have ever been in the trenches selling for a living. They think they know what’s going on in their sales by occasionally visiting their customers with the customer’s assigned salesman, but we who feed our families by the sales we generate from our customers know that they don’t know. If they’ve never wept because they lost a sale; if they’ve never lain awake at 2am going over and over the sales call that went wrong; if they’ve never felt the overwhelming pure panic of what if they lose that customer, then they’re not going to be able to accurately forecast by intuition, and are obligated to follow the processes that those before them and those around them have always followed. Take the word of their soldiers in the trenches and stake their presentation to the Board of Directors on it? You’d better be one helluva leader and know that your soldiers aren’t giving you a line of bull. So the real reason forecasting is poor most of the time? Poor leadership. Just my two cents; I admit that I’m a narcissist.

    • Brian, thanks for the comment. I know at times there appears (and often is) a wide gap between management and line sales people. Most managers and sales people I’ve met are well intended. The pressure of day to day business and lots of other things impact performance at all levels. Top managers need to set a different example. People at all levels must take the time to think about what they are doing and whether it makes sense. Thanks for joining the discussion!

  13. Great post Dave, and very recognizable. In almost all organizations I have worked with, the probability score is based on the sales stage, leaving out a lot of other elements. I would actually say that proper scoring will also help determine when to continue with an opportunity or not. In the case of the example you used, of the two opportunities that are each at 75%, with a proper valuation one might even decide to step away from the second opportunity. It will probably demand an unequal amount of effort with too low a win chance, so better to focus on opportunities with a real chance of winning.

    • Peter, you make a great point. One might also say of those two opportunities at 75% that, if people were using their sales process properly, the second should never have been in the proposal phase and forecast at 75%. Regards, Dave

  14. Dave,

    Great article – as always, and a really good discussion developed here.

    To me the easy (but maybe oversimplified?) solution to this issue is two fold:

    The first is to ensure we’re not just basing the percentages on given stages but using the salesperson as a resource for their judgment on it (I appreciate that the more salespeople and the bigger the company’s pipeline, the harder this is to manage!), and many people have suggested as much already.

    The second is historical data, which doesn’t seem to have been raised above (unless I’ve missed it). The company, or the individual salesperson, has a track record, and should be able to generate a rule-of-thumb figure for what they actually hit vs. what they forecast; a kind of sanity check that can be used to determine if it’s been a good or bad month. Personally, I see value in the “deal” basis.

    • Steve: Thanks for both your comments. You’ll see I talk a lot about the issues you raise in the post: Sales Forecast: An Informed Guess. We’ll never have perfect forecasts, but we can improve them dramatically by using data and analytics to improve the “informed” part of the equation, and having a disciplined, consistent process (reducing variability) to include the judgment and experience of the sales person. Take a look at that article–I’d love your views.

      Also, it’s great to see you back! Just saw some of your tweets this morning—saw the 6 month picture, congrats! I’ve missed you, both here and on Twitter! Best regards, Dave

  15. Great article Dave – as always! This one has really promoted some great discussion too!

    My approach may be oversimplified, but I do (try to) justify it with the qualification that predicting the future is never an exact science anyway – so rules of thumb can really be of help here. To me this issue is really about sanity checking predictions, which we must accept will be flawed…unless you have the help of a bona fide crystal ball!

    My approach is two fold:

    1. Just as many people are saying, our systems have to be able to use the human element—the salesperson—who should have valuable input (“we’re one of three” or “they’re not buying into our value and are viewing us as more expensive”), and allow them to impact the percentage, rather than the out-of-the-box “we’re at this stage so it’s 75%” approach (clearly incorrect, as you have so beautifully illustrated!).

    2. In order to improve the predictions of our salespeople, and therefore the company (as well as sanity checking), use historical data. If the machine chugs out “75% of 100,000” every month yet reality tells us something different actually happened, then it seems to me insane not to use this information. With a little more historical data retention, any model that we choose to predict with can be tested, and should give at least some version of the reality. Different salespeople will have different rates, there are different ways to measure this, and, as I said initially, this is not an exact science (I actually favour a “deal” basis rather than a “value” approach to the historical data), BUT it should mean that if we have 3 almost complete proposals, we shouldn’t be predicting 75% on each.

    …unless of course our salespeople and historical data concur! 🙂

  16. garry Scoble permalink

    I would like to add the following:
    I think a lot of people who have issues with forecast accuracy (generally senior management) fundamentally have some hang-ups from not understanding what’s actually going on when they generate the stats around the raw “data” from the sales guys out in the field.

    In the lottery, are 1, 2, 3, 4, 5, 6… just as likely an outcome as any other set of numbers?

    If you throw 2 heads when tossing a coin, is it still a 50-50 outcome on the next toss?

    If you win the first sale in the list, how does the Buyer at the next opportunity know not to give you the order?

    If you bid with several competitors for business, how many times do you need to lose, being second choice, before you book an order? Which customer order that you have lost do you book? And how do you tell them?

    When is a win 100%? You’ve booked the order, only for the customer to cancel: supply issues, delivery issues, the customer gone bust…

    It’s a forecast – why do people think that it can be accurate? It’s a guess!

    Look at other uses of forecasting, like the weather. Understand that in the UK they are doing very well, with all the maths crunching, technology, and global measuring infrastructure. They are nearly 40% accurate! (Does this mean that if they actually said the opposite of their forecast they would increase their success by 50%?)

    Look at what happens when you multiply 2 numbers together, each with, say, a +/-10% variation: 0.9 × 0.9 = 0.81 and 1.1 × 1.1 = 1.21. If you go with 0.81, then 1.21 is almost 50% more; if you go with 1.21, then 0.81 is a third off! Do you really get more accurate by multiplying guesses?
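The commenter’s arithmetic checks out; a quick sketch:

```python
# Two estimates, each carrying a +/-10% error, multiplied together:
low = 0.9 * 0.9    # both estimates 10% low
high = 1.1 * 1.1   # both estimates 10% high

print(low)         # ~0.81
print(high)        # ~1.21
print(high / low)  # ~1.49: the high case is nearly 50% above the low case
```

A nominal +/-10% uncertainty on each input becomes roughly a +/-20% band on the product, which is the point: multiplying guesses compounds their error.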

  17. What’s very clear from this blog is that salespeople don’t understand the theory of probability, i.e. the proper theory of probability as opposed to multiplication. But then, the “salespeople” – even back at school – were too busy talking in class to have been listening!

    • Michael: Thanks for the comment. Unfortunately, too much of what we do just proves that math works but doesn’t provide any real insight into performance. Having said that, this problem isn’t attributable just to sales professionals–it seems there were a lot of other people talking in the back of stat class (or sleeping). Regards, Dave

      • The question is: to what ends is the management of the business going to use this sales hocus pocus (the forecast, with these manipulations)?

  18. Dave,

    Another excellent post.

    Why not just ask the customer, since they are buying?



  19. Raghunath Iyer permalink

    Excellent insight. I am an engineer and participate in various sales reviews. I continue to be amazed at the weighted-probability-average methods that the sales guys use, and more stunned to see CRM systems build calculators around this very flawed method. None of them seem to appreciate the real math behind probabilities. Your article is the first one that talks a very different language.

  20. Dave, Hello
    Just now I ran across this post and am wondering: what are your suggested metrics in lieu of the conventional win probabilities?
    I tend to rely on weighted vs. un-weighted values in the funnel mostly to determine the ‘health’ of the funnel. Let me explain:

    Let’s say we are beginning the 3rd and last month of the quarter
    Let’s also say we have $1M un-weighted opportunities expected to be booked (order intake) in the quarter
    With only 4 weeks left to the end of the quarter, I would expect the total weighted value of opportunities to be fairly close to the un-weighted value
    If the two are too far apart, it is an indication of an immature funnel and, again with only 4 weeks left, of a funnel in trouble
    This does not mean I use the weighted value as my commitment to management; typically the committed number lies between the two columns
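That late-quarter health check could be sketched as follows (the function name and the 0.8 threshold are my own illustrative choices, not the commenter’s):

```python
def funnel_is_healthy(weighted_total, unweighted_total, threshold=0.8):
    """Late in the quarter, the weighted pipeline value should sit close
    to the un-weighted value; a large gap signals an immature funnel.
    The 0.8 threshold is illustrative only."""
    return weighted_total / unweighted_total >= threshold

print(funnel_is_healthy(850_000, 1_000_000))  # True: weighted close to un-weighted
print(funnel_is_healthy(400_000, 1_000_000))  # False: funnel in trouble
```

Note this uses the weighted number only as a diagnostic ratio, not as the committed forecast, which matches the commenter’s stated practice.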

    I do agree with you on the simple math: if 3 vendors are proposing, the probability for each opportunity would be ~33%. But how do you consider factors like competitive advantage, pricing, and relationships?

    Looking forward to the white paper and your comments.

  21. David, one thing troubles me about the Odds to Win approach vs. the probabilistic approach as you advocate here:
    To implement this, one would need to replace the Win Probability/Sales Stage default selector in our CRM system with an Odds to Win wizard, a matrix score table of 5 to 10 questions, as you well suggest.
    Considering this ‘Odds to Win’ wizard would have to be executed per each sales opportunity, do you feel it to be a realistic thing to implement?
    I often deal with sales people who, when presented with something like this, will likely step away from CRM, and overall this whole thing will negatively impact CRM user adoption.

    Your thoughts?

    • Pablo: Thanks for continuing the discussion. Frankly, I believe the probability selector in most CRM systems is absolutely meaningless and ignore it, as well as advise my clients to ignore it for any kind of reporting. As we’ve discussed before, it can also be dangerous and misleading.

      Virtually every CRM system has the probability aligned with where the deal is in the sales process, so the probability measures the wrong thing and, as implemented, is totally useless. I don’t recall the CRM system your organization is using, but if you are using the default probability in any way, the number misrepresents the likelihood of a deal closing. It is just measuring that someone is 50, 75, or 85% of the way through the sales process, not a propensity to buy.

      The Odds To Win approach focuses more on a propensity to buy. It is actually very easy to take that approach, embed it into a CRM system, and have it automatically update the probability field. I have several clients that have embedded it into their CRM systems, including the old Siebel. I see nothing that would prevent someone from embedding it into other CRM systems. At the very least, you can have people manually enter a number based on a worksheet (other clients have done this).

      The real value of the Odds To Win approach I’ve outlined is not the total number projecting a probability of winning, but the value it provides as a deal SWOT analysis. When combined with a sales process checklist, it helps sales people improve their deal strategies and drives higher win rates.

      If your sales people use this as a reason not to use CRM, I suspect it’s not this, but some other reasons–primarily they see it as a management reporting tool not a tool to help win more deals, more quickly.

      But going back to the original discussion, probability weighted forecasts. In every sales organization I’ve managed (some quite large) and in every client, we never use a probability weighted pipeline for any kind of forecasting. Why any manager continues to use something that has so little meaning and can be misleading is beyond me.

      Hope this is helpful. Glad to discuss this privately if you would like. Regards, Dave

      • Thanks for the prompt reply, David.
        In my company’s implementation of CRM, I now actually have the option of completely removing the Win Probability/Sales Stage dropdown menu and putting in its place a wizard akin to the ‘Odds to Win’ table. At the end of it, that would give me not only a SWOT analysis per opportunity but also a numeric ‘OTW’ (Odds to Win) result.

        Why do I keep talking about weighted values?
        Eventually a forecast commitment has to be given to executive management, and that number cannot simply be the sum of your entire forecast.

        When looking at forecasted opportunities, I generally take a binary approach if the opportunity is scheduled for the current quarter:
        With fewer than 12 weeks until the expected deal, I feel it is fair to expect the sales person to commit in a binary fashion.

        When the timeframe is beyond the current quarter, however, the binary approach becomes unrealistic; one has to consider a technique that allows committing a $ value to the forecast while also accounting for the likelihood of the deal materializing (the Odds to Win, if you will, in lieu of Win Probability).

        I am thinking of a hybrid approach where the weighted value for such periods is calculated as the total (un-weighted) value multiplied by the OTW factor, giving my new ‘enhanced’ weighted value.

        Ideal? Not sure.

        More thoughts? I will welcome these 🙂
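The hybrid calculation described above can be sketched as follows. This is only an illustration of the commenter's idea, with made-up opportunity values and OTW scores; in-quarter deals are committed in a binary fashion (full value or zero), while deals beyond the quarter are weighted by an Odds To Win factor:

```python
# Hypothetical sketch of the hybrid forecast described in the comment above.
# All values and OTW factors are illustrative, not from the discussion.

opportunities = [
    # (value, otw, in_current_quarter, committed)
    (100_000, 0.80, True,  True),   # in quarter, committed -> full value
    (50_000,  0.60, True,  False),  # in quarter, not committed -> zero
    (200_000, 0.50, False, None),   # beyond quarter -> value * OTW
]

def hybrid_forecast(opps):
    """Binary commit inside the quarter, OTW-weighted beyond it."""
    total = 0.0
    for value, otw, in_quarter, committed in opps:
        if in_quarter:
            total += value if committed else 0  # binary: all or nothing
        else:
            total += value * otw                # OTW-weighted expected value
    return total

print(hybrid_forecast(opportunities))  # -> 200000.0  (100k + 0 + 100k)
```

The design choice mirrors the comment: within the quarter the sales person is accountable for a yes/no commitment, so probabilities add no information; only the longer-horizon deals get a weighting.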

  22. Roy Gee permalink

    Very interesting article. I work in the construction industry (energy efficiency projects), where all projects are tendered by multiple companies. We originally used two probabilities, a ‘Go %’ and a ‘Get %’:

    Go = probability the project will proceed
    Get = probability our business will win the work

    The values of the percentages are adjusted monthly by the BDM in the rolling 12-month forecast.
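The Go/Get weighting described above amounts to multiplying each project's value by both probabilities. A minimal sketch, with hypothetical project values and percentages (none taken from the comment):

```python
# Hypothetical sketch of Go/Get weighting:
# expected value = value * Go (project proceeds) * Get (we win the work).
# All figures below are illustrative.

projects = [
    # (value, go, get)
    (500_000, 0.75, 0.50),  # 500k * 0.75 * 0.50 = 187,500
    (250_000, 0.50, 0.25),  # 250k * 0.50 * 0.25 =  31,250
]

def go_get_forecast(projects):
    """Sum of value * Go% * Get% across the tender pipeline."""
    return sum(value * go * get for value, go, get in projects)

print(go_get_forecast(projects))  # -> 218750.0
```

Splitting the probability this way separates a question the seller cannot influence (will the project be funded at all?) from one they can (will we beat the other tenderers?).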

    One item that is always debated among senior management, and that I have rejected, is the idea that the number of tenderers reduces the win %.

    Hence senior management have decided to eliminate tenders that have been distributed to more than 4 companies. They have added a third factor to capture tender numbers (i.e., 4 tenderers = 25%).

    My personal opinion is that the number of tenderers does not matter and this factor should be 50% (our business vs. all the others) until you are the last man standing, in which case 100%, unless you begin to go deeper and assess each competitor’s capabilities and previous relationships with the client, as you suggested in your article.

    While the 25% may be relevant for the client’s probability formula of who is likely to win the project, for our company I believe it is us vs. the rest (50/50).

    What is your opinion? I understand that you touched on it briefly within your article.

    Also, I would be very interested in reading your white paper on this topic; where can I source the document?


    • Roy: You’ve covered a huge amount here; I’ll answer some of it here and some in a private note to you. I’ve had a similar discussion with a number of others in the construction industry. The “Go” “Get” concept seems very powerful for the industry. In other cases, I would be inclined to say that a project should not be qualified until you know the project will proceed, but I think there is some uniqueness around the bid/tender/funding process in the construction industry that makes the Go/Get assessments interesting.

      Having said that, everyone has the same “Go” probability, so from a competitive positioning point of view, the Go probability is irrelevant.

      I tend to agree with you about basing the assessment, blindly, on the number of tenderers. I think you may end up disqualifying a very large number of opportunities you might otherwise win. Having said that, I don’t believe it’s “Us vs. the Rest” either. The assessment really needs to be driven by the actions and attitudes of the customer, the commitments they have made, and so on. This whole thing becomes very complicated in a bid process, so there are some other things you have to look at; I’ll discuss them in my note.

      I think we also have to look at why we are using probabilities in the first place. In the sales process, there are far better indicators of our competitiveness and what we need to do to maximize our ability to win. In forecasting, we have to be very careful how we apply these weighting techniques.

      Again, the construction industry is a little different than many other industries, so there are some complications many sales people will never see. Thanks for the thoughtful comment. Will send you the white paper along with a further discussion in an email. Regards, Dave

  23. Gareth Crisford permalink

    Stumbled on your article whilst googling a better way to forecast sales. Love the logic you bring in the article, and I’ve ordered your book from Amazon as a result. Thanks in advance.

  24. Hi Dave,

    Recently my co-author Dan Schultheis and I completed a book that is a follow up to our “Willing To Buy; A Questioning Framework for Effective Closing”. The new book, a companion piece, is called “The Willing to Buy Coach”. In the second book and purely by coincidence we talk about the same concern on CRM predictive analysis you do in this blog. We offer a solution using the overlay of our “framework”. We plan to reference your blog to illustrate the common frustration with the “three companies all predicting the same thing” conundrum. Even though we believe the similarities in our book and your blog are coincidental and represent a commonly held concern, we plan to mention you in positive terms and reference your blog. Can you please send along an email to the above address authorizing us to reference the blog to reinforce your point. In return I’ll be happy to send along a copy of the book when available. Let me know what, if anything, you need before you can authorize the reference to you, your blog and analytic thinking in this area.

    Best regards,


  25. Dave,

    Forget that last post. It’s been a while since we finished the manuscript. In reviewing the section I mentioned we quote your blog directly. Slipped my mind. We do credit you completely but would like to have your permission to do so. If you can send along your email address I’ll send you an excerpt so you can see how we handled it.

    Sorry about that.


Trackbacks & Pingbacks

  1. Tweets that mention The Most Used – Useless Metric In Sales! | Partners in EXCELLENCE Blog -- Making A Difference --
  2. Piling On The Pile – Still MORE On The Most Useless Metric In Sales – Todd Youngblood's "SPE" Blog
  3. Piling on! More on the Most Useless Metric in Sales
  4. The Most Popular But Useless Metric In Sales | AllformZ BI Blog
  5. The Sales Forecast, An “Informed Guess” | Partners in EXCELLENCE Blog -- Making A Difference
  6. Metrics–The Secret Weapon Of Sales Managers?? | Partners in EXCELLENCE Blog -- Making A Difference
  7. Games Sales People Play — The Challenge Of Activity Metrics | Partners in EXCELLENCE Blog -- Making A Difference
  8. The Problem With Forecasting | Partners in EXCELLENCE Blog -- Making A Difference
  9. The Problem with Forecasting | SOLDSM
  10. Sales Forecast 業務預測 | Amigo's CRM Notes
  11. Piling On The Pile – Still MORE On The Most Useless Metric In Sales – YPS Group Inc.
  12. 5 Reasons Why Salespeople Hate CRMs So Much - ProSellus
