“If you can’t do it yourself, you probably shouldn’t be using AI!” This was a brilliant observation by James Pursey in an outstanding seminar on AI in sales. The only modification I would make is, “If you can’t do it excellently yourself, you probably shouldn’t be using AI!”
James gets at the root of so many of the issues we see with the terrible use of LLMs in selling, marketing, and customer service. We have all been subjected to horrible LLM-generated emails, social media conversations, and AI-generated posts. Many have fallen victim to misstatements and hallucinations produced by the LLMs. Many are doing mediocre-to-bad research and call prep using these tools.
We have been lulled into complacency. These tools take away all the work we don’t want to do, all the tedious work, all the work we struggle with most. “I don’t have to struggle with outbound emails, I’m not good at it, so AI does it for me!”
What James and his colleagues were discussing is that to get the most out of these tools, you must have fairly deep expertise. You can’t simply write a prompt, “Write me a prospecting email for these personas in these companies, about this issue…..” To get the best results, the best people work through very sophisticated sequences of prompts, to more deeply understand the issue and to more expertly refine and tune the responses. But to do this effectively and efficiently, you have to have a high degree of knowledge about what you are trying to do, who you are trying to do it with, what works, and what might not work.
The LLMs can be a tremendous help with this–but only if you have the ability to engage them in those discussions.
Let me give you an example:
Years ago, I co-founded an AI company (we focused on neural networks), focused on improving process-based manufacturing. We could look at a manufacturing line, collect millions of pieces of data, and provide insights that had previously been unimaginable. We had several early clients (very large manufacturing companies) that believed they could save hundreds of millions of dollars, annually, with the implementation of our tools. They were eager to start. The tool was pretty easy to use. Just dump in the data; the customer didn’t have to clean it up much, we could do that. Go grab a cup of coffee, perhaps lunch (in those days it took a little while to crunch all that information), then come back and look at the results.
And this is where the problems started arising. Our tool couldn’t provide a single ideal recommendation. It could provide a number of recommendations (dozens to hundreds). Which recommendation should the customer choose? There were recommendations involving changing the materials, changing how those materials were processed, changing the molds that were used, changing process times, recommending minor redesigns, and the list went on.
Any of those recommendations could work. Most produced pretty similar final results; some might be a little better. But which would the customer choose? In some sense, they could make no incorrect choice, but the issue was: which was the best for them?
And the customers struggled with making the choices, worrying whether they were doing the right things.
What we discovered was our customers needed to have deep problem solving expertise, and deep knowledge of a few key elements of their processing technologies. Without that, they struggled with making a choice.
In the short term, we solved the problem solving expertise gap ourselves. My developers were among the best problem solvers around (which is why they could develop this tool). We could send one of them to the customer, for very large fees, and they could help the customer choose which solution to select and put in place. But for a small company, that was unsustainable. Every engineer I shipped out slowed our development.
Recognizing that the most difficult part of using our technology was not the data or the tool itself, but the problem solving required to assess the best solution for the customer’s circumstances, we started qualifying customers based on their problem solving expertise. At the time, a reasonable surrogate for this was any company committed to Six Sigma. We knew their black belts had the problem solving knowledge and competence to leverage our tool with the highest impact.
We see the same thing playing out in our early implementations of AI and LLMs.
Those with low knowledge and capabilities have no way to evaluate whether they are getting the best from the LLM. To them, everything looks good, even though, based on what we see them doing, it is wretched!
This afternoon, James and his colleagues spoke about how they use LLMs. Their experience aligns with how I see the best using them. They use LLMs less for providing the answers than for helping them generate the best answers and approaches. They work through a series of prompts: “Here’s a scenario, what are 5 ways I might deal with it? What are the biggest 3 flaws in each of those approaches? Where is the customer going to struggle the most? How might our competitors respond? How do I deal with these issues? Given what we’ve done, is there a completely different approach we might take……..?”
They use these tools as thought partners, as debate partners, as partners that can help them generate new ideas. But to get quality responses from this process, each of them has great expertise in what they are doing. They can identify the flaws, the hallucinations. They can refine the prompts to improve and more narrowly focus the answers. And at the end, they may say, “Give me a prospecting email for this kind of persona, in this type of business, focusing on these issues….” Then they take that draft, enhance it with their experience of the customer, markets, and situation, send it, and get great results.
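For the technically inclined, the “thought partner” workflow above can be sketched as a simple prompt chain. This is a minimal, hypothetical illustration: the function names and the `ask()` stub are my own, not anything from the seminar, and in practice `ask()` would be replaced by a call to whatever LLM you use.

```python
# A minimal sketch of the prompt chain described above. All names here
# (build_prompt_chain, run_chain, ask) are illustrative assumptions;
# ask() stands in for any real LLM call.

def build_prompt_chain(scenario: str) -> list[str]:
    """Return the sequence of prompts to work through, in order."""
    return [
        f"Here's a scenario: {scenario}. What are 5 ways I might deal with it?",
        "What are the biggest 3 flaws in each of those approaches?",
        "Where is the customer going to struggle the most?",
        "How might our competitors respond, and how do I deal with that?",
        "Given what we've done, is there a completely different approach?",
    ]

def run_chain(scenario: str, ask) -> list[tuple[str, str]]:
    """Feed each prompt, plus the running transcript, to the LLM via ask()."""
    transcript: list[tuple[str, str]] = []
    context = ""
    for prompt in build_prompt_chain(scenario):
        reply = ask(context + prompt)  # each step sees all earlier Q&A
        transcript.append((prompt, reply))
        context += f"Q: {prompt}\nA: {reply}\n\n"
    return transcript
```

The point of the design is the accumulating `context`: each question interrogates the answers that came before it, which is exactly the debate-partner pattern, rather than a single one-shot “write me an email” prompt.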
So what does this mean as we look to leverage these tools with greatest impact? Some thoughts:
- We need to continue to focus on improving the expertise of our people. Not just in our products, but in our customers, their businesses, and how they recognize the problems we solve.
- We need to improve our people’s business and financial acumen, because that’s the language of the decision makers in complex B2B buying decisions.
- We need to develop our people’s curiosity–both in how they engage customers in talking about their problems and dreams, and in how they do clever prompt engineering to get the most out of these fantastic tools.
- Hand in hand with curiosity is critical thinking. We need to engage our customers in deep, personalized discussions of what they face, and we need to be able to carry on the conversation. We leverage the same capabilities in working with the AI tools to help us plan what we might do with those customers.
- We need to develop our people’s problem solving/project management capabilities. This is where our customers struggle and we are, perhaps, the best at helping customers navigate the process. And with this knowledge, the AI tools can help us tremendously and help automate much of what we do–but only if we know how to do it.
- We need to conduct high impact collaborative conversations with our customers, helping them learn, grow, change. If you think about it, the characteristics of these conversations are very similar to sophisticated prompt engineering. What a great learning tool: we can start practicing with our LLMs.
- Finally, we have to recognize the things that LLMs and AI can’t do. Then we have to make sure we are expert at doing those things. This is where the true power of humans and the technology comes together.
James was very polite and proper in his statement, “If you can’t do it yourself, you probably shouldn’t use AI.” One would expect this from a Brit (sorry, yeah I know it’s a stereotype, but hang in there).
I’m a little more crude: Idiots using AI will produce crap at the speed of light!
We have a choice. If we want to leverage the real power and promise of these tools, we have to develop the capabilities of our sellers, marketers, managers, and leaders. We have the opportunity to amplify what we do and how we do it, connecting with much greater impact in every customer engagement.
Afterword: This is the AI-generated conversation about this article. I’m always amused at how these AI characters talk about themselves and AI. It’s always AI as something separate from them that they are commenting on. So there’s an underlying irony in this conversation.
Having said this, the discussion always brings up ideas I hadn’t considered in the article. Enjoy!