My feeds are filled with all sorts of stuff on Generative AI and LLMs. All sorts of "experts" offer miracle cures and prompts that "give us the answers, doing all the work for us." Most of what I see from sellers leveraging ChatGPT and similar tools is well written, but unimpressive. These outreaches are similarly ambiguous and nonspecific, hence usually irrelevant.
As I look at how too many sellers are using Generative AI, I wonder: are we using these tools to their full potential? Or are we misusing them, tragically?
Before I dive into this discussion, I do recognize we are in the very early stages of learning how to exploit these tools for greatest value. The tools themselves are evolving quickly. But having gone through several of these technology disruptions in the past, I recognize we usually find the mediocre-to-poor uses first; as we gain experience, we improve our understanding and start to leverage the technology to its full potential. What we see now is no different: most of it is dull, uninspired, and often very inaccurate. We will move beyond this.
One of the mistakes I think too many make in the use of these tools is that we are using them to give us the answers, rather than helping us discover the answers.
Let me dive into what that means.
To do this, I'll share an experience from about 22 years ago. I was fortunate to be part of the founding team of a very early AI start-up. We had a remarkable technology. As we started the company, we realized it could be applied in dozens of industries to hundreds of different problems. We had a major European telecom company using it for customer care, retention, renewal, and growth. We had a pharma company using it for drug discovery. We had several manufacturing companies using it to improve manufacturing processes, quality, and yields. We had airlines using it for predictive maintenance. We even had about half a dozen F1 teams using it for deep analytics on race performance.
While these were all exciting applications, we realized we had to focus. So for our first few years, we focused exclusively on manufacturing process control and prescriptive maintenance.
We started working with major manufacturers, analyzing their manufacturing performance: yields, quality, scrap, throughput, supply chain management. We could take millions of disparate data points and provide insights that were never before possible. We showed our customers how they could save hundreds of millions by managing their manufacturing processes more effectively.
But our customers struggled with using the tools. Actually, it wasn't "using the tool"; they could dump any and all data into our system and we could leverage it. The challenge was in understanding and leveraging the results. We were at an early stage of AI, so we didn't hand the customer "the answer." Part of that was limitations in the tool; part of it was that we wanted to give our customers the ability to choose the best answer. We could show them all sorts of alternatives. We showed "patterns" that drove great results and patterns that drove failures. They could look at alternative changes, choosing the one that best fit their goals. Sometimes it was optimizing materials costs. Sometimes it was changing the settings on their manufacturing machines. Sometimes the best choice would have taken too long or cost too much to implement, so they would consciously choose the second best.
Early on, while we could provide phenomenal insights, our customers struggled. The problem was not in using the tool, but in making choices and in refining and narrowing the analysis. To help customers get up to speed, I started "shipping" an engineer with each license. These engineers helped customers understand the choices they were presented with. They helped them refine their queries (the term "prompt engineering" hadn't been invented yet) to produce better choices. They helped them figure out how they might choose.
Even though the customers were paying for those engineers, many of them came from my development and engineering teams. Sending them to our customers was adversely impacting our product development.
As we studied the problem, we discovered that while we provided great capabilities and insights, the customers struggled with problem solving and critical thinking. They were tremendously smart people, but understanding the data, understanding the choices, and figuring out which answer best served what they were trying to achieve required skills they didn't have. Even something as simple as refining their queries to produce fewer, better choices required analytic skills many of our customers lacked.
To address this challenge, we refined our ICP. We had a huge amount of interest and demand, but we realized we didn't have the resources to support all of it. We started looking at customers that had demonstrated skills in problem solving, understanding complexity, and critical thinking. We discovered that companies with a strong commitment to Six Sigma, lean, and agile were more likely to have those resources in place. Those people could help the manufacturing engineers assess the options and more easily choose the best alternative.
Let me go back to this last point. All our customers were wickedly smart, but they were weak in the skill sets critical to using our tools to maximum impact. People with strong backgrounds in problem solving, critical thinking, lean, or agile approached the problem-solving process in a way that let them get great advantage from the tool.
Let’s fast forward to today.
First, the state of the art has advanced far beyond where we were twenty years ago. In many ways that is good, but in some ways it's bad, or at least the way we are leveraging it is bad. Rather than giving us choices, it pretends to give us the answers. For those who blindly accept them, the answers may be very wrong. AI doesn't know what the right answer is, it doesn't know what the best answer is, but we let it pretend to. Sometimes it's helpful and sometimes it's not, but it may take time to realize which.
The best users of AI technologies recognize this. One of the highest-demand job categories across disciplines and industries is "prompt engineering." Prompt engineers command $100Ks in comp. Their deep skill is not in how to use the tools (in most cases that's trivial; after all, even I can use ChatGPT), but in how to get the best results out of them.
The companies exploiting Generative AI and LLMs best are using the tools to help discover the best answers, not to hand them the answers. Prompt engineers constantly refine their prompts and queries. They understand the limitations of the tools, for example limitations in the underlying data; they may limit the data the model draws on and assess its quality. They recognize how the tool might produce wrong answers. They guide the model's responses to be more targeted to the context and issues they are trying to address. Rather than a generic prompt, they provide very specific contexts and situations for which they seek answers. They explicitly exclude data that may produce irrelevant or erroneous answers. They refine the prompts to maximize model performance. And they are constantly on the lookout for misinterpretations and errors.
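The difference between a generic prompt and the kind of targeted, constrained prompt described above can be sketched in a few lines. This is only an illustration: the field names and example values below are hypothetical, not drawn from any real prompt-engineering tool or customer situation.

```python
# Hypothetical sketch: contrast a generic prompt with a context-rich one
# that states the situation, the goal, and what to exclude.

GENERIC_PROMPT = "Write a prospecting email to a CRO in the technology sector."

def build_refined_prompt(context: dict) -> str:
    """Assemble a prompt that supplies specific context, states the goal,
    and explicitly excludes sources likely to produce irrelevant answers."""
    return "\n".join([
        f"Role: {context['role']} at {context['company']}",
        f"Situation: {context['situation']}",
        f"Goal: {context['goal']}",
        f"Exclude: {context['exclude']}",  # steer the model away from noise
        "Task: Draft a short email addressing only the situation above.",
    ])

# Hypothetical example values, for illustration only.
refined = build_refined_prompt({
    "role": "CRO",
    "company": "a mid-market SaaS firm",
    "situation": "pipeline growth has stalled despite added headcount",
    "goal": "open a conversation about win-rate analysis",
    "exclude": "generic industry statistics and boilerplate value claims",
})
```

Each refinement cycle would then inspect the model's response, tighten the situation and exclusions, and try again, which is the iterative loop the best practitioners run constantly.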
Leveraging these tools to their greatest advantage requires wickedly smart people and different skills.
Even with this, we are seeing that the end users leveraging the outputs prompt engineers have developed must also be wickedly smart: not blindly accepting the answers the tool gives them, but assessing those answers in the specific contexts and situations they are dealing with. Look at doctors using these tools: they typically have deep expertise in the diseases and therapies they use the tools for. Going back to my old customers using our tool for manufacturing process control, the people getting the most out of the tool had the greatest expertise in the manufacturing technologies and the specific issues they sought to address.
The best users and uses of these tools tend to have the deepest expertise in the problems they are trying to solve. They use the tools less for giving them the answers, more for helping them develop the answers.
I’ve been listening to a lot of interviews of the leading thinkers in AI and the developers of those tools. Their message is similar.
As I look at the way too many sellers are leveraging these tools, I worry that we are using them in the worst ways possible. Too many are not applying the critical thinking and problem-solving capabilities that create the best results. They aren't steering the tools to the issues and contexts most relevant to what they face. They aren't testing alternatives with the tool, looking for different answers. They aren't checking whether the answers make sense.
Instead, we see "expert" advice on writing a prospecting email to a CRO in the technology sector…
I'm a huge fan of LLMs. I use them every day (in fact, I used them to better explore strengths and weaknesses in high-impact prompt engineering). I use them as a debate partner, to help me think differently, to show me flaws in my logic, to help me identify something I may have missed. I use them to help me find answers, but never to give me answers.
These tools are so powerful. I worry about the "taint" they may acquire from the absurd misuse I see dominating my feeds daily. When I talk to executives, they see the mindless stuff inflicted on them. We've all developed sophisticated "this was generated by AI" sensors that help us ignore the outreach inflicted on us. Too many are asking, "Is this what we get?"
These tools are capable of so much more. When we use them well, we are capable of so much more.