Yeah, yeah, I’m on my soapbox again, talking about AI. Lest anyone think I’m not a fan of AI, nothing could be more incorrect. I’ve been studying AI for years. In 2002, I cofounded a company that offered an AI tool focused on a couple of problem areas: manufacturing process control and prescriptive maintenance. The company was sold to a major software company, and the tool has been updated and expanded since then, though it still focuses on those two specific domains.
That experience helped me begin to understand the power and limitations of AI.
Our tool, like more modern tools, enabled people to see things they would never have been able to see before. For example, we could analyze millions of pieces of data, across thousands of variables, to give people insights into, in this case, their manufacturing processes.
It could suggest all sorts of different solutions to manufacturing problems. Some of the solutions created “Aha” moments. Some were just plain dumb. And some were just plain incorrect. But the tool couldn’t distinguish between them; it offered our customers all the solutions and asked them to make a choice.
The other thing we discovered is that the tool couldn’t distinguish between good and bad data; it was all just data to be analyzed for recognizable patterns. For example, our customers would occasionally “flush out” certain manufacturing lines when switching materials, machines, products, or any number of things, and they knew the results of any analysis of that period would be meaningless. But the tool didn’t realize that. Every once in a while, a customer would forget to delete that data, and our tool would treat it as valid and incorporate it into the analysis.
In some sense, we discovered AI wasn’t really very smart. It was just really good at sorting through the data and showing us different patterns. But its ability to process large amounts of data, very quickly, brought our customers new capabilities to drive improvements in process yields, things that were unimaginable before.
But we discovered a few things that were really important to the successful implementation of the tool:
- You had to be very thoughtful about the questions you posed. As I discussed, the tool could provide hundreds of insights, some thoughtful, some wrong, and some just plain stupid. If you didn’t pose very thoughtful questions, you were likely to get a flood of responses that were meaningless.
- You had to be very smart to make meaning of the responses you got. Our customers had to have deep expertise in the manufacturing processes they were analyzing. Without it, they couldn’t tell which recommendations were very powerful, which could create millions of dollars of garbage, and which might be outright dangerous.
- This deep knowledge enabled our customers to interrogate the models, refining the insights provided by the tool and getting an even deeper understanding of the problems they faced.
- But if they didn’t have that knowledge, they didn’t know how to deal with the responses the tool provided.
- One of the problems with current solutions is the sourcing and currency of data. Our customers knew that 100% of the data came from their own processes. They knew the source, currency, and accuracy of the data. But even with that high level of control over the validity of the data, they could be pointed in the wrong direction (for example, the “flushing” of the lines could profoundly distort the findings). Making sure they were dealing with the right data, that it was current, and that they understood its source was critical to their success (see the sketch after this list).
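To make the “flushing” problem concrete, here’s a minimal sketch of the kind of filtering that had to happen before any analysis. It’s illustrative only: the readings and the is_flush flag are hypothetical, not our actual tool’s schema.

```python
import pandas as pd

# Hypothetical sensor log from a manufacturing line. The is_flush flag
# marks readings taken while the line was being flushed (illustrative
# column names, not a real schema).
readings = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=6, freq="h"),
    "temperature": [201.5, 202.1, 350.0, 348.9, 203.0, 202.4],
    "is_flush": [False, False, True, True, False, False],
})

# Exclude flush periods before looking for patterns; otherwise the tool
# treats the transient readings as valid, and the "patterns" it finds
# reflect the changeover, not the process.
valid = readings[~readings["is_flush"]]

print(f"Kept {len(valid)} of {len(readings)} readings for analysis")
```

The code is trivial on purpose: the hard part isn’t the filtering, it’s that someone with deep process knowledge has to know which data to flag in the first place.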
The tools that are currently “fashionable,” like ChatGPT, offer interesting capabilities, but they have huge limitations. Fortunately, OpenAI recognizes many of these, providing warnings. In spite of that, we are regaled with hundreds of examples of bad recommendations, even dangerous ones.
Companies like Microsoft, Google, and others have had disastrous results from some of their generative AI solutions. In some sense, it’s not the fault of the technology, but since it is fundamentally stupid, it can’t distinguish between what is good, bad, real, or false, or recognize when it’s being manipulated.
And perhaps it’s not the job of the technology to do this. It could certainly do better, but perhaps it’s the responsibility of the people using these tools to apply them smartly, with some degree of skepticism, filtering what they get and how they leverage it.
There’s a lot of hype going on right now. There’s the saying, “People won’t lose their jobs to AI, they will lose their jobs to people using AI….”
That’s actually incorrect. A more informed insight would be, “People won’t lose their jobs to AI, they will lose their jobs to wickedly smart people using AI very intelligently….”
So the key question is, are you one of those people?