We have become enamored with all the things AI can do to help sellers. It offers the potential to eliminate all sorts of tasks that somehow seem to fill our time: updating the CRM, drafting prospecting letters and proposals, researching industries, markets, and customer issues. The list of things that filled our time goes on.
And as both AI and our understanding of it develop, we are discovering things that AI can't do. Yet those things are critical to our jobs.
Can you do those things? Are you doing those things with the time that AI is freeing from other tasks?
Without people filling these gaps (really, chasms), regardless of how well we leverage AI, we will fail! We will fail our customers, we will fail to achieve our goals, and we will fail our people!
What are some of the things AI can't do that are so critical for us humans, both sellers and sales leaders, to focus on?
The first, most glaring area is "Emotional Intelligence and Empathy." Business, including selling, is about people interacting and engaging with other people. How are we connecting with people, and how do they feel about what they are doing? We know that at least 60% of planned purchases end in No Decision Made. And the underlying reasons have nothing to do with the facts, data, and capabilities of our solutions. They have everything to do with how people feel about the change initiative. Are they doing the right thing for the organization? Is it the most important area of focus at this time? Are they doing the right thing for themselves and what they want to achieve? We've seen all sorts of research on lack of commitment to change, FOFU, sensemaking, and more. These are all deeply human issues that can only be addressed by humans working with humans.
Underlying all of this is building trust and relationships. We build trust with each other, and we betray it with each other. But trust is core to everything we do with our people, with each other, and with our customers.
AI is also very weak at complex problem solving. Complex problems are never static. Perhaps the best way to visualize problems in the real world is as an amoeba. Amoebas are constantly changing shape and shifting direction. We can never define what an amoeba looks like, because it's always changing.
And that's what problems in the real world look like. They are constantly evolving. As we learn more about a problem, things change. The more people involved, the more the perspectives on the problem, its impact, and what should be done keep shifting. Ambiguity underlies everything we see in complex problem solving. And regardless of how many times we solve a certain category of problem, each instance is nuanced and different.
In helping customers understand and address their change initiatives, we have to be sensitive to the constant change and nuance that underlie every initiative. We have to help our customers and our people navigate these issues successfully.
And none of these problems exist in isolation. Each exists in a larger strategic context, unique to each customer. They exist in the context of the overall business strategies, market dynamics, and competitive landscape. They exist in the context of organizational cultures and values. They exist in the context of the expectations of each customer's own customers, employees, partners, shareholders, and communities. And these contexts constantly evolve.
What about innovation and creative and critical thinking? While AI can help us see patterns and things that were very difficult or impossible to see in the past, we are still faced with the question, "What do we do about them?" And how do we take very disparate, disconnected ideas and put them together to discover new and novel approaches, things no one imagined, yet which become obvious once we see them?
Whether grand-scale disruptive innovation or the hyper-localized "we could do this differently," it is human imagination, curiosity, and creativity that underlie every change we make.
Then we start thinking about leadership, motivation, people/organizational development. How do we inspire and create a vision for what we do and where we go? How do we align different people with varied interests, around a purpose, mission, and goals? How do we engage each individual, coaching and developing them to achieve their dreams?
And then we look at cultural nuances and sensitivities. This starts with our organizational cultures: what we value, our purpose, what we achieve, and our collective "why." These change from organization to organization, and over time. Then we expand our view to look across the world at differing cultures, value systems, and ways of working.
AI is great at responding to what we ask it to do. But so much of what we do is "unscripted" or ad hoc. We have to be agile enough, imaginative enough, to understand and respond to these situations.
I'll stop here, but as you think about how we work individually, organizationally, with our customers, and with others, there are so many other things AI isn't very good at. Again, it helps us with bits and pieces, but it can't put them together in the moment.
I'm excited about the future and what AI can do to help us. I leverage AI tools every day; they save me time, handle tedious things I hate doing, and even give me ideas.
What worries me about our obsession with AI is that we focus so much on what it can do that we don't look at what it can't: the things that only we can do. Then I worry, do we have the skills and capabilities to do these things?
Until we address these, we will never leverage AI the way it could be leveraged. We will never achieve the goals we can and should achieve.
Afterword: I'm experimenting with a new AI tool that provides thoughtful discussions about this article. The recording is below. There is one small error: they refer to my friend Charlie Green as "Green, Charles H." Enjoy!
Green Charles H. says
Too many words, Dave! I’m kidding, but the genius of this piece is its simplicity: whether you call it a paradigm shift, inside out /upside down, flip it and reverse it, it doesn’t matter. Rather than start with what AI can do, let’s start with what it can’t, and see what emerges from looking backward through the telescope. I think what emerges is quite a bit. You managed to emphasize the distinctive non-humanity of AI, and in the process, highlight both its potential and its limitations in a unique and powerful way. From such simple shifts in mindset, great insights and perspectives follow. You’ve still got me thinking…
Thank you.
David Brock says
Thanks so much, Charlie! It's always struck me that this part of the AI conversation is missing. We don't talk about it, or about what it takes to fill those gaps. Sadly, if we don't fill them, however much AI can do, we will still fail.