We know trust and trustworthiness are critical to our ability to engage prospects and customers. They are the foundation of our ability to develop and maintain relationships. And we know what happens when, inadvertently or purposefully, we betray that trust.
There are other concepts intermingled with trust and trustworthiness. Integrity, consistency, meeting commitments, knowledge, honesty, values and value, and caring are all elements of establishing and maintaining trust.
In a world in which AI's role in buying and selling becomes increasingly important; in a world where we exploit LLMs to drive our ability to engage customers and for them to engage us, where do trust and trustworthiness fit?
We’ve all seen massive inaccuracies produced by LLMs. Some because of the lack of currency in their databases. Some because of their inability to discriminate between good and bad data. We’ve seen the ability of AI to live up to its name–creating things that are truly artificial, whether content, images, voices, or other things. We’ve created a new concept, “deepfakes.” In other sectors, we’ve seen the use of AI to represent the work of others create huge challenges. AI was a core issue in the SAG and Writers’ strikes.
While LLMs and other AI applications have great power to help us improve, as with most of these technologies, they represent a double-edged sword. They can provide great insight and help; at the same time, they can be horribly wrong and damaging. The challenge comes in figuring out which is which. What do we trust? What do we ignore?
As I was talking about this issue with Charlie Green, he shared a story of his own use of ChatGPT. He had asked for an assessment of and insight about Charlie Green. What it returned was fascinating. A lot was very accurate: it cited some of his work, his writing, and other things. But it had also blended in an assessment of another Charlie Green who had also written books and done interesting work–but in an entirely different field! To someone who didn’t know better, this composite profile was “Charlie Green.” (Charlie didn’t appreciate it when I asked, “Is that composite person more interesting than you? I might want to get to know him…..”)
This challenge is not unique to AI and LLMs. We see it in social channels, we see it in the commercial and business media, we see it in all sorts of content we consume through so many channels. And much of it isn’t done maliciously; often it’s simply differing points of view. As a result, we are inundated with information and struggle to determine what makes the most sense for us and our situation.
The challenge with LLMs is that they have the potential to amplify this problem beyond our ability to recognize and manage it.
While the technology companies will do what they can to manage this, their ability to do so is limited. We’ve seen the social platforms struggling and failing with this. We’ve seen our other information sources struggling, as well.
What do we trust? How do we gain confidence the information we are consuming is accurate?
While many of the technology suppliers recognize this problem and are exploring various solutions, including some regulatory ones, what do we and our customers do? How do we figure out what is trustworthy?
We’ve actually had the solution to this for decades. It’s the human-to-human connection. It is people who are trusting and trustworthy working together, bringing their collective experience and knowledge to make sense of what they are seeing.
As Charlie pointed out, “For people in sales, this means the way to distinguish yourself from competitors who use AI to help sell is simple: be obviously human. Observe individual aspects of your customer; share personal aspects of yourself. Share your uniqueness, and notice your customers’ uniquenesses.”
And this is becoming more important and distinctive. As we see an increasing shift to digital interactions and engagement, accelerated by AI, human interactions become rarer. This means those that remain become even more important and impactful.
This is critical to personal trust, establishing confidence in our interactions with each other. Charlie also posed the issue of “institutional trust.” Institutional trust is built from the aggregate of the behaviors of each individual in the organization. He posits, “If all people in a company behaved all the time in both trusting and trustworthy ways with all stakeholders, then I believe greater corporate trust would be greatly enhanced within a matter of months, whereas paying a PR firm millions can only result in skin-deep and unsustainable results.”
Trust and trustworthiness have become increasingly challenging in the face of overwhelming, conflicting information. LLMs and AI amplify these challenges at an unimaginable scale.
The constant through all of this is individuals interacting with individuals in consistent, caring ways. As Charlie states, “The way to distinguish yourself is to be obviously, relentlessly human!”
Afterword: Thanks so much, Charlie–and you are more interesting than the character ChatGPT constructed 😉