Perhaps one of the most thoughtful recent articles is a post by Joshua Bellin, The Web Without Us. It discusses the introduction, by several of the LLM suppliers, of AI-Native Web Browsers. In a side conversation, Joshua and I started talking more broadly about Agentic AI and what it means for us as humans. The question, expressed in this post's title, is: "With the rise of Agentic AI, do we risk losing our Agency?"
I'm casually tossing around some slightly philosophical concepts here, so let me dive in more pragmatically.
When we talk about Agentic AI, we are talking about tools where the system acts as an agent for us: it takes actions on our behalf. We started using LLMs as partners, answering questions for us, giving us ideas. Perhaps we asked them to do research, and we used that research in the actions we took with our customers. Perhaps one drafted an email that we would copy, tweak, and then send ourselves.
But with Agentic AI, we are asking these tools to take independent actions for us, to represent us and, hopefully, our best interests. Much of what we've seen with early Agentic AI is that it does the menial, tedious tasks that rob us of our time. It makes decisions for us and conducts transactions on our behalf.
For example, "Make a reservation for me at a great restaurant nearby. I'd prefer continental or Mediterranean cuisine. There will be three of us…." The AI agent would search the restaurants, arrive at a decision, check our calendars, make the reservation, and post it on our calendars. All we have to do is show up and enjoy (hopefully) the meal.
In our GTM roles, we are looking at these Agents to act on behalf of our organizations. They may function as SDRs, having first conversations, trying to create an SQL (a sales-qualified lead), and scheduling a meeting with the customer on our calendars. Or they may be customer service agents helping our customers use our offerings.
Now let's talk about Agency itself and why it matters. We toss the concept of Agency around a lot, but I'm not sure we all know what it means; we may each attribute different definitions to the word. When I talk about agency, what I am referring to is:
“A sense of control over our actions and their consequences.”
Alternatively, "Our capacity to make choices, act independently, and shape our own lives and circumstances."
As we drill down into these concepts, Agency gives us the capacity to act. It's not just about thinking about something, or about our dreams and wishes; it's the ability to translate those into action to achieve them.
In taking purposeful action, we strive to improve our ability to achieve the outcomes we desire, doing so in a way that gives us not only control, but also ownership and meaning. By contrast, the absence of agency puts us in the position of being passive "victims" of what is happening outside ourselves.
What I’ve been talking about is an intense focus on defining and achieving our goals, doing so very purposefully.
And with agency, there is an implied sense of responsibility for our actions and their consequences. Through agency we are responsible for what we do, never seeking to blame others.
For those with low agency, there are feelings of helplessness, loss of control, and drifting. There is a tendency to blame our inability to achieve our goals on external factors, blaming others rather than taking responsibility for the choices we make.
Let’s get back to the core issue, “Do we risk giving up our Agency when we turn things over to Agentic AI?” Then perhaps, “How do we leverage Agentic AI to increase our Agency?”
Diving in: every time we carve something off and hand it to Agentic AI, are we giving away something that may be important to us, or something over which we feel we need control?
Sometimes, as we implement these tools, it seems easy; they free up our time. We turn over managing our calendars, our inboxes. Rather than actually talking to people, we turn those conversations over to the Agents. But over time, we may not understand why certain actions are being taken. We can't explain the reasoning, or we don't understand how these actions are being perceived. We stop feeling "in control," because we have ceded control. And in that process we lose Agency.
At this point, you might be thinking, "Is this all about being a control freak?" After all, if we look at high-impact leaders, they delegate control and responsibility to their people. And it's this concept of delegation that helps us understand the difference between what we do with our people and what we do with our Agents. Part of it is our shared values, purpose, and goals. Part of it is that we understand their reasoning and how they will do these things. This happens because we have trained and coached them in doing them. Delegation is a purposeful design. We invest in building our people's capabilities to act: we train, we coach, we develop them, and in the process we build trust and confidence. Not only our own confidence that they are doing the right things, but their own confidence in their ability to do these things.
What we are really doing, if we do this well, is building Agency. Not just our own, but that of our people.
But when we give up control to an Agentic system, much of this isn't happening. In essence, we are turning over control to a black box. We have no understanding of, and little influence over, what's happening in that black box. It doesn't share our values or purpose. We don't understand how it does what it does; we just see the end result. The models on which these systems are built are not based on deep alignment of culture, purpose, and context. They are technical and probabilistic. They don't have the historical context, background, or understanding that we use every day in exercising our Agency.
What do we do? Clearly, Agentic AI offers so much, but how do we maintain, perhaps better manage, our Agency?
Some thoughts:
- There may be some things that are unimportant, in the scheme of things, for which our Agency is really not threatened. Maybe it buys tickets to a movie I didn't like, or books the wrong restaurant, or other small things. These have little impact on our Agency, though we may periodically refine the Agent's behavior: "Stop sending me to top-rated pizza joints!"
- Be cautious about what we are delegating. Just because it can do my emails, do I really want the Agent to do them, particularly for the important, high-impact things? Do we want to delegate calendar management? Often, the challenge is not finding an available time; rather, I want to consider when and where I want to schedule a certain meeting, and that decision is made in the context of what I know has happened, what I would like to see happen, and the things surrounding the meeting. For example, I may have time to make a Zoom call while I'm sitting in an airport lounge, but do I really want to do it there?
- When we do delegate to Agents, apply the same principles we use in delegating to our people. Make sure the Agent really understands what you want it to achieve: what is the Job To Be Done? Equip it with the knowledge, training, and context that will enable it to do the job. Set the boundaries, creating the guardrails that minimize the possibility of mistakes. Continue to invest in building the capabilities of the Agent, just as you do with your people.
- Again, just as we do in delegating to our people, stay in the relationship. We sit down with our people to review and refine what they do; we need to do the same with the Agents. We cannot let them run wild.
- Retain accountability. We own the outcomes of the work the Agent does, the Agent never will. While we might share accountability with our people, this is impossible with an Agent. It simply can’t care.
Somehow, we tend to disconnect how we build Agency for ourselves and our teams from how we think about our Agency in leveraging Agents. The challenge is, where possible, to apply the same principles we use with our people, while recognizing the very real, potentially dangerous limitations of these Agents.
Afterword: This is the AI-based discussion of my post, With The Rise Of Agentic AI, Do We Risk Giving Up Our Agency? It's one of the best discussions I've heard. In my article, I'm a bit philosophical and abstract. The discussion simplifies it without losing any of what I was trying to communicate in the post. Be sure to listen to it. Enjoy.