Every week, we are deluged with articles about the impact of AI on jobs and employment, particularly the disappearance of entry-level roles. Senior executives announce headcount reductions. Analysts publish projections.
The concern is real. Data points are cherry-picked: entry-level coding, legal research, consultants dedicated to generating Excel charts and PowerPoint decks, accounting and finance, HR, and the list goes on. In GTM, we see junior-level marketing, sales, and customer service jobs being displaced by AI agents.
What’s missing in all of this celebration and hand-wringing are the serious questions about what’s actually happening and what it means for the future of organizations.
If we cut the majority of entry-level white-collar and professional jobs, where does mid-level talent come from? Five to ten years from now, the supply line for those mid-level jobs will have dried up, and the shortage will ripple through every level of the organization. Where do the senior leaders of 2035 develop the instincts, the experience, the professional judgment? Where do they earn the time-tested scars critical to those roles?
We are so focused on the current disruption that we have forgotten what entry-level work was actually for.
It was never primarily about the output: grinding through spreadsheets, sending outbound emails, ramping up on calls, generating decks. We never called it an apprenticeship, but that is what entry-level jobs have always been.
The work was repetitive and heavily supervised because that is how professional judgment and experience are developed. Junior consultants learned to understand real business problems before anyone let them talk to clients. SDRs and BDRs developed enough understanding of customers and conversations to eventually carry a full quota as account executives. The judgment critical to middle and senior roles was developed by accumulating experience in each prior role, starting with the first job.
When we eliminate these roles in the name of efficiency, we are not just changing headcount. We are breaking the supply chain for the roles critical to success in any growing organization.
We’ve been through this before; the difference is the rate of change. Industrialization destroyed agrarian jobs, robotics and mass manufacturing changed factory work, typing pools and switchboard operators disappeared, and advances in IT moved us from Assembler to far more sophisticated ways of coding. In every case, new roles emerged.
New jobs emerged, and people and organizations adapted. What’s different in this transformation is the speed at which it is happening. Previous shifts played out over years, and for some of the earlier ones, decades. They gave us time to understand the changes and develop the supply chain for the new roles.
Today, the prevailing opinion is, “This time it’s different!” Perhaps it is, but looking back at recent history, past shifts also happened in fairly short periods. I was indirectly involved in the shift from typists, to dedicated word processing, to PC-based tools that eliminated the need for word processors. That complete change took less than ten years. Likewise, I saw similar shifts in IT coding roles.
In every one of these past cases, new roles emerged, people adapted, and the workforces restructured themselves around different kinds of work. What is different today is the speed of the transition, and that is a legitimate concern.
We are only a few years into serious AI adoption. We are seeing as many organizations backtrack on changes they have made, because the expected results aren’t happening, as we are seeing bold declarations of transformation. This is familiar ground.
There is, however, another dimension of this disruption that almost no one is discussing. Organizations are using AI as an excuse for corrections that should have been made years ago. In many cases, rather than fixing the underlying problems, they are using AI to do the same broken things faster and at lower cost, and calling it progress because it’s now an AI agent!
The SDR and BDR roles in sales are the most visible example. Response rates on outbound prospecting had been collapsing for years before anyone seriously deployed AI in those functions. The model was already broken. Spray-and-pray outreach at industrial scale was never real selling, and the metrics showed it before the industry admitted it.
These roles weren’t developing the next generation of great salespeople; they were generating activity that looked like pipeline while producing diminishing returns, and teaching an entire generation of early-career sellers that selling means interrupting strangers at volume. AI didn’t kill the SDR. The SDR model killed itself.
And the role has never served as an apprenticeship for developing the AE talent organizations need. We see this in declining win rates and in the shrinking share of reps making quota. The entire model has been breaking.
What makes this worse is that too many organizations have responded by keeping the broken model intact and simply replacing the humans with AI. Headcount is down. Spending is down.
They are now scaling a failed model with greater efficiency, scaling dysfunction, and measuring success by the volume of the failure. The AI hasn’t fixed anything. It has just made the mistake cheaper and faster and harder to see.
The same pattern appears across professional services. Routine legal document review performed by junior associates at enormous cost to clients was already an inefficient artifact of how law firms structured billing, not a necessary developmental experience. The same is true in virtually every sector of professional services. It has always been easier to throw labor at things that could have been restructured or eliminated, but doing that required asking whether the organization was creating real value or simply finding ways to bill for more time. AI is not forcing that question. In most cases, it is allowing organizations to avoid it.
The talent pipeline problem and the structural correction problem are not separate. They are the same problem presented differently. We have eliminated roles that were already failing, calling it transformation. But we are doing the same things, only with AI agents.
We are not asking what really needs to change. Who are our real customers? What does value creation mean to those customers? How do we develop the skills and capabilities to do this? What does it mean to how we deploy technology? What human skills are needed? How do we develop these skills? What does it mean to develop and lead an organization in doing this? How do we develop these leaders? Are we building the foundations for the next generation of capability?
These questions need to be answered. They represent the future of our organizations and how we create value with our customers. Without answers, we miss the real opportunities to grow and remain unprepared to understand and develop the human skills these strategies require.
The person who evaluates AI output with genuine professional skepticism, who knows when the model is confidently wrong and why it matters, is not a prompt engineer. That role requires judgment, and judgment is learned. The person who designs how humans and AI work together inside a complex organization is not a technologist. That person is a workflow and organizational designer who understands what AI can and cannot do, and where human connection is critical.
The salesperson who can have a real, substantive, trust-building conversation with a customer, who can help them navigate the politics, the risk aversion, and the competing agendas, is more important now than five years ago, because AI has absorbed everything else. But where do these people learn to have those conversations? What do their first two years look like? Who is designing that experience?
The coder who is not writing boilerplate but doing architecture and quality reviews, who can make the judgment calls about where AI-generated code will fail in ways that matter: that requires experience. Where does the twenty-two-year-old who wants to become that person spend the years between graduation and genuine competence?
These are not rhetorical questions. They are operational problems that organizations are creating for themselves right now, in real time, without acknowledging it.
Here is the issue at the center of all of this. The organizations eliminating entry-level roles are making an implicit bet that they will be able to find or develop the mid-level and senior talent they need in the future. We can already see what that bet looks like in practice. Meta announces layoffs of hundreds while simultaneously spending hundreds of millions of dollars in compensation packages to recruit a handful of elite AI researchers.
The logic is seductive: why develop talent when you can buy it? But the math doesn’t work at scale, and it never will. You cannot replace a pipeline with a handful of expensive exceptions. The people you are spending hundreds of millions to acquire today were developed somewhere, by someone, over years of accumulated experience.
And while organizations like Meta may be able to do this for a while, few organizations can do it today, and fewer still can sustain it over the years.
When you eliminate the apprenticeship, you eliminate the source. You are not solving the talent problem. You are borrowing against a supply you are simultaneously destroying.
Most of the organizations making these decisions have not examined that contradiction. They are optimizing for this quarter’s cost structure and this year’s earnings announcement while ignoring the next decade. Too many leaders focus on the current month, quarter, and year, but they are not building organizations that can be sustained.
The disruption is real. Some of it is overdue. But we are not going to build anything worth leading by scaling what is broken, eliminating the apprenticeships we need, and calling cost reduction a transformation.
So here is the call to action, and it is directed at the executives making these decisions right now. Stop asking where else you can apply AI to automate what has already failed. Start asking what genuine value creation looks like in a world where AI handles the routine. Then work backward from that answer to understand what kind of people you need, what they need to learn, and how you are going to develop them.
That means designing new entry-level roles that build real judgment, not just deploying agents to replace the old ones. It means treating talent development as a strategic investment with a ten-year horizon, not a cost line to be minimized. It means being honest about which roles you eliminated because AI made them unnecessary and which ones you eliminated because it was convenient and you needed a headline.
The leaders who ask these questions seriously, starting now, are the ones who will have something to run in 2035. The ones who don’t will be spending hundreds of millions they don’t have, chasing talent that doesn’t exist, and wondering what happened to their pipeline.
This conversation is barely beginning, but it is the one we must be having now.
Afterword: Here is the link to an outstanding AI-generated discussion about this post. Enjoy!
