
Are Humans Useless in the AI Workplace?

April 5, 2026 · 7 min read · 1,320 words
AI and Employment · AI · AI in Education · Klarna · Generative AI

Key insights

  • AI is hollowing out jobs rather than eliminating them outright, stripping away repetitive tasks and leaving the strategic, relational core. This demands a fundamental rethink of what work actually is.
  • Employees are already using AI tools without manager awareness, quietly replacing colleague interactions with chatbots and eroding the social fabric that makes organizations function.
  • Klarna cut roughly 700 jobs with an AI chatbot, then had to reverse course when quality dropped and customers asked for human empathy. Short-term cost savings can become long-term brand damage.
  • Cutting entry-level roles creates a diamond-shaped organization with no talent pipeline, turning today's efficiency gain into tomorrow's leadership crisis.
Source: BBC News · Published April 3, 2026
Host: Christian Fraser
Guest: Bernard Marr (bernardmarr.com)

This is an AI-generated summary. The source video may include demos, visuals and additional context.


In Brief

BBC AI Decoded poses the question that most managers are quietly dreading: if AI could handle everything your team does, who would you actually keep, and why? Presenter Christian Fraser is joined by Bernard Marr, AI and business author and Forbes columnist; Ella Hafermalz, associate professor at Vrije Universiteit Amsterdam; and Priya Lakhani, CEO of Century Tech and co-host of the programme. Together they explore how AI is reshaping work right now, why Klarna's chatbot experiment backfired, and what it means for the next generation entering the workforce. They also hear from Larry Fink, chairman and CEO of BlackRock, on why the post-war push to send everyone to university may have gone too far.

Three Types of AI Reshaping Work

Bernard Marr opens with a framework that cuts through the noise. AI is "almost like having a genie on your shoulder": a superpower that lets individuals do work that previously required a whole team. But he identifies three distinct waves, not one.

Generative AI (the chatbots we already know) handles data analysis, drafting, and research. A financial advisor can offload number-crunching to a model and spend more time building client relationships. Agentic AI goes further: it operates software, fills out forms, browses the web, and runs repetitive workflows autonomously. IBM, for example, has automated many of its routine HR tasks, freeing staff to focus on organizational strategy rather than answering the same pension-contribution questions on repeat. Then there is physical AI: humanoid robots that can be trained simply by watching videos of humans performing tasks, and are now starting to understand the physical world well enough to act in it.

The through-line across all three is that AI is not replacing jobs wholesale. It is hollowing them out. It takes the parts most people find tedious and handles them, leaving the strategic and creative core behind. Whether that feels like liberation or irrelevance depends entirely on how organizations respond.

The Shadow AI Problem

While executives debate AI strategy in boardrooms, their employees have already made a decision. Ella Hafermalz has spent the last two years studying what actually happens when workers start using AI tools, and her findings are striking.

People adopted these tools quietly, without approval, starting at home and gradually bringing them into their work: brainstorming, search, structuring information, polishing text. The motivating factor was telling: employees were "so happy to not have to bother their colleagues with questions." Why track down a manager or a business partner who knows a specific framework when ChatGPT will answer in seconds?

The convenience is real. But Hafermalz argues the cost is invisible. The organic, social learning that happens between colleagues, the informal knowledge transfers, the relationships built through those "silly questions": all of it is quietly disappearing. Organizations exist to share knowledge and check the quality of each other's work. When everyone stops asking and starts querying chatbots instead, that function erodes in ways managers often cannot see until the damage is done.

Buying Copilot Licenses Is Not an AI Strategy

Priya Lakhani offers a sharp-eyed assessment of where most companies actually stand. CEOs tell her they're "using AI," and what they mean is that they've bought Copilot licenses for all their staff and people are finishing off their emails a little faster. Meanwhile, an MIT report found that 95% of organizations are not getting a return on investment from AI, and a McKinsey report put the share of companies actually achieving ROI at just 1%.

The gap between deployment and transformation is enormous. Real AI leverage looks like the audience question read out near the end of the programme: a listener who runs operations across two organizations explains that AI has allowed a small team to produce the kind of work that previously required external agencies and consultants — bringing it in-house entirely. That is operating leverage. Finishing emails faster is not.

The Klarna Lesson

The most concrete cautionary tale in the episode comes from Klarna. The Swedish fintech replaced roughly 700 customer service jobs with an AI chatbot powered by OpenAI. The cost savings were real. But by December, the company had reversed course, announcing a major recruitment drive. Two problems had emerged: the quality of work delivered by the AI was lower, and customers wanted a human in the loop. They wanted empathy. They wanted someone who understood context and carried judgment.

Hafermalz connects this to a broader point: AI hype drives decisions that companies later regret. AI can look like it's producing good-quality output (code, writing, responses), and it takes a trained eye to spot where it falls short. The Klarna story is a reminder that when human judgment and empathy are the actual product, cutting the humans is not a cost saving. It is cutting the product.

The Diamond-Shaped Danger

Bernard Marr raises a concern that is easy to miss in the efficiency conversation. If companies keep cutting entry-level roles because AI can handle junior tasks, they end up with what Hafermalz calls a "diamond-shaped" organization: a few people at the top, a heavy middle of experienced workers, and almost no one at the bottom.

The problem is the funnel. Entry-level positions are not just cheap labour. They are how organizations train the next generation of senior staff. The accountant learning spreadsheets and P&L accounts today is building the foundation for the strategic finance role five years from now. Cut that entry point and you cut off your future talent pipeline. You also lose the informal interactions that help newcomers learn how things actually work: the mentors, the "silly questions," the gradual accumulation of institutional knowledge that cannot be put in a document.

Marr's framing is direct: "If I cut those roles, I also cut off my future employees."

Rethinking Education

Larry Fink's clip cuts against the grain of decades of educational consensus. The post-war push to send everyone to university, he argues, went too far — and the result is that many people who would have thrived as skilled tradespeople ended up in banking, media, or law instead. AI is now accelerating the need to rebalance.

Priya Lakhani builds on this with a three-layer model for education in the AI era. Foundational knowledge still matters: you cannot develop judgment by Googling everything. On top of that sits applied knowledge: how things actually work in the real world, solving real problems. And the third layer is what she calls learning agility: the ability to keep learning, to adapt to conditions that don't yet exist, so that people leaving formal education don't arrive at the workplace expecting a conveyor belt that no longer runs.

Ella Hafermalz, who teaches a course at Vrije Universiteit Amsterdam specifically asking "Who am I in the age of AI?", frames it similarly. The point is not to ban AI, since it's already part of students' lives. The point is to help them think clearly about what they bring to the table that AI cannot: judgment, relationships, perspective, and the ability to ask the right question in the first place.

What Work Actually Means

The conversation ends without a neat resolution, which feels honest. AI does not replace humans, at least not in the blunt, mass-unemployment way the headlines suggest. What it does is reveal what is irreducibly human about the work we do. What remains after AI handles the tedious parts is judgment, empathy, strategic thinking, and the ability to build real relationships with real people. That is not the leftovers. That is the point.

The risk is that organizations focus so hard on cutting costs today that they fail to notice they are dismantling the conditions for those human capabilities to exist at all. No talent pipeline. No mentors. No organic learning. No social ties. A highly efficient organization, for a while, until the knowledge and culture that made it worth working for quietly disappear.

Glossary

Agentic AI: AI that can take actions, such as operating software, filling out forms, and browsing the web, rather than just answering questions.
Shadow AI: employees using AI tools at work without official approval or manager awareness.
Diamond-shaped organization: an org structure with few entry-level roles and a heavy experienced middle, creating a gap in the talent pipeline.
Operating leverage: using AI to let a small team do work that previously required external agencies or consultants.
