What You Know That AI Doesn't

Key insights
- 71% of Americans fear AI will destroy jobs, but the real risk is failing to apply what only humans can bring: judgment, context, and the ability to read between the lines.
- In all three stories, AI confidently optimized for the wrong thing. A 95% predicted deal close meant nothing when the client was internally restructuring. The skill that matters is knowing when to question the machine.
- Priya's case shows that optimizing for engagement can actively hurt a business. AI attracted bargain hunters to a premium brand. More followers is not the same as the right followers.
This is an AI-generated summary. The source video may include demos, visuals, and additional context.
In Brief
71% of Americans believe AI will cause massive job losses, a statistic cited by Priyanka Vergadia, a technologist who brings AI products to market for large tech companies. Speaking at TEDNext 2025, she argues this fear misses the point: AI excels at identifying patterns in data, but humans excel at understanding what those patterns actually mean. Through three real stories from her work, Vergadia shows that the professionals who thrive alongside AI are the ones who know when to question what the data says.
Story 1: The product manager who called the customers
Sarah was a product manager with an AI-powered analytics dashboard (a screen of data and charts showing how users behave in her product). The dashboard told her clearly: 80% of users used only the basic features, and 20% used the advanced ones.
The data was correct. But Sarah didn't stop there. She picked up the phone and called her top 20 customers to ask why. The answer had nothing to do with lack of interest. Users actually wanted the advanced features. The features were simply buried in menus, and the documentation was unclear.
Sarah's team rebuilt the experience, made the features easier to find, and a few months later, feature adoption (the share of users who actually use a specific part of the product) shot up sharply.
As Vergadia puts it: "AI saw the symptom. Sarah diagnosed the disease."
The lesson: When AI recommends something or shows a pattern, ask why. Don't treat the output as the final word.
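For readers who want to see Sarah's metric concretely, feature adoption is a simple ratio. Here is a minimal Python sketch using invented event data (the names and numbers are illustrative, not from Sarah's actual product):

```python
# Hypothetical usage log: one row per (user, feature) interaction.
# All names and numbers are invented for illustration.
events = [
    ("user_1", "basic_search"),
    ("user_2", "basic_search"),
    ("user_3", "basic_search"),
    ("user_3", "export_report"),   # an "advanced" feature
    ("user_4", "export_report"),
]

def feature_adoption(events, feature, total_users):
    """Share of all users who used `feature` at least once."""
    adopters = {user for user, used_feature in events if used_feature == feature}
    return len(adopters) / total_users

print(f"export_report adoption: {feature_adoption(events, 'export_report', 4):.0%}")
# -> export_report adoption: 50%
# Two of four users touched the advanced feature at least once.
```

The number is easy to compute; what it can't tell you, as Sarah's phone calls showed, is whether the other users didn't want the feature or simply couldn't find it.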
Story 2: The sales lead who read the room
Marcus was using AI tools to make his sales team more efficient. The system analyzed emails and engagement to predict which deals would close. One deal showed a 95% probability of closing: the data indicated positive sentiment (the tone of messages seemed enthusiastic) and strong engagement metrics (high email activity, meeting attendance, and responses).
But Marcus noticed something the AI couldn't: different people were showing up to each meeting. The client's emails had become vague and full of corporate-sounding non-answers. Marcus dug deeper and found the client was going through an internal restructuring (a major reorganization of teams and decision-makers). Three separate teams each thought they owned the decision to buy. No one was actually moving forward.
The lesson: Read the room, not just the dashboard. Subtle facial reactions, changing decision-makers, the vague corporate reply to a simple question: our emotional radar picks these up. AI doesn't.
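The talk doesn't describe the tool's internals, but a predictor like this typically blends measurable signals into a single score. A toy sketch (the weights, field names, and formula are assumptions for illustration) shows both what such a model sees and what it can't:

```python
# Toy sketch of a signal-blending deal score. Weights, field names,
# and the scoring formula are assumed for illustration; the actual
# tool Marcus used is not described in the talk.
def deal_score(signals: dict) -> float:
    """Blend measurable engagement signals into a 0-1 'close probability'."""
    weights = {
        "email_sentiment": 0.4,    # 0-1, tone of recent messages
        "reply_rate": 0.3,         # share of emails answered
        "meeting_attendance": 0.3, # share of meetings attended
    }
    return sum(weights[k] * signals[k] for k in weights)

marcus_deal = {
    "email_sentiment": 0.95,
    "reply_rate": 0.95,
    "meeting_attendance": 0.95,
}
print(f"Predicted close probability: {deal_score(marcus_deal):.0%}")
# -> Predicted close probability: 95%
# Note what never enters the formula: WHO attends each meeting and
# whether the replies actually answer the questions asked.
```

Every input in this sketch was genuinely high for Marcus's deal. The collapse was happening in dimensions the model had no column for.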
Story 3: The social media strategist who built the wrong audience
Priya worked with brands to grow their revenue through social media. Her AI tool told her to post fashion-hack videos: quick tips that tend to go viral. She did it, and the results looked excellent. Follower counts climbed. Engagement was up.
But when the brand's team looked at sales, none of that growth was turning into revenue. The videos attracted bargain hunters. The brand sold ethically made jackets at $200 each. The wrong audience had found them.
Priya stopped following the AI's content recommendations. Instead, she started posting about the sustainable production process and the stories of the artisans who made the clothes. Sales began to rise.
AI was optimizing for followers and engagement. Priya optimized for building a community.
The lesson: Always ask what the story is behind the data. Engagement numbers are easy to optimize. Whether you're reaching the right people is a human question.
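To make that gap concrete, here is a small hypothetical comparison (the numbers are invented, not the brand's data) of engagement growth versus conversion:

```python
# Hypothetical campaign data; the numbers are invented to illustrate
# how engagement and revenue can point in opposite directions.
viral_hacks = {"new_followers": 50_000, "likes": 120_000, "purchases": 12}
artisan_story = {"new_followers": 3_000, "likes": 9_000, "purchases": 180}

def conversion_rate(campaign):
    """Purchases per new follower: did the audience actually buy?"""
    return campaign["purchases"] / campaign["new_followers"]

for name, c in [("viral hacks", viral_hacks), ("artisan stories", artisan_story)]:
    print(f"{name}: {c['new_followers']:,} followers, "
          f"conversion {conversion_rate(c):.2%}")
# viral hacks: 50,000 followers, conversion 0.02%
# artisan stories: 3,000 followers, conversion 6.00%
```

An AI optimizing for follower count would keep recommending the first campaign; a human asking "who are these followers?" would choose the second.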
The bigger picture
All three stories share the same structure. AI performs well on what it can measure: clicks, responses, activity, adoption rates. It misses what those numbers mean in a real human context — a customer who can't find a feature, a deal that's quietly collapsing, an audience that will never buy.
Vergadia's point is not that AI is unreliable. It's that AI output without human interpretation is incomplete. The anxiety about job losses is real, but the professionals most at risk aren't those competing with AI. They're the ones who stop asking "why?" and just follow the machine.
Glossary
| Term | Definition |
|---|---|
| Analytics dashboard | A screen that displays charts and numbers to help you understand how something is performing, such as how many users open a product feature. |
| Sentiment | The emotional tone of a piece of text: positive, negative, or neutral. AI can analyze large amounts of text to detect this automatically. |
| Feature adoption | The share of users who actually use a specific function in a product, as opposed to ignoring it. |
| Engagement metrics | Numbers that measure how much people interact with content — likes, comments, shares, email replies, and similar activity. |
Sources and resources
Want to go deeper? Watch the full video on YouTube.