When an AI Algorithm Decides You're Well Enough

Key insights
- The algorithm was accurate only 20-25% of the time, yet employees were penalized for overriding it. When companies measure compliance with AI rather than patient outcomes, the AI becomes the decision-maker regardless of its accuracy.
- The compliance margin tightened from 3% to 1% over time, showing the system optimized for cost reduction, not better patient care.
- A class action lawsuit claims 90% of appealed nH Predict denials were overturned, but only 0.2% of patients ever appeal. The algorithm's real power is not accuracy: it is that most patients do not know they can fight back.
In Brief
In a new episode of BBC's AI Confidential, mathematician Hannah Fry interviews Amber, an occupational therapist with 20 years of clinical experience. Amber describes how she was hired to input patient data into an AI algorithm called Predict, which generated recommended discharge dates for hospital patients. According to Amber, the algorithm was right only about 20-25% of the time, typically predicting three to four days fewer than patients actually needed. She was eventually let go for not meeting the algorithm's targets.
What the Predict algorithm does
Amber worked as a care coordinator (a person who manages patient transitions between hospital and home or rehab). When she received a new case, she was given doctor's notes, admission records, and therapy evaluations. She would enter all of that information into a program called the Predict, which then generated a recommended discharge date.
The company behind the algorithm marketed it as learning from 6 million patients' experiences. The idea was that with enough historical data, the system could estimate when a patient would no longer need specialist nursing care. In theory, discharge prediction is not unreasonable. Patients generally recover better at home. But that only works if they are actually safe to go home.
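Nothing in the segment describes the algorithm's internals, so as a purely hypothetical sketch of the general idea, here is the simplest possible version of "learn discharge timing from historical cases": a nearest-neighbors average over made-up patient features. This illustrates the concept only; it is not nH Predict's actual method.

```python
# Purely illustrative: predict days of care needed by averaging the most
# similar historical cases. Features, data, and method are hypothetical;
# nothing public describes how nH Predict actually works.

# Historical cases: (age, mobility_score, num_diagnoses) -> days of care needed
history = [
    ((82, 2, 4), 21),
    ((75, 3, 2), 14),
    ((68, 4, 1), 9),
    ((90, 1, 5), 28),
    ((79, 3, 3), 17),
]

def predict_days(patient, k=3):
    """Average the stays of the k most similar historical patients."""
    def distance(features):
        return sum((a - b) ** 2 for a, b in zip(features, patient))
    nearest = sorted(history, key=lambda case: distance(case[0]))[:k]
    return sum(days for _, days in nearest) / k

print(predict_days((80, 2, 4)))  # ~17.3 days for this hypothetical patient
```

Even a toy version makes the core limitation visible: the prediction is an average over past patients, and an average says nothing about whether this particular patient is safe to go home.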
Fry acknowledges this directly: there is nothing wrong with estimating discharge dates in theory. The problem, as Amber's story shows, is what happens when the estimate becomes a mandate.
A 75-80% failure rate
When Fry asks how often the Predict algorithm got it right, Amber does not hesitate. "Probably 20% of the time," she says. "20, 25%."
That means the algorithm was wrong for roughly three out of four patients. And the errors consistently went in one direction. The system did not randomly over- or underestimate. It almost always predicted three to four fewer days than patients needed. This is a critical detail. A system that randomly missed in both directions might be imprecise but neutral. A system that consistently cuts recovery time short has a built-in bias toward faster discharge.
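The difference is easy to see numerically. A minimal simulation (hypothetical numbers chosen to match Amber's description, not data from the segment) contrasts a noisy-but-neutral predictor with a consistently short one:

```python
import random

random.seed(0)

def error_profile(bias_days, noise_days, n=10_000):
    """Simulate prediction errors (predicted minus actual days needed).

    bias_days < 0 means the model systematically under-predicts;
    noise_days is random spread around that bias. Both are hypothetical.
    """
    errors = [bias_days + random.uniform(-noise_days, noise_days)
              for _ in range(n)]
    mean_signed = sum(errors) / n               # direction of the miss
    mean_abs = sum(abs(e) for e in errors) / n  # typical size of the miss
    return mean_signed, mean_abs

# Imprecise but neutral: misses both ways, signed error averages near zero.
print(error_profile(bias_days=0.0, noise_days=3.5))   # ~(0.0, 1.75)

# Amber's description: almost always 3-4 days short.
print(error_profile(bias_days=-3.5, noise_days=0.5))  # ~(-3.5, 3.5)
```

Both predictors are "wrong" most of the time, but only the second one reliably pushes every discharge earlier, which is exactly the property a cost-cutting system would select for.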
Fry pushes on the obvious next question: with all of Amber's experience, could she use her own judgment to override the algorithm when she saw it was wrong?
"Wouldn't that be great?" Amber replies, laughing. The answer was no.
The tightening compliance window
The company set a compliance target: Amber's actual discharge dates had to stay within 3% of what the algorithm predicted. Over time, that margin was tightened to just 1%.
This is where the system shifts from being a tool to being a boss. A 3% margin already left little room for clinical judgment. At 1%, the algorithm's output was effectively the final decision. If a care coordinator consistently discharged patients later than the Predict recommended, they were told they were "costing the company too much money."
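The segment does not define exactly how the compliance percentage was computed. One plausible reading, shown as a hypothetical sketch below, is that a coordinator's total actual days of care were compared against the algorithm's total predicted days across their caseload:

```python
def compliance_deviation(predicted_days, actual_days):
    """Percent by which actual stays exceed predicted stays, in aggregate.

    Hypothetical: the segment never specifies the formula. This assumes
    total actual days are compared to total predicted days per caseload.
    """
    total_predicted = sum(predicted_days)
    total_actual = sum(actual_days)
    return 100 * (total_actual - total_predicted) / total_predicted

# Hypothetical caseload where each patient needs the extra 3-4 days
# Amber describes on top of a ~10-14 day prediction.
predicted = [10, 12, 9, 14, 10]
actual    = [13, 16, 12, 18, 14]

deviation = compliance_deviation(predicted, actual)
print(f"{deviation:.1f}% over prediction")                  # 32.7%
for margin in (3, 1):
    print(f"within {margin}%? {abs(deviation) <= margin}")  # False, False
```

Under any formula in this family, a coordinator who honors the three-to-four-day gap the algorithm consistently misses cannot come anywhere near a 3% margin, let alone 1%. The metric is only satisfiable by discharging on the algorithm's schedule.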
The consequence was real. "This patient who is very sick and can't get out of bed, guess what? 10 days you're out of here. That's not okay," Amber says. She never met the compliance metric. That was part of why she was let go. "It was all about the dollar," she says.
The human cost of being the messenger
Perhaps the most striking part of Amber's account is not about the algorithm itself but about what it did to the people caught between it and the patients. Amber says she always told families she was "just the messenger" and did not make the decisions. Families screamed at her anyway.
"I wanted to just say, 'No, I 100% agree with you. I don't think that your mother should be discharged right now,'" she tells Fry. But she was not allowed to.
This is a pattern worth recognizing. When companies use algorithms to make unpopular decisions, the algorithm itself is invisible to the people affected. They see a human face, and they direct their anger at that face. The person delivering the news absorbs the emotional impact of a decision they did not make and cannot change. The system insulates decision-makers from consequences while exposing front-line workers to all of them.
The bigger picture: what the video does not name
The BBC segment never names the company behind the Predict algorithm. Amber refers to "a program" and "the Predict." But external reporting fills in the gaps.
Investigative reporting by STAT News (a Pulitzer-finalist series called "Denied by AI") identified the algorithm as nH Predict, built by naviHealth, now owned by Optum, a subsidiary of UnitedHealth Group. According to a class action lawsuit cited in the STAT News reporting, 90% of appealed nH Predict denials were eventually overturned. That number sounds like a devastating indictment of the algorithm's accuracy, but there is a crucial catch: only about 0.2% of patients ever appeal.
That statistic is the key to understanding how the system works in practice. The algorithm does not need to be right. It needs to be difficult to challenge. Most patients leaving a hospital or rehab facility are elderly, sick, and overwhelmed. They are not in a position to navigate an appeals process. The algorithm's real power is not its predictions. It is the friction it creates between a patient and the care they need.
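Putting the lawsuit's two figures together makes the point concrete (taking the reported numbers at face value; the error rate among unappealed denials is unknown):

```python
# Figures from the class action lawsuit as cited in STAT News reporting.
appeal_rate = 0.002    # ~0.2% of patients appeal a denial
overturn_rate = 0.90   # ~90% of appealed denials are overturned

corrected = appeal_rate * overturn_rate
print(f"Denials ever corrected on appeal: {corrected:.2%}")      # 0.18%

never_contested = 1 - appeal_rate
print(f"Denials never contested at all: {never_contested:.2%}")  # 99.80%
```

If unappealed denials are flawed at anything like the rate of appealed ones, roughly 998 out of every 1,000 denials stand simply because no one challenged them.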
How to interpret these claims
A tool or a decision-maker?
Companies that deploy algorithms like Predict often describe them as "decision support tools" that inform human judgment. Amber's account directly contradicts that framing. When compliance is enforced at 1% and employees are fired for deviating, the algorithm is not supporting a human decision. It is making the decision and using a human to communicate it.
The direction of optimization
The compliance window tightened from 3% to 1% over time. Notice what the company chose to optimize. It did not invest in making the algorithm more accurate. It invested in making employees follow the algorithm more closely. When your improvement strategy is "obey the machine harder" rather than "make the machine better," you have revealed which outcome you actually care about.
What the segment leaves out
This is a three-minute clip from a longer documentary. We hear only Amber's account. We do not hear the company's response, see the algorithm's internal validation data, or know how typical Amber's experience was. Her testimony is specific and detailed, but viewers should treat it as one perspective within a larger story. The STAT News investigation provides additional evidence and context.
Glossary
| Term | Definition |
|---|---|
| Discharge prediction | Using historical patient data to estimate when someone is ready to leave a hospital or rehab facility. |
| Post-acute care | Treatment a patient receives after leaving the hospital, often at a rehab facility or through home health services. |
| Care coordinator | A professional who manages patient transitions between hospital and home or rehabilitation. |
| Algorithmic decision-making | When a computer program makes or strongly shapes decisions that affect people's lives. |
| Compliance metric | A target number employees must hit, in this case staying within a percentage of the algorithm's predicted discharge date. |
| nH Predict | The name of the discharge prediction algorithm built by naviHealth, as identified by external reporting. Not named in the BBC segment. |
Sources and resources
Want to go deeper? Watch the full video on YouTube.