
LG Uplus and OpenAI Build AI That Handles Customer Calls

April 6, 2026 · 4 min read · 856 words
AI Agents · OpenAI · Chatbots · Generative AI

LG Uplus and OpenAI engineers collaborating on Agentic AICC. Image: Screenshot from YouTube.

Key insights

  • The shift from 'AI that answers' to 'AI that acts': the new system checks account status, retrieves policies, and completes tasks without forcing customers through rigid menus
  • OpenAI's Realtime API is speech-to-speech with no transcription step, so the AI picks up tone, pacing, and inflection for a more natural interaction
  • Traditional call center AI is a brittle decision tree: when customers go off-script, it breaks. LLMs can hold context, reason, and adapt across turns
  • This is not a telecom-specific solution. The same pattern applies to IT support, accounting, e-commerce, and healthcare: any industry handling customer interaction at scale
Source: OpenAI · Published April 6, 2026
Host: Daniel (Solutions Architect, OpenAI)

This is an AI-generated summary. The source video may include demos, visuals and additional context.

In Brief

LG Uplus, South Korea's second-largest telecom provider, has built a next-generation AI contact center (AICC) together with OpenAI. The system goes beyond answering questions. It understands what customers actually need, checks their accounts, retrieves policies, and resolves issues, all inside a single voice conversation. Daniel, a Solutions Architect at OpenAI, walks through what makes this different from traditional call center automation and why the approach has implications far beyond telecom.

From menus to action

Think about the last time you called a phone support line. You probably navigated a menu ("press 1 for billing, press 2 for technical support") and each answer locked you into a predetermined path. That is a decision tree, and it has been the foundation of automated customer service for decades.

The problem is that real conversations don't work that way. Customers jump between topics, interrupt themselves, and ask follow-up questions that weren't anticipated when the menu was designed. When that happens, the system breaks. You get transferred, or you get stuck.
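Structurally, a traditional IVR flow is just a tree of fixed prompts. A minimal sketch of that brittleness (the menu text and option labels here are illustrative, not LG Uplus's actual flow):

```python
# Minimal sketch of a rule-based IVR decision tree. Any input outside
# the predefined options dead-ends: the system has no way to recover,
# which is exactly the brittleness described above.

MENU_TREE = {
    "root": {
        "prompt": "Press 1 for billing, press 2 for technical support.",
        "options": {"1": "billing", "2": "tech"},
    },
    "billing": {
        "prompt": "Press 1 for your balance, press 2 for payment issues.",
        "options": {"1": "balance", "2": "payments"},
    },
    "tech": {
        "prompt": "Press 1 for outages, press 2 for device setup.",
        "options": {"1": "outages", "2": "setup"},
    },
}

def route(node: str, key: str) -> str:
    """Follow one keypress; any unanticipated input falls through to 'transfer'."""
    options = MENU_TREE.get(node, {}).get("options", {})
    return options.get(key, "transfer")  # off-script -> human agent
```

Every customer journey must have been enumerated in advance; the only escape hatch for anything else is a transfer.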

LG Uplus's system works differently. As Daniel explains at 1:42, traditional rule-based AI contact centers are "great when the world is predictable, but we know that it never is." The agentic approach replaces the decision tree with an AI that understands natural language, keeps context across the entire conversation, and can take real actions: retrieving the right policy, checking account state, or initiating the next step, without forcing the customer into a rigid flow.

This is what "agentic" means in practice. The AI doesn't just route you. It acts on your behalf.
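The article doesn't show LG Uplus's implementation, but the standard shape of this pattern is tool calling: the model decides which backend action to take, and a thin dispatch layer executes it. A sketch with hypothetical tool names and stubbed backends:

```python
# Illustrative sketch of the "agentic" pattern: instead of routing the
# caller through a menu, the model selects tools that act on their behalf.
# Tool names, arguments, and stub data are hypothetical, not LG Uplus's code.

def check_account_status(customer_id: str) -> dict:
    # Stand-in for a real billing-system lookup.
    return {"customer_id": customer_id, "status": "active", "balance_due": 0}

def retrieve_policy(topic: str) -> str:
    # Stand-in for a policy knowledge-base query.
    policies = {"roaming": "Roaming is billed per day after the first 24h."}
    return policies.get(topic, "No policy found.")

TOOLS = {
    "check_account_status": check_account_status,
    "retrieve_policy": retrieve_policy,
}

def execute_tool_call(name: str, arguments: dict):
    """Dispatch a model-requested action to the matching backend function."""
    if name not in TOOLS:
        raise ValueError(f"Unknown tool: {name}")
    return TOOLS[name](**arguments)
```

In production the model emits the `name` and `arguments`; the dispatch layer is what turns "AI that answers" into "AI that acts."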

The voice layer: no transcription in the middle

One choice that stands out is how LG Uplus handles voice: they use OpenAI's Realtime API instead of the conventional cascaded pipeline. Most voice AI systems convert your speech to text, run it through a language model, then convert the response back to speech. Every conversion step adds latency and loses information.

The Realtime API skips the transcription entirely. As Daniel puts it, it is "natively speech to speech," which means the model processes the audio directly. The AI can hear whether you sound frustrated, whether you're speaking hesitantly, whether you interrupted yourself mid-sentence. That context is part of what shapes its response.

The practical result is a conversation that feels less like talking to a phone menu and more like talking to a capable person. The AI handles interruptions naturally, responds without noticeable delay, and doesn't lose track of what you said three turns ago.

It's also a fully event-driven API (meaning it reacts to things as they happen rather than waiting for a full turn to complete), which lets it trigger actions mid-conversation. While you're still talking, the system can already be checking your account in the background.
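The event-driven idea can be sketched with a generic event bus: handlers fire as events arrive, so a background action can start before the caller finishes talking. The event names and payload fields below are illustrative, not the Realtime API's actual schema:

```python
# Sketch of an event-driven voice loop: handlers react to events as they
# happen rather than waiting for the caller's full turn to complete.
# Event names and payload fields are illustrative, not the real API schema.

from collections import defaultdict

class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def on(self, event_type: str, handler):
        """Register a handler for an event type."""
        self.handlers[event_type].append(handler)

    def emit(self, event_type: str, payload: dict):
        """Deliver an event to every registered handler, in order."""
        for handler in self.handlers[event_type]:
            handler(payload)

bus = EventBus()
background_actions = []

def on_partial_speech(payload):
    # The caller is still mid-sentence, but we can already kick off
    # an account lookup in the background.
    if "bill" in payload["text"]:
        background_actions.append(("lookup_account", payload["caller_id"]))

bus.on("speech.partial", on_partial_speech)
bus.emit("speech.partial", {"caller_id": "c-42", "text": "my bill looks wrong"})
```

By the time the caller stops speaking, the lookup result can already be in hand, which is where the "no noticeable delay" feel comes from.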

The key shift: from scripted routing to reasoning

What changed to make this possible? The honest answer is large language models (LLMs), AI systems trained on vast amounts of text that can understand context, follow multi-step reasoning, and respond to instructions they've never explicitly seen before.

Daniel describes what OpenAI observed working with the LG Uplus team: "a key shift from scripted routing to a system that can reason, adapt, and stay grounded with the right safeguards." The previous generation of AICC required engineers to write out every possible customer path in advance. The new system handles paths it was never explicitly programmed for, because it understands what the customer is trying to accomplish.

LG Uplus also built a Planning Agent on top of this, which uses checklist-based workflows to break complex service requests into ordered steps and execute them reliably. This is the same pattern showing up across enterprise AI: not a single model doing everything, but a structured system of agents each handling what they're good at.
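The checklist pattern the Planning Agent uses can be sketched simply: a request type maps to an ordered list of steps, each executed and checked off in turn, with execution halting if a prerequisite fails. The request type and step names here are hypothetical examples:

```python
# Sketch of a checklist-based planning agent: a complex service request
# becomes an ordered list of steps, executed and checked off one by one.
# The workflow and step names are hypothetical, not LG Uplus's actual flows.

WORKFLOWS = {
    "plan_change": [
        "verify_identity",
        "check_current_plan",
        "confirm_new_plan_eligibility",
        "apply_change",
        "send_confirmation",
    ],
}

def run_checklist(request_type: str, execute_step) -> list[tuple[str, bool]]:
    """Execute each step in order; stop at the first failure so later
    steps never run on an unmet prerequisite."""
    results = []
    for step in WORKFLOWS[request_type]:
        ok = execute_step(step)
        results.append((step, ok))
        if not ok:
            break
    return results
```

The point of the checklist is reliability: the model reasons about the request, but the ordered steps constrain execution so nothing is skipped or applied out of order.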

A blueprint, not a one-off

The LG Uplus case is interesting not just as a telecom story, but as a proof of concept for any organization that handles customer interaction at scale.

Daniel makes this explicit: "This isn't just a carrier-specific or industry-specific innovation. It's actually a blueprint for how next-generation contact centers are going to work globally." Every market is looking for the same things: faster resolution times, more natural voice experiences, and operations that can scale without linearly scaling headcount.

The same architecture fits an accounting firm handling client calls about invoices. An IT support desk fielding password reset requests alongside network outage escalations. An e-commerce company managing returns, order status, and product questions in a single conversation. The underlying problem is identical: a customer needs something done, not just answered.

What LG Uplus and OpenAI built is an early example of this shift moving from pilot to production. Daniel describes the next phase as "scaling what's working and making it even more operationally strong": deepening evaluation, monitoring, and structured experimentation as the system expands to more use cases and enterprise customers.

The technical pieces are in place. The question now is how fast the rest of the industry catches up.

Glossary

AICC (AI Contact Center): A customer service center where AI handles calls and service requests instead of human agents.
Agentic AICC: An AI contact center where the AI doesn't just answer questions but takes actions: checks accounts, changes plans, opens tickets.
Realtime API: OpenAI's interface for low-latency voice-to-voice AI conversation, with no text transcription in the middle.
Speech-to-speech: AI that processes audio directly without converting to text first, preserving tone, pacing, and inflection.
Decision tree: A rigid step-by-step system where each customer answer leads to a fixed next question; the basis of traditional phone menus.
Planning Agent: An AI that breaks a complex task into ordered steps and executes them using checklist-based workflows.
LLM (Large Language Model): An AI system trained on large amounts of text that can understand context, reason, and respond naturally; the technology powering the shift from scripted routing to adaptive conversation.
