
Anthropic's Push to Bring Claude to Wall Street

March 4, 2026 · 7 min read · 1,388 words
Tags: AI, Finance, Anthropic, MCP, Video Summary
Anthropic video on Claude for Financial Services featuring Alexander Bricken and Nick Lin
Image: Screenshot from YouTube.

Key insights

  • NBIM's 9,000 portfolio companies are queried daily through custom MCP integrations built by the fund's own team
  • Anthropic frames Claude's coding ability as the key advantage for finance: code skills translate directly to Excel models and data analysis
  • Anthropic is investing in finance-specific pre-training and post-training to close domain knowledge gaps
Source: YouTube
Published October 27, 2025
Anthropic
Hosts: Alexander Bricken, Nick Lin

This article is a summary of How Claude is transforming financial services. Watch the video →



In Brief

Anthropic has built a dedicated financial services product around Claude, and two of its team leads lay out what that means in practice. Alexander Bricken, who leads applied AI engineering for financial services, and Nick Lin, product lead for the same division, walk through how major institutions like NBIM (the Norwegian Sovereign Wealth Fund) and BCI (British Columbia Investment Management) are using Claude in production. The pitch: Claude's strength in coding translates directly to financial analysis, Excel modeling, and creating investor-ready documents.

  • 9,000: NBIM portfolio companies queried via Claude
  • 3 layers: models, agentic capabilities, platform
  • 6 months: for S&P and FactSet to ship MCP servers

From curiosity to production

Lin describes a clear shift in how financial institutions approach AI. A year and a half ago, companies would set up a generic chat interface with several AI models and let employees experiment. Today, the focus has moved to building production systems (1:22).

The most striking example is NBIM, which manages Norway's sovereign wealth fund. The fund oversees roughly 9,000 portfolio companies. NBIM's team built their own integrations using MCP (Model Context Protocol, an open standard that lets AI models connect to external data sources and tools), so that portfolio managers now query Claude daily for insights into their holdings (1:47).
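Integrations like NBIM's work by advertising tools to the model as structured descriptors. As an illustrative sketch (the tool name, fields, and handler below are hypothetical, not NBIM's actual integration; only the name/description/inputSchema shape follows the MCP specification), a tool descriptor and its dispatch logic look roughly like this:

```python
import json

# Hypothetical MCP-style tool descriptor; the name and fields are invented
# for illustration, but the name/description/inputSchema shape follows
# the MCP specification.
portfolio_tool = {
    "name": "get_portfolio_company",
    "description": "Return key metrics for one portfolio company by ticker.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "ticker": {"type": "string", "description": "Exchange ticker symbol"},
        },
        "required": ["ticker"],
    },
}

def handle_tool_call(name: str, arguments: dict) -> dict:
    """Dispatch a tool call the way an MCP server's call-tool handler would."""
    if name == "get_portfolio_company":
        # A real server would query the fund's internal database here.
        return {"ticker": arguments["ticker"], "holding_pct": 1.2}
    raise ValueError(f"unknown tool: {name}")

print(json.dumps(portfolio_tool, indent=2))
print(handle_tool_call("get_portfolio_company", {"ticker": "AAPL"}))
```

The model never sees the implementation, only the descriptor, which is why clear names and descriptions matter so much in practice.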

Lin frames this as more than just working faster. Analysts spend less time on manual data gathering and more time on what actually matters: understanding business models, building relationships, and meeting clients (2:17).


Why coding skill matters for finance

Bricken makes an argument that might surprise non-technical readers: Claude's ability to write code is the same skill that makes it effective at financial analysis (4:39).

The logic: financial analysts spend enormous time getting spreadsheets and presentations pixel-perfect. An AI that can write structured logic (code) can apply the same precision to building Excel models, PowerPoint decks, and data analysis. Anthropic's file creation feature works by running Python code inside a virtual machine (an isolated computer environment), which lets Claude generate Excel documents and financial models directly (6:16).
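To make the idea concrete, here is a stdlib-only sketch of code-generated spreadsheet output (Anthropic's actual feature runs richer Python inside the virtual machine and produces real Excel files; this simplified version writes CSV instead):

```python
import csv
import io

# Stdlib-only illustration of a code-generated financial model; Anthropic's
# file-creation feature works on the same principle but produces .xlsx files
# from Python running in an isolated VM.
def revenue_model(base_revenue: float, growth: float, years: int) -> list[list]:
    """Project revenue forward at a constant growth rate."""
    rows = [["Year", "Revenue"]]
    revenue = base_revenue
    for year in range(1, years + 1):
        revenue *= 1 + growth
        rows.append([year, round(revenue, 2)])
    return rows

# Write the projection as a spreadsheet-compatible CSV.
buffer = io.StringIO()
csv.writer(buffer).writerows(revenue_model(100.0, 0.10, 3))
print(buffer.getvalue())
```

The point is that the precision lives in the code, not in manual cell editing: change one parameter and the whole table regenerates consistently.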

Bricken notes this extends beyond spreadsheets. Give the model a tool, and it understands what that tool does from its name and description alone. Combined with MCP, this means Claude can chain together multiple systems: pulling data from Snowflake, matching it against a Salesforce record, running calculations, and producing a formatted output (8:44).
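A hypothetical sketch of that chaining pattern, with stub functions standing in for the real Snowflake and Salesforce connectors (none of the function names or data below come from Anthropic's integrations):

```python
# Stub standing in for a Snowflake warehouse query.
def fetch_snowflake_revenue(account_id: str) -> float:
    return {"acct-1": 4_200_000.0}.get(account_id, 0.0)

# Stub standing in for a Salesforce record lookup.
def fetch_salesforce_account(name: str) -> dict:
    return {"Acme Corp": {"id": "acct-1", "segment": "Enterprise"}}[name]

def account_summary(name: str) -> str:
    account = fetch_salesforce_account(name)          # step 1: CRM lookup
    revenue = fetch_snowflake_revenue(account["id"])  # step 2: warehouse query
    note = "above" if revenue > 1_000_000 else "below"  # step 3: calculation
    # step 4: formatted output
    return f"{name} ({account['segment']}): ${revenue:,.0f} revenue, {note} $1M threshold"

print(account_summary("Acme Corp"))
```

In the agentic version, the model itself decides the order of these calls based on the tool descriptions, rather than following a hard-coded pipeline.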


Retrieve, analyze, create

Lin outlines a three-part framework for how Claude handles financial work (6:50):

Retrieve. Research agents (AI systems that autonomously search for and gather information) can read data roughly 5,000 times faster than humans. The goal is connecting Claude to the specific data sources that analysts already use: S&P Global, FactSet, internal databases (7:00).

Analyze. Financial models are not just pretty spreadsheets. They capture an analyst's judgment about a company's future. Claude needs to understand core finance concepts well enough to manipulate these models through code or spreadsheet formulas (7:43).

Create. The final output has to be boardroom-ready. Spreadsheets, PowerPoint decks, and Word documents need to meet the standard that managing directors and clients expect (8:10).
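The "analyze" step leans on exactly this kind of core finance logic. A textbook discounted cash flow calculation (a generic illustration of the concept, not Anthropic's implementation) shows the sort of structured reasoning involved:

```python
# Textbook discounted-cash-flow valuation: the present value of future
# cash flows, discounted back at a required rate of return.
def dcf_value(cash_flows: list[float], discount_rate: float) -> float:
    """Present value of a series of future annual cash flows."""
    return sum(
        cf / (1 + discount_rate) ** year
        for year, cf in enumerate(cash_flows, start=1)
    )

# $100 per year for three years, discounted at 10%.
print(round(dcf_value([100.0, 100.0, 100.0], 0.10), 2))  # 248.69
```

A model that can express this as code or as spreadsheet formulas can also modify the assumptions behind it, which is what manipulating an analyst's model actually requires.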


BCI's live dashboards

The most concrete customer example comes from BCI. Analysts traditionally perform comps analysis (comparing financial and operational metrics across companies to assess relative value) in a static Excel sheet, refreshed manually every week or quarter (12:02).

BCI replaced this with a live Artifact connected directly to S&P and FactSet data. Instead of rebuilding the spreadsheet manually, analysts prompt Claude to update the dashboard. These artifacts (interactive outputs Claude creates, like dashboards and documents) are shared with managing directors, who interact with the dashboards directly (12:27).

Lin frames this not as acceleration, but as transformation of the work itself (12:46).


Memory and continuous improvement

Bricken raises a feature that sounds mundane but matters enormously in practice: memory. Claude needs to remember preferences across sessions and tools (12:53).

If an analyst corrects a formula or specifies which data source to use for EBITDA (a company's earnings before interest, taxes, depreciation, and amortization) calculations, Claude should store that preference. Lin compares it to a good intern who learns your habits over time (14:09).

This extends across surfaces. Claude operates in Claude.ai, in a browser extension, in Excel. The memory system needs to carry context across all of these touchpoints so the model continually improves through its interactions (13:29).


What comes next

Lin outlines three priorities for the product roadmap (14:30):

Finance-specific training. Anthropic is investing in both pre-training (initial training on large datasets) and post-training (refinement for specific tasks) tuned specifically for finance (14:53).

Sub-vertical specialization. Private equity, hedge funds, insurance, and investment banking have different workflows. The product team wants to build components tailored to each (15:02).

Industry partnerships. S&P Global, FactSet, and PitchBook have already published working MCP servers within six months of the protocol's launch. Anthropic wants to expand these partnerships further (15:49).

Bricken adds that the most useful signal comes from enterprise customers who design evals (standardized tests measuring AI performance). These evaluations feed directly into Anthropic's training pipeline, creating a feedback loop between customer needs and model improvements (16:34).
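A minimal sketch of what such an eval harness could look like (the case wording, stub model, and scoring below are invented for illustration; real finance evals would call the actual model):

```python
# Stub standing in for a model call; a real eval would query the model API.
def model_stub(prompt: str) -> str:
    return {"What is the EBITDA margin for 50 EBITDA on 200 revenue?": "25%"}.get(prompt, "")

# Customer-designed cases: scripted prompts with expected answers.
eval_cases = [
    {"prompt": "What is the EBITDA margin for 50 EBITDA on 200 revenue?", "expected": "25%"},
    {"prompt": "Unseen question", "expected": "n/a"},
]

def run_eval(cases: list[dict], model) -> float:
    """Return the fraction of cases where the model matches the expected answer."""
    passed = sum(model(c["prompt"]) == c["expected"] for c in cases)
    return passed / len(cases)

print(run_eval(eval_cases, model_stub))  # 0.5: one of two cases passes
```

The feedback loop Bricken describes amounts to running suites like this before and after training changes and tracking the score.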


How to interpret these claims

Two things worth noting when evaluating Anthropic's pitch.

This is a marketing video

Both speakers are Anthropic employees presenting their own product. The customer examples (NBIM, BCI, Perella Weinberg) are real but curated. We hear about successes, not about failed deployments or limitations. Independent reporting on Claude's financial services performance is limited.

The domain gap is real

Lin acknowledges that Anthropic has very few people with investment banking experience (10:00). The company's solution, working closely with early customers to build evaluations, is sound in principle but depends on having enough diverse customers to cover the breadth of financial use cases. Finance-specific pre-training and post-training is still described as an upcoming investment, not a shipped capability.


Practical implications

For financial professionals

If your organization is evaluating AI tools, the retrieve-analyze-create framework offers a useful lens. The key question is not whether the AI can chat, but whether it can connect to your actual data sources, run real analysis, and produce documents that meet your standards.

For AI practitioners

The MCP adoption by major data providers (S&P, FactSet, PitchBook) within six months signals that the protocol is gaining traction in enterprise settings. Building MCP integrations may be increasingly expected by institutional clients.


Glossary

MCP (Model Context Protocol): An open standard that lets AI models connect to external data sources and tools. Think of it as a universal adapter between AI and the software you already use.

Comps analysis: Comparing financial metrics (revenue, profit margins, growth rates) across similar companies to assess whether they are fairly valued.

DCF (Discounted Cash Flow): A valuation method that estimates a company's worth by projecting future cash flows and adjusting them for the time value of money.

EBITDA: Earnings before interest, taxes, depreciation, and amortization. A measure of a company's operating profitability that strips out financing and accounting decisions.

Artifact: Claude's interactive output feature. Instead of just text responses, Claude can create live dashboards, code, spreadsheets, and documents that users can interact with.

Agentic AI: AI systems that go beyond answering questions. They can take actions, use tools, and complete multi-step tasks autonomously.

Pre-training: The initial phase where an AI model learns patterns from vast amounts of text data. This is where general knowledge and language ability come from.

Post-training: Refinement after pre-training, where the model is tuned for specific tasks or domains. Also called fine-tuning.

Evals (evaluations): Standardized tests that measure how well an AI performs on specific tasks. In finance, an eval might test whether the model builds a correct DCF model or retrieves accurate earnings data.

Virtual machine: An isolated computer environment running inside another computer. Claude uses virtual machines to safely execute Python code when creating Excel files and other documents.

Deep research: A Claude feature that performs thorough, multi-step research by searching multiple sources, synthesizing findings, and producing comprehensive reports.

Portfolio company: A company that an investment fund owns shares in. NBIM, managing Norway's sovereign wealth fund, holds stakes in roughly 9,000 companies worldwide.

Sources and resources