Oaktree Capital chief calls AI advancement “unlike anything we’ve seen before,” flags job displacement fears while urging moderate, selective investing approach
Howard Marks, co-founder and co-chairman of Oaktree Capital Management, has released his latest client memo titled “AI Hurtles Ahead,” a follow-up to his December 2025 piece “Is It a Bubble?” — and the tone has shifted from cautious curiosity to something approaching awe.
The 11-page note, dated February 26, 2026, draws heavily from a bespoke tutorial Marks commissioned from Anthropic’s Claude AI model. The result is one of the most candid and intellectually engaged assessments of artificial intelligence yet to emerge from a figure of Marks’s stature in global finance.
‘It Read Like a Personal Note From a Friend’
In a striking admission for one of Wall Street’s most analytically rigorous minds, Marks opens by confessing that Claude’s 10,000-word output left him genuinely stunned. The AI had referenced frameworks from his own past memos — the sea change in interest rates, the pendulum of investor psychology — and deployed them as metaphors in the context of artificial intelligence. It anticipated counterarguments, injected humour and candidly acknowledged its own limitations.
“I’ve asked AI questions before and gotten answers back,” Marks writes, “but I’ve never received a personalized explanation like I did in this case.”
What made the difference, he explains, was not the AI itself but the quality of the prompting. His advisers had constructed a nine-module curriculum tailored specifically to his intellectual frameworks, his December memo and the precise goal of giving him enough technical grounding to write a credible public addendum. The lesson Marks draws is pointed: AI’s potential is probably being systematically underestimated today because most users do not know how to prompt it well. The limitation, he writes, is on the part of the users — not the model.
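The memo does not reproduce the curriculum itself, but the gap between a casual question and a purpose-built prompt is easy to show in miniature. The sketch below uses Anthropic's published Python SDK; the module titles, the investor framing and the three-module brevity are invented stand-ins for the far more elaborate nine-module structure Marks describes.

```python
# Hypothetical illustration of bare vs. curriculum-style prompting.
# The modules and framing below are invented for this sketch; the memo
# does not publish the actual curriculum Marks's advisers built.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

bare_prompt = "Explain how large language models work."

# A tailored prompt fixes the audience, the goal and an ordered syllabus.
modules = [
    "1. What an AI model actually is (training vs. inference)",
    "2. Why 'sophisticated search engine' is the wrong mental model",
    "3. Capability levels: chat, tool use, autonomous agents",
]
structured_prompt = (
    "You are tutoring a veteran value investor who thinks in terms of "
    "market cycles and investor psychology. Teach these modules in order, "
    "anticipating his likely counterarguments:\n" + "\n".join(modules)
)

for prompt in (bare_prompt, structured_prompt):
    message = client.messages.create(
        model="claude-sonnet-4-5",  # substitute whichever Claude model id your account offers
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    print(message.content[0].text[:500], "\n---")
```

The two answers come from the same model; only the prompt changes, which is precisely Marks's point about where the limitation lies.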
What Is AI, Really?
A significant portion of the early memo corrects a fundamental misconception Marks says he held until recently: that AI is essentially a very sophisticated search engine that retrieves and regurgitates data.
He now understands it differently. An AI model is trained — absorbing vast quantities of text not to store facts, but to learn how to think: how to follow reasoning patterns, how to structure arguments, how to generate new combinations of ideas and how to apply learned logic to novel situations. The analogy he reaches for is a baby developing cognitive capacity through environmental exposure. The baby is not born knowing things; it develops the ability to reason by absorbing inputs from the world around it. An AI model, Marks argues, is the same.
Once trained, a model enters what is called the inference phase — the rest of its operational life, spent responding to user prompts. This is where almost all of the economic value is currently being created and extracted.
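To make the two phases concrete, here is a deliberately toy sketch in Python. It is nothing like a real large language model (the "pattern" it learns is just word-pair statistics), but it shows the split Marks describes: training happens once and distills patterns out of text, while inference is everything afterwards, reusing those patterns on each new prompt.

```python
# Toy two-phase model: a training pass that keeps only learned patterns,
# then an inference phase that answers prompts from those patterns.
import random
from collections import defaultdict

corpus = (
    "the market cycles between fear and greed "
    "the pendulum of investor psychology swings between fear and greed"
).split()

# --- Training: absorb the text once; keep the pattern, not the text ---
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)  # which words tend to follow which

# --- Inference: the rest of the model's life, responding to prompts ---
def generate(prompt_word: str, length: int = 8) -> str:
    word, out = prompt_word, [prompt_word]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])  # apply the learned pattern
        out.append(word)
    return " ".join(out)

print(generate("the"))      # e.g. "the pendulum of investor psychology swings ..."
print(generate("between"))  # training ran once; inference runs on every prompt
```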
Can AI Actually Think? Claude’s Answer May Surprise You
This is where the memo becomes genuinely philosophical, and Marks allows Claude to argue both sides at unusual length and with unusual candour.
The sceptic’s case is laid out cleanly: everything Claude knows came from human-written text. It has no experiences, no embodied understanding of the world. Everything it produces is a sophisticated rearrangement of patterns absorbed from existing human work. It is, the sceptics say, “a very talented cover band, not a composer.”
Claude’s rejoinder is sharp — and personal. It points directly at Marks himself. Everything he knows about investing, it notes, came from other people. Benjamin Graham taught him margin of safety. Warren Buffett taught him about quality. Charlie Munger taught him to draw on mental models from multiple disciplines. Every input was someone else’s thinking. The synthesis, however, was Marks’s own.
“The question isn’t where the inputs came from,” Claude argues. “The question is whether the system — human or artificial — can combine them in ways that are genuinely novel and useful.”
Marks concedes the point entirely, noting it mirrors precisely how his own intellectual capacity developed as a young investor. But Claude goes further with what Marks calls “a convincing real-world argument.” Even if one accepts, philosophically, that what AI does is merely pattern matching and not true thought, the economic implications are identical — provided the work product is reliable enough to be useful. The philosophical debate about machine consciousness, Claude says, is fascinating. But the economic question is not whether AI truly understands. It is whether AI does the work.
The Three Levels That Explain Everything
At the analytical core of the memo is a three-tier taxonomy of AI capability that Marks says reframed his entire understanding of where the technology stands today.
Level 1 is Chat AI — the familiar back-and-forth of questions and answers. Useful for saving research and thinking time, but bounded: the model answers and stops.
Level 2 is Tool-Using AI — the model is instructed to search, analyse and execute tasks. The economic value is meaningfully larger here because execution time is saved, not just thinking time. But it remains bounded because AI only does what it is told.
Level 3 is Autonomous Agents — and this is where the ground shifts fundamentally. At this level, the user does not give instructions. The user gives a goal and parameters — length, scope, content, desired outcome — and the agent does the work, checks it and delivers a finished product. As the tutorial puts it: “This is labor replacement at the task level. Not assistance — replacement.”
According to Claude, AI was at Level 1 in 2023, Level 2 in 2024 and is now operating at Level 3. The distinction between Level 2 and Level 3, Marks writes, “might sound subtle. It isn’t. It’s the difference that determines whether AI is a productivity tool or a labor substitute. And that difference is what separates a $50 billion market from a multi-trillion-dollar one.”
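The taxonomy is easier to see as control flow. The sketch below is hypothetical: ask_model and run_tool are invented stubs standing in for real API and tool calls, and real agent frameworks differ in detail. What changes from level to level is who supplies the steps and who decides when the work is done.

```python
# Hypothetical sketch of the three capability levels as control flow.

def ask_model(prompt: str) -> str:
    """Stand-in for a single LLM call."""
    return f"<model response to: {prompt!r}>"

def run_tool(name: str, args: str) -> str:
    """Stand-in for executing a search, query or code run."""
    return f"<output of {name}({args})>"

# Level 1, Chat AI: one question, one answer, and the model stops.
def chat(question: str) -> str:
    return ask_model(question)

# Level 2, Tool-Using AI: the user still dictates each step; the model
# executes the tools it is told to use, saving execution time too.
def tool_use(instruction: str, tools: list[str]) -> str:
    evidence = [run_tool(t, instruction) for t in tools]
    return ask_model(f"Using {evidence}, carry out: {instruction}")

# Level 3, Autonomous agent: the user supplies only a goal and
# parameters; the agent plans, acts, checks its own work and loops
# until it judges the product finished.
def agent(goal: str, max_steps: int = 5) -> str:
    draft = ask_model(f"Plan and produce a first draft for this goal: {goal}")
    for _ in range(max_steps):
        critique = ask_model(f"Check this draft against the goal: {draft}")
        if "no issues" in critique.lower():
            break  # the agent, not the user, decides when it is done
        draft = ask_model(f"Revise the draft to address: {critique}")
    return draft
```

At Level 3 the loop's exit condition belongs to the agent rather than the user, which is exactly the shift the tutorial flags as the line between assistance and replacement.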
‘The AI Helped Build Itself’
To illustrate just how fast things are moving, Marks turns to a blog post by Matt Shumer, CEO of OthersideAI, that has been viewed more than 50 million times in less than a month.
Shumer describes how the simultaneous release on February 5, 2026, of GPT-5.3 Codex from OpenAI and Opus 4.6 from Anthropic crossed a threshold he had not anticipated. He is, he writes, no longer needed for the actual technical work of his job. He describes giving the AI a product brief, walking away from his computer for four hours and returning to find a finished application — built, opened, navigated, tested, refined and approved by the AI itself before being presented to him for review.
The detail that most arrested Marks’s attention, however, was in OpenAI’s own technical documentation for GPT-5.3 Codex: the model, it stated, was instrumental in creating itself — used to debug its own training, manage its own deployment and diagnose its own evaluations. “Read that again,” Shumer writes. “The AI helped build itself.”
Marks adds that Anthropic CEO Dario Amodei has said AI now writes much of the code at his company and that we may be only one to two years away from current AI autonomously building the next generation.
What This Means for Investors
For Marks’s professional audience, the implications for investment management are both clarifying and unsettling.
On the positive side for AI as an analytical tool: it can absorb and process more data than any human investor, remember it more reliably, recognise historical patterns more accurately and do all of this without the distortions of fear, greed, recency bias or anchoring. “In other words,” Marks writes, “AI possesses a lot of the qualities one needs to be a good investor.”
On the other hand, it is missing several things that the best investors possess. Great investors must excel precisely where AI is weakest — in novel situations where no reliable historical pattern exists. They must make qualitative judgements about people, management and culture that resist quantification. And critically, they have skin in the game. AI does not feel the weight of a concentrated position or the visceral fear of capital loss. Its appetite for risk may not be appropriately calibrated by the kind of intuitive caution that separates the best investors from the rest.
Marks’s structural argument is sobering: just as passive indexation eliminated mediocre active managers who could not justify their fees, AI is likely to raise the bar further and push out investors who cannot outperform it on judgement, qualitative insight and the ability to reason about genuinely new situations. The number of investors who can credibly claim to do this better than AI, he implies, will be smaller than most currently assume.
So — Is It a Bubble? Marks Breaks the Question Apart
Marks is careful to note that “is AI a bubble?” is in fact several distinct questions, and his answers vary considerably depending on which one is being asked.
Is the technology itself a fad or illusion? No — he says with conviction it is very real and capable of vastly altering the business world. Is widespread application a distant dream? Also no — it is already in use at scale, with some 400 million individuals and 75–80% of companies already engaged with it.
Are infrastructure builders behaving wisely? Here he is more cautious. As in every prior technology wave, the headlong rush to build has historically both accelerated adoption and destroyed capital through malinvestment. There is no reason, he writes, to assume this time will be different.
Will infrastructure investment produce adequate returns? This question, he concedes, cannot yet be answered. The verdict will only be visible in roughly a decade.
Are current valuations rational? Here Marks draws his sharpest distinctions. Hyperscalers like Microsoft, Amazon and Google may be over- or undervalued, but are unlikely to be ruinously mispriced. Established private AI firms like OpenAI and Anthropic have yet to IPO — their valuations will be revealed in due course. Early-stage startups commanding multi-billion-dollar valuations before announcing products are, in his words, lottery tickets. Most lottery participants end up with nothing.
He also raises a structural concern: some AI revenue is currently circular in nature, derived from AI companies purchasing services from one another. The underlying revenue chain must ultimately rest on end users paying for genuine economic value. How much current revenue remains circular, he notes, is an open question.
His final investment recommendation is unchanged from December: no one should go all-in without acknowledging the risk of ruin if things go badly, but no one should stay entirely out and risk missing one of the great technological steps forward. A moderate position, applied with selectivity and prudence, he writes, remains the best approach.
The Question That Keeps Marks Up at Night
The postscript may be the most personally felt section of the memo. Marks returns to the societal implications of AI-driven displacement — a concern he raised in December and says he has not shed.
He cites a friend of his daughter-in-law who heads an advertising copy department and estimates AI could replace 80% of her staff. He questions how many software engineers companies will need when Claude writes the code. He points to driving — one of the most common occupations in America — and notes that Waymo driverless vehicles already handle roughly a fifth of taxi trips in San Francisco.
Claude itself, quoted at some length in this section, frames the economic stakes starkly: a tool that makes an analyst 20% faster is worth roughly 20% of that analyst’s salary. A tool that does the analyst’s entire job on a defined category of tasks is worth the analyst’s entire compensation for those tasks — multiplied across every knowledge worker doing structured analytical work, from legal associates to financial analysts to compliance officers to software engineers. The annual labour value at stake, Claude estimates, runs into the trillions.
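The arithmetic behind that claim is simple enough to check on the back of an envelope. In the sketch below every number is a placeholder assumption, not a figure from the memo; the point is only how fast the replacement scenario compounds relative to the assistance one.

```python
# Back-of-the-envelope version of the assistance-vs-replacement arithmetic.
# All inputs are hypothetical placeholders, not figures from the memo.
analyst_salary = 150_000          # assumed annual compensation (USD)
speedup = 0.20                    # the "20% faster" assistance scenario
share_of_job_automated = 0.60     # assumed slice of tasks an agent does end to end
knowledge_workers = 50_000_000    # assumed workers doing structured analytical work

value_as_assistant = speedup * analyst_salary
value_as_replacement = share_of_job_automated * analyst_salary

print(f"Per worker, as assistant:   ${value_as_assistant:>10,.0f}")
print(f"Per worker, as replacement: ${value_as_replacement:>10,.0f}")
total = value_as_replacement * knowledge_workers
print(f"Across the workforce:       ${total / 1e12:.1f} trillion per year")
```

Even with cautious placeholders, full task replacement across tens of millions of workers lands in the trillions annually, the scale Claude's estimate points to.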
The optimists Marks has spoken with argue, as they always do, that every prior technological disruption — mechanised agriculture, industrial automation, the internet — was predicted to cause mass unemployment and did not. New jobs always materialised. Marks grants that this history is not unreasonable to extrapolate from. But he is neither futurist enough to imagine the new jobs nor optimist enough to trust they will appear on the timeline required.
“A friend wrote to me recently,” he concludes, “that he’d rather be an optimist and wrong than a pessimist and right. Me too. I wish I could be confident that my worrying is unwarranted.”