By Adam Levine
On Tuesday, the roof caved in on software, media, and information company stocks like Salesforce, Reddit, and Thomson Reuters. The proximate cause was the introduction of the latest artificial-intelligence tools from start-up Anthropic, which much of the market interpreted as an existential threat to any company that doesn't make physical goods.
The handwringing is overblown. While the AI tools show the potential power of AI in office work, they're not ready for prime time and, in fact, could prove dangerous to the companies that use them. More importantly, these agents remain dependent on the same software and information sources that investors seem to think they will replace.
Once the dust settles, there will be a lot of companies in the software, media, and information sectors with attractive valuations. Private-equity buyers are probably licking their chops; Orlando Bravo, the founder of private-equity firm Thoma Bravo, said as much in an interview from Davos last month. Salesforce, the pioneer in cloud software, now trades for just 15 times forward earnings, its lowest price/earnings ratio on record.
The technologies at the root of the selloff are all part of Claude Cowork, a desktop agent that for now is available only on Mac computers. Agents use large language models to accomplish a complex series of tasks from a simple prompt. For example: "Go through my emails and messages, find all the deliverables I have this week, and create first drafts of them, including any charts and slide decks. Then email the drafts to the team and solicit feedback."
On Wednesday, Anthropic released 10 plugins for Cowork that aim to accomplish such tasks in a variety of areas, from sales and finance to legal and customer support. The news expanded the agent disruption worries beyond enterprise software into information services. Thomson Reuters fell 16% Tuesday, S&P Global 11%, and advertising holding company WPP 13%.
Science fiction author Arthur C. Clarke once wrote, "Any sufficiently advanced technology is indistinguishable from magic," and agents seem like magic when they work. But they don't always work. When agents are given a lot of access and privileges, disasters can occur.
All of the AI worries stem from a misunderstanding about how large language models like Claude and OpenAI's ChatGPT actually work. These are advanced probability machines that guess their way through sentences one word at a time, based on the human language in their training data. In the end, a model is just trying to sound like the humans in that data, be it a renowned physicist or a social media troll. The models are so good at mimicking humans that we apply words like "reasoning" and "feeling" to them, even though these probability machines do nothing of the sort.
As good as these models are at sounding like an all-knowing person, they also regularly produce believable, authoritatively worded fabrications known as hallucinations. A lot of research has gone into eliminating hallucinations, but the problem remains unsolved. No one is exactly sure why they happen.
In tiny print at the bottom of its Claude chatbot, Anthropic warns that "Claude is AI and can make mistakes. Please double-check cited sources." I use Claude, ChatGPT, and Google's Gemini every day, and I can vouch for the truth in the Claude warning. In its documentation, Anthropic adds, "Users should not rely on Claude as a singular source of truth and should carefully scrutinize any high-stakes advice given by Claude."
Does that sound like a reliable assistant to which you would hand over your computer?
This isn't just theoretical; we already see the dangers of hallucinations in the real world. Shares of Thomson Reuters, which provides news and information services to the legal profession, got clobbered because of the Cowork legal plugin, which promises to "review contracts, triage NDAs, navigate compliance, assess risk, prep for meetings, and draft templated responses."
But lawyers using AI language models to speed up their work have already run into a lot of trouble. HEC business school researcher Damien Charlotin maintains a list of incidents in which lawyers filed AI-written briefs that contained completely fabricated precedents and quotes. He's up to 355 such incidents, with 34 in 2026. Many of these attorneys face fines and professional discipline, and some may be subject to malpractice suits from their clients.
"The main thing to know is that Claude can take potentially destructive actions," Anthropic said in the safety section of its Cowork launch announcement. Anthropic also recommends against using Cowork for any regulated workload, such as medical records. Agents are also vulnerable to an unsolved class of cyberattacks known as prompt injections, a threat organizations aren't prepared for.
This isn't to say that agents won't ever perform business tasks without hallucinations and security holes. It's coming one day, just not soon. Anyone using them for mission-critical tasks today will eventually find themselves knee-deep in a catastrophe.
But even then, agents aren't the end of software. On GitHub, the Microsoft-owned site that is the largest host of code repositories, Anthropic lists the software its Claude agents currently use, and the list is full of the biggest names in software. In fact, the Cowork legal plugin relies on Microsoft 365, Jira, Slack, and Box to accomplish its tasks. No one at Anthropic has replicated any of these applications with ones coded by the company's Claude Code agent.
Finally, as information and media businesses shed value in the market, investors would be wise to think through the broader implications. Training AI models requires human-created text, images, and video for the models to mimic. So far, AI companies have used just about every book ever written and as much of the internet as they could get. But if AI destroys the sources of that human work, where would the models be?
That would be the true apocalypse.
Write to Adam Levine at adam.levine@barrons.com
This content was created by Barron's, which is operated by Dow Jones & Co. Barron's is published independently from Dow Jones Newswires and The Wall Street Journal.
(END) Dow Jones Newswires
February 06, 2026 02:00 ET (07:00 GMT)
Copyright (c) 2026 Dow Jones & Company, Inc.