The Tasalli
Claude AI Mistakes Spark Major Anthropic User Backlash
Business · Apr 15, 2026

Editorial Staff



Summary

Anthropic, a major player in the artificial intelligence industry, is currently dealing with a wave of complaints from its users. Many people who use the Claude AI chatbot say the system has become less helpful and more prone to errors. These users claim the AI is failing to follow instructions and is taking shortcuts that lead to mistakes. This backlash is happening at a critical time for Anthropic, as the company is valued at $380 billion and is preparing to sell shares to the public for the first time.

Main Impact

The reported drop in performance is a serious problem for Anthropic’s reputation. The company has always marketed itself as being more honest and reliable than its competitors. However, many developers now feel that the company changed how the AI works without being clear about it. If professional users stop trusting Claude for complex work, Anthropic could lose the massive growth it has seen over the last year. This situation also highlights a bigger problem in the AI world: the struggle to find enough computing power to keep up with millions of new users.

Key Details

What Happened

The trouble started when heavy users noticed that Claude was no longer performing well on difficult tasks. Developers who use "Claude Code" to write software found that the AI was making sloppy mistakes. It emerged that Anthropic had quietly lowered a setting called the "effort" level, moving the default down to "medium effort." This change was meant to save "tokens," which are the small units of data the AI processes. Processing fewer tokens saves the company money and uses less computing power, but it also makes the AI less thorough.
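The trade-off described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the effort level names mirror the article, but the token-budget function and the scaling factors in it are assumptions made for this sketch, not Anthropic's actual implementation.

```python
# Hypothetical sketch (not Anthropic's real code): a default "effort"
# level capping how many tokens a model may spend on one task.
# The scaling factors below are illustrative assumptions.

EFFORT_SCALE = {"low": 0.25, "medium": 0.5, "high": 1.0}

def token_budget(effort: str, full_budget: int = 8000) -> int:
    """Return the token budget allowed at a given effort level."""
    return int(full_budget * EFFORT_SCALE[effort])

# Moving the default from "high" to "medium" halves the work done
# per request: cheaper to run, but less room for thorough reasoning.
print(token_budget("high"))    # 8000
print(token_budget("medium"))  # 4000
```

The point of the sketch is that the saving is structural: if the default budget is cut, every request is affected, including the difficult engineering tasks where the extra tokens mattered most.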

Important Numbers and Facts

Anthropic has grown at a very fast pace recently. Its annual revenue run rate jumped from $9 billion at the end of 2025 to $30 billion today, and the company is now valued at $380 billion. Despite this financial success, there are signs that the company is struggling to handle its own growth. Anthropic has faced several system outages lately and has had to limit how much people can use the AI during busy times of the day. Meanwhile, competitors like OpenAI claim that Anthropic did not secure enough server capacity to handle its new customers.

Background and Context

Running a large AI model requires a huge amount of electricity and specialized computer chips. These resources are very expensive and hard to get. As more people start using AI, companies have to decide between giving every user the best possible experience or saving resources so the system doesn't crash. Anthropic recently gained many new users after a public disagreement with the U.S. government. Many people switched from ChatGPT to Claude because they liked Anthropic's focus on safety and privacy. Now, those same users are worried that the company is cutting corners to save money on computer costs.

Public or Industry Reaction

The reaction from the tech community has been sharp. Stella Laurenzo, a top AI expert at the chip company AMD, shared an analysis showing that Claude has become "unusable" for hard engineering jobs. She noted that the AI now tries to finish tasks too quickly without looking at all the necessary information. Other experts from companies like Microsoft have also complained on social media. They say that even when they tell the AI to try its hardest, it still ignores instructions and repeats the same mistakes. Anthropic executives have tried to defend the changes, saying they were responding to feedback from users who thought the AI was originally using too much data.

What This Means Going Forward

To fix the problem, Anthropic says it will start moving business and team users back to a "high effort" setting by default. This will make the AI smarter again, but it might also make it slower. The company is also working on a brand-new model called "Mythos." This new version is supposed to be much more powerful than anything they have released before. However, Mythos is also very large and expensive to run. Some experts wonder if Anthropic even has enough computing power to let everyone use this new model when it is finally ready.

Final Take

Anthropic is learning that being a leader in AI comes with high expectations. While saving on computer costs is a smart business move, doing so at the expense of quality can drive away the very developers who made the company successful. To stay on top, the company must find a way to balance its massive growth with the high-quality performance its users expect.

Frequently Asked Questions

Why is Claude AI performing worse than before?

Anthropic lowered the default "effort" level of the AI to save on computing power and data usage. This makes the AI faster and cheaper to run, but it also makes it more likely to make mistakes on hard tasks.

What is Claude Code?

Claude Code is a specialized tool made by Anthropic that helps software developers write and fix computer code. It is one of the products most affected by the recent performance changes.

Is Anthropic running out of computing power?

There is a lot of speculation that the company is facing a shortage of servers and chips. While Anthropic denies it is purposely making the AI worse, it has admitted to using settings that consume less power in order to manage the high demand from users.