Anthropic Reins In Subscribers’ Unlimited AI Use for OpenClaw


Anthropic over the weekend told subscribers they'd have to pay up for heavy use of its Claude AI models to power third-party agents like OpenClaw.

Users with monthly subscriptions can still use Claude models, including Opus, Sonnet and Haiku, through these third-party agents. But they'll have to pay via Anthropic's API or use a "pay-as-you-go option" that is billed separately from their Claude subscription.

"The $20/month all-you-can-eat buffet just closed," wrote AI product manager Aakash Gupta on X.

At the same time, Anthropic recently announced new features that bring some of what made OpenClaw so popular into Claude itself. For example, Claude can now use your computer even when you're not at it.

Why this policy matters 

There has been growing tension between OpenAI and Anthropic, recently inflamed by the controversies involving contracts with the US Defense Department. But there is also tension between users who want to run autonomous AI agents constantly and the AI labs that are trying to control costs by managing the tasks their models are used for. 


Claude is a chatbot created to be prompted by humans, not to power workflows for millions of AI agents. Agent tools like Manus and OpenClaw demand far more compute and burn through tokens faster than ordinary human chat. Anthropic has already taken steps to address the demand heavy agent users bring, such as a five-hour session cap on the models during peak periods.

“We’ve been working to manage demand across the board, but these tools put an outsized strain on our systems,” Anthropic wrote in its email to customers.

OpenAI has been all-in on agentic tools. Early this year, the AI company hired Peter Steinberger, the creator of OpenClaw, with the aim of bringing AI agents to a broad audience. Steinberger took to X over the weekend to criticize Anthropic's new policy.

"Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source," he wrote.

(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The future of agent compute power 

Friction between heavy agent users and AI companies is likely to get worse. These AI agent tools are extremely powerful: they can run for hours, take actions across apps like Gmail, Slack and iMessage, and work autonomously far longer and faster than a human could. That power makes them far more expensive to run than a person prompting a chatbot. AI companies will likely keep pushing these compute costs onto heavy users through price increases or measures like those Anthropic has taken.







A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed X social media users to create nonconsensual AI-generated intimate images, sometimes known as deepfake porn. Reports estimate that over a nine-day period, Grok users made 4.4 million "undressed" or "nudified" images, 41% of all images created.

X and xAI, including their safety and child safety divisions, did not immediately respond to a request for comment.

The wave of "undressed" images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that seems realistic. The new complaint compares Grok's self-proclaimed "spicy AI" generation to the "dark arts," citing the ease with which it can subject children to "any pose, however sick, however fetishized, however unlawful."

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed its tech to third-party companies abroad, which sold subscriptions that abusers used to make child sexual abuse images featuring the faces and likenesses of the victims. Because the requests ran through xAI's servers, the complaint argues, the company is liable.

The lawsuit was filed by three Jane Does, pseudonyms used to protect the teens' identities. Jane Doe 1 first learned that abusive, AI-generated sexual material of her was circulating on the web from an anonymous Instagram message in early December. The filing says the anonymous Instagram user told her about a Discord server where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.
