Anthropic’s AI Assistant Claude Is Now Available in Microsoft Word


Claude, meet Clippy. If you’ve got Microsoft Word, you can now use Anthropic’s AI assistant Claude within the software as an alternative to Copilot, the company announced in a LinkedIn post.

The add-in is now available to Claude customers with Team or Enterprise plans, and is free to try. The feature is currently in beta testing, and Anthropic did not indicate when a broader rollout would occur.

Companies use the beta period to test new products with a smaller subset of people to discover bugs, gauge usability and get feedback. Then, they can make fixes and hone the product before a wider release.


Anthropic continues to get Claude into different workplace software tools. Launched in June 2024, the AI assistant is available in Google Workspace programs such as Gmail, Google Calendar and Google Drive. Claude can also be integrated into Slack, the communication and collaboration platform. 

Last week, Anthropic announced that its AI agent tool Claude Cowork is now available on paid plans for both MacOS and Windows.

For many Word users, Claude could be a welcome alternative to Copilot, an AI assistant launched by Microsoft in February 2023. Copilot is reportedly losing ground to competitors, and its ubiquity in Windows 11 and many other Microsoft software programs has been a sore spot for some customers.

Copilot isn’t the first Microsoft Word helper to exasperate people. People of a certain age will recall Clippy, a digital assistant built into Word in 1996. Though now considered nostalgic and iconic, Clippy irritated Word users at the time by popping up with often-useless suggestions, and was hard to disable. 

Clippy hasn't been enabled by default since April 11, 2001, but it lives on as a Chrome extension that pops up whenever you visit a web page (for appearances only, as it doesn't provide any assistance).

In its announcement this week, Anthropic said Claude, like Copilot, can handle a variety of tasks. It can create new content and help edit existing documents. For document generation, you can "open your template and describe what you need." For editing existing documents, you can "highlight a paragraph and tell Claude to tighten it, shift the tone or cut passive voice," and it can identify broken cross-references.

Anthropic also touted Claude’s ability to work with comments others might add to a document. The company said Claude can read and analyze comments, then respond to them as instructed.

In an example from the announcement, Claude was asked to “summarize what the partner’s counsel changed” in a mutual nondisclosure agreement. Claude then listed several changes made to the NDA, including two that were potential “dealbreakers.” The customer instructed Claude to push back on those changes and send back new contract language to the other party.

There were many comments on Anthropic’s LinkedIn post about the Claude add-in for Word. One person complained that “sometimes Claude decides on its own to generate an MS Office document,” while someone else commented, “Love to see it and was waiting for this release.”

A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed X users to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that over a period of nine days, Grok users made 4.4 million "undressed" or "nudified" images, 41% of the total number of images created.

X, xAI and their safety and child safety teams did not immediately respond to a request for comment.

The wave of “undressed” images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed (PDF) by a South Carolina woman in late January.

The dehumanizing trend highlighted just how adept modern AI image tools have become at creating realistic-looking content. The new complaint compares Grok's self-proclaimed "spicy AI" generation to the "dark arts" for the ease with which it subjects children to "any pose, however sick, however fetishized, however unlawful."

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI’s servers, which makes the company liable, the complaint argues.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 first learned that abusive, AI-generated sexual material of her was circulating on the web through an anonymous Instagram message in early December. The filing says the anonymous Instagram user told her about a Discord server where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.
