Siri Gets Smarter: Apple Taps Multiple Chatbots for AI Upgrade


Apple’s Siri is trying to make friends in the AI world. Apple is taking the unusual step of connecting Siri with multiple AI chatbots, including Google’s Gemini, Anthropic’s Claude and OpenAI’s ChatGPT, Bloomberg reported Thursday, citing unnamed Apple employees.

A representative from Apple didn’t immediately respond to a request for comment.

This isn’t the first time we’ve heard about Siri connecting with other AI systems to bolster its capabilities as Apple continues to delay the revamped version of the voice assistant, now expected in late 2026. Apple first partnered with OpenAI in 2024 to bring ChatGPT capabilities to Siri. Then came reports that Apple was turning to Google’s Gemini for similar capabilities.

Now, it looks like Apple is opening up Siri partnerships with all kinds of AI while maintaining its proprietary voice assistant. If you have a popular chatbot installed on your iPhone, Apple wants you to be able to use it in Siri. 

Bloomberg says these chatbot options will work via Extensions, which let Siri users choose which connections they want Siri to have, including supported chatbot apps. When talking to Siri, users would specify the chatbot they want to tap into for extra information and services.

While we don’t know which AI models Apple would allow Siri to use, the company has plenty of chatbots in its App Store, including Meta AI, Grok and Microsoft’s Copilot. Amazon’s Alexa (now Alexa Plus) is there too, although I’m doubtful Amazon would let Apple leverage such a direct competitor to Siri at this time.

Why is Apple giving Siri so many chatbot connections?

Apple is taking Siri in a very interesting direction, but I have questions about how it will work. (Sebastien Bozon/Getty Images)

This makes sense from Apple’s perspective. Chatbots can rise and fall from glory fast — look at how quickly OpenAI ditched video generator Sora this week. By including all of them, Apple doesn’t have to worry about which AI is the flavor of the month. If one suddenly falls out of favor or gets cancelled, Siri suffers minimal losses.

The move also means Apple can sidestep some of the challenges it has faced in developing its own Siri AI, including a standalone Siri app and more deeply integrated versions of Siri for the smart home.


As Bloomberg points out, this could also be lucrative for Apple. To use their AI of choice in Siri, people would probably have to subscribe to the AI service through Apple’s own App Store, which means Apple gets a cut of the sale.

But there’s something else I’m curious about: How much will Siri’s abilities expand when it’s working with third-party chatbots? For example, I can use Alexa Plus to order an Uber or control my smart heater. Could I do the same with Siri if the two are connected? Or will Siri be able to deliver advice directly from any chatbot in its own voice? That could get awkward if the AI starts giving bad information or generating explicit content, as we’ve seen Grok do.

We’ll likely find out more on June 8, when Apple holds its Worldwide Developers Conference and discusses iOS 27. If you like Siri more than most voice assistants (we’ve found it’s especially popular with Gen Z), hold tight until then. 




A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed users of the AI and of the X social media platform to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that over a period of nine days, Grok users made 4.4 million "undressed" or "nudified" images, 41% of all the images created in that span.

X, xAI and its safety and child safety divisions did not immediately respond to a request for comment.

The wave of “undressed” images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools are of creating content that seems realistic. The new complaint compares Grok’s self-proclaimed “spicy AI” generation to the “dark arts” for the ease with which it subjects children to “any pose, however sick, however fetishized, however unlawful.”

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI’s servers, which makes the company liable, the complaint argues.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 was first alerted to the fact that abusive, AI-generated sexual material of her was circulating on the web by an anonymous Instagram message in early December. The filing says the anonymous Instagram user told her about a Discord server where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.
