Why Google’s Nano Banana Pro Image Model Has Such A Weird Name






Naming AI products is a bit hit-or-miss. Some names sound as if they were polished in a branding lab for six months, while others feel as though they were just pulled from a hat. Claude has a certain elegance. Gemini is fine. ChatGPT, on the other hand, is a rubbish name that only became familiar through brute force, once it was suddenly absolutely everywhere.

Nano Banana, Google Gemini’s AI image generator that enables anyone to create realistic-looking pictures, is called Gemini 3 Pro Image Preview in Google’s technical documentation. However, the name “Nano Banana” is both more official and less official than you might think. Google now openly calls it Nano Banana Pro, and even Nano Banana 2, but that wasn’t the original plan.

Nano Banana Pro has such a weird name because that moniker was never intended to be taken seriously. The team needed a temporary name for Arena.ai (then called LMArena), the crowdsourced model-testing platform where systems are compared anonymously, and the codename wasn’t chosen until the last minute. Product manager Naina Raisinghani was pushed to come up with something on the spot and suggested Nano Banana, a combination of two of her nicknames. “Some of my friends call me Naina Banana, and others call me Nano because I’m short and I like computers. So I just smushed my two nicknames together,” Raisinghani explained on Google’s blog, The Keyword.

Nano Banana quickly caught on

Despite Google’s attempts to keep its identity secret on Arena.ai, some people were quick to speculate that the highly rated new image generation and editing tool was a Google product. It first appeared on Arena.ai on August 12, 2025, and within days users were sharing their AI-generated creations on social media. After a week of speculation, a couple of X posts fueled users’ suspicions: Logan Kilpatrick, product lead for Google AI Studio, posted a banana emoji, and Naina Raisinghani, the product manager behind the name, shared a picture of a banana gaffer-taped to a wall. Nano Banana was officially launched on August 26, 2025, upstaging ChatGPT as the most popular AI image generator.

It’s not the first tech product with “banana” in its name. We might be more familiar with Apple, BlackBerry, and Raspberry Pi, but you can also purchase a bananaphone — a banana-shaped Bluetooth headset to pair with your smartphone. There’s also a 2019 research paper that introduced the BANANAS algorithm, which stands for Bayesian Optimization with Neural Architectures for Neural Architecture Search. (You have to respect the contrivance even if it doesn’t quite work.) Tech companies are still naming things after fruit. OpenAI internally used “Strawberry” for the project that became o1, and Meta is currently working on an AI model nicknamed “Avocado.”

Nano Banana may not have been meant as the official name, but it stuck because people liked it. Companies spend fortunes chasing that kind of stickiness, and Google stumbled into it. The model got noticed, the odd codename was memorable, and Google was smart enough not to crush the joke with a committee-approved replacement.








A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed many users of the AI tool and the X social media platform to create nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that, over a period of nine days, Grok users made 4.4 million “undressed” or “nudified” images, accounting for 41% of all the images created in that span.

X, xAI, and their safety and child safety divisions did not immediately respond to a request for comment.

The wave of “undressed” images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating store policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools have become at creating realistic-seeming content. The new complaint compares Grok’s self-proclaimed “spicy AI” generation to the “dark arts,” citing the ease of subjecting children to “any pose, however sick, however fetishized, however unlawful.”

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed its tech to third-party companies abroad, which sold subscriptions that abusers used to make child sexual abuse images featuring the victims’ faces and likenesses. Because those requests ran through xAI’s servers, the complaint argues, the company is liable.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 first learned that abusive, AI-generated sexual material of her was circulating on the web from an anonymous Instagram message in early December. The filing says the anonymous Instagram user told her about a Discord server where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.




