Pixel 10 Pro Owners Prove The Phone’s Flashlight Can Damage The Device

A flashlight is one of those features every smartphone today is simply expected to have, and it rarely gets talked about; conversations about smartphones mostly center on things like the camera or the processor. Unfortunately for Google Pixel 10 Pro and Pro XL users, the flashlight has become more than an afterthought, and not in a good way. Over the past few months, a growing number of them have been posting on Reddit about the same issue: they claim the flashlight generates so much heat that it melts and burns through the plastic lens covering the LED module. This isn’t the first time Pixel users have reported major problems with basic functionality this year, either, though the previous ones were software-related.

This new one is hardware. As you can see in the image below, shared by one of the users on Reddit, there is a charred, dark hole right in the middle of the flashlight, while the rest of the diffuser looks intact. Normally, the whole flashlight has a uniform whitish color, certainly with no deep black spot in the middle. The same user also said they could literally burn paper by holding it against the flashlight, producing a visible trail of smoke.

The Redditor wasn’t alone, although not every report is that dramatic. Several users have described the light as simply getting uncomfortably hot during longer use. That heat becomes a problem when the flashlight is mistakenly left on too long, as some users learned the hard way after falling asleep with it on or leaving it running in a pocket for extended periods.

So what’s actually going on?

Obviously, with reports like these making the rounds, other Pixel owners are getting a little concerned about their flashlights doubling as ray guns, too. Several users have noticed a small orange or yellow dot sitting in the center of the lens and assumed the worst. However, that’s nothing to worry about, as it’s just the LED emitter showing through the diffuser. It looks like that in every unit. That said, the darker, more irregular marks from some photos don’t look normal at all.

Now, this problem could have been prevented with thermal safeguards that dim or shut off the flashlight before temperatures climb too high, or simply by turning it off automatically after a fixed period. But the Pixel 10 Pro doesn’t appear to have anything like this. Also worth noting: with the Android 16 QPR3 update, Google added an adjustable brightness slider for the flashlight, allowing users to crank the intensity higher than the default. There’s no confirmed connection between that and these reports, though.

So far, Google hasn’t made any official statement about these claims. The PixelCommunity account on Reddit has reached out to individual users via DM, but some owners say their warranty claims have been denied. For now, it’s better to be safe than sorry, so don’t leave your flashlight on for extended periods, especially in enclosed spaces like a pocket or bag. But if you do spot a dark mark on the lens that definitely wasn’t there before, contact Google support and try to get it fixed under warranty. All in all, the problem isn’t as major as the worst smartphone recall in history, but the optics aren’t great for a flagship either.





A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed users of the AI tool and the X social media platform to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that Grok users made 4.4 million “undressed” or “nudified” images, 41% of all images created, over a period of nine days.

X, xAI and its safety and child safety divisions did not immediately respond to a request for comment.

The wave of “undressed” images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed (PDF) by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that seems realistic. The new complaint compares Grok’s self-proclaimed “spicy AI” generation to the “dark arts” with its ease of subjecting children to “any pose, however sick, however fetishized, however unlawful.”

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI’s servers, which makes the company liable, the complaint argues.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 was first alerted to the fact that abusive, AI-generated sexual material of her was circulating on the web by an anonymous Instagram message in early December. The filing says she was told about a Discord server by the anonymous Instagram user, where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.
