Roblox Will Pay $12 Million to Settle Nevada Child Safety Lawsuit


Popular gaming platform Roblox agreed to pay more than $12 million and implement new safety features as part of a settlement with the state of Nevada. The settlement comes amid several lawsuits accusing the company of failing to protect children on its platform. 

The agreement resolves potential litigation over allegations that Roblox failed to adequately safeguard children using its online gaming platform, Nevada Attorney General Aaron Ford said in a press release on Wednesday. 

As part of the deal, Roblox will spend $10 million over three years to encourage children to engage in non-digital activities, as well as institute age verification for all users. This will include “facial age estimation technology and government-issued ID for age assurance, and will use behavioral monitoring to identify users who may have been aged incorrectly,” according to the press release. 

“The injunctive relief that Roblox has agreed to will give parents the tools they need to protect their children on the platform; institute default protections to block predators from engaging with children; and ensure that messages involving minors are not encrypted,” Ford said in the press release.

Roblox also committed to spending $1 million over two years on a campaign to educate minors and adults about online safety and another $1.5 million to develop a law enforcement liaison position to work with state law enforcement agencies over concerns about the platform. 

Roblox Chief Safety Officer Matt Kaufman said it’s part of the company’s “work to establish a new standard for digital safety.”

“This resolution creates a blueprint for how industry and regulators can work together to protect the next generation of digital citizens,” Kaufman said Thursday. “We have no finish line when it comes to safety.”

Roblox is under significant legal pressure amid more than 140 lawsuits, according to Reuters. The suits, filed in 2025, allege the company knowingly created a gaming platform that allowed child predators to target minors. 

The company also faces lawsuits from state attorneys general in Texas, Kentucky, Louisiana, Iowa, Nebraska, Tennessee and Florida over similar accusations.

Age-based accounts coming soon

Two days before the settlement announcement, Roblox CEO and founder David Baszucki revealed new accounts for younger Roblox users.

Roblox Kids will be available for children ages 5 to 8, and Roblox Select is for those ages 9 to 15. Users older than 16 will be in their own age group, simply called “Roblox.” Roblox is reportedly used by nearly half of US children under 16.

Kids and Select accounts will be assigned to those age groups as determined by Roblox’s age-check technology or by a verified parent.

Unmonitored chat on the platform has been a point of criticism, as it allows predators to chat with children. Kids accounts will have chat turned off by default, with limited access to games rated Minimal or Mild as determined by the platform. Select accounts will have chat with safeguards and access to games with Moderate content, which the platform describes as having “moderate violence, light realistic blood, moderate crude humor, unplayable gambling content, and/or moderate fear.”

These new age-based accounts will roll out sometime in early June. 





Teens Sue Elon Musk’s xAI, Alleging Grok Created Child Sexual Abuse Images of Them

A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed users of the AI tool and the X social media platform to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that Grok users made 4.4 million “undressed” or “nudified” images over a period of nine days, amounting to 41% of the total number of images created. 

X, xAI and their safety and child safety divisions did not immediately respond to a request for comment.

The wave of “undressed” images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed (PDF) by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that seems realistic. The new complaint compares Grok’s self-proclaimed “spicy AI” generation to the “dark arts” with its ease of subjecting children to “any pose, however sick, however fetishized, however unlawful.”

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI’s servers, which makes the company liable, the complaint argues.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 first learned that abusive, AI-generated sexual material of her was circulating on the web from an anonymous Instagram message in early December. The filing says the anonymous Instagram user told her about a Discord server where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.




