Experts Keep Predicting AI Doom, But Mark Cuban Sees A Different Future

It seems that every other news headline and podcast these days focuses on AI doom and gloom. Claude has, after all, said it would kill humans to save itself. For every hot take on AI’s usefulness, a dozen others lean heavily on a fraught future of AI-led dystopia. The news cycle is already rife with triggering stories, but it’s always worth stepping back and critically considering who is framing an article or podcast in a particular way. It’s usually for clicks. It’s also possible that the person spouting said AI horrors lacks the imagination, or the track record in innovation, job creation, and cultural impact, to offer a meaningful perspective. Mark Cuban, the Pittsburgh native and boomer billionaire, is more bullish and outwardly optimistic about AI. It makes sense that such a successful entrepreneur would see the business-building potential of a tool like this.

In a recent interview with Dr. Eric Bricker on the AHealthcareZ YouTube channel, Cuban speaks of a world where the “kid in a basement” can leverage LLMs to great effect, suggesting future CEOs hiding among us. Sure, this isn’t a groundbreaking take, but coming from the likes of Cuban, it could inspire Gen Alpha to create the next big thing. Experts and podcast pundits tend to focus on potential negatives, viewing this democratization of computation pessimistically. For a clearer perspective, though, the question isn’t “AI will save us” versus “AI will kill us.” It’s about who gets leverage, and whether they can stay in control as that leverage expands to bigger audiences, driving larger advances.

Mark Cuban sees opportunity in AI

We’re living through an era of rapid change, with a line drawn in the sand between the pro-AI camp and the AI doomers. The split reflects the intense disruption caused by tools like ChatGPT, Claude, and Gemini. Even insiders who help train these large language models worry that AI can be a dangerous tool. These worries aren’t new, though: in 2023, when the technology was fresh, some experts suggested pausing or halting AI development altogether. We’ve come a long way since then, and thankfully, these LLMs haven’t led to the apocalypse. If anything, the chatbots keep getting better and better, with big changes coming to ChatGPT.

Tapping into Cuban’s mindset, we can see a very optimistic, hopeful view of technology: consider how many young people have learned new things, coded apps, started businesses, and improved aspects of their lives since 2023. His championing of the technology’s ability to provide free, tailor-made curricula to that proverbial “kid in the basement” cuts through the doom and gloom.

Cuban doesn’t claim that AI is perfect — he even acknowledges those dreaded hallucinations — but he’s betting the net effect will vastly improve the landscape for learning and research. The entrepreneurial side is where Cuban’s perspective is most evident, where he imagines someone using LLMs to help with patents, speeding up what was historically quite a lengthy process. Overall, pro-AI advocates like Cuban are just as important as the skeptics in balancing the narrative and hopefully inspiring innovation in the process.






A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed X social media users to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that Grok users made 4.4 million “undressed” or “nudified” images, 41% of all images created, over a period of nine days.

X, xAI and its safety and child safety divisions did not immediately respond to a request for comment.

The wave of “undressed” images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that seems realistic. The new complaint compares Grok’s self-proclaimed “spicy AI” generation to the “dark arts” with its ease of subjecting children to “any pose, however sick, however fetishized, however unlawful.”

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI’s servers, which makes the company liable, the complaint argues.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 was first alerted to the fact that abusive, AI-generated sexual material of her was circulating on the web by an anonymous Instagram message in early December. The filing says she was told about a Discord server by the anonymous Instagram user, where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.
