Beyond the Screen: Clickwrap Principles Reach Crypto Kiosks


A recent decision from an Indiana federal court underscores that the principles that make “clickwrap” assent enforceable are not limited to websites and apps accessed on smartphones and laptops. In Beckett v. Bitcoin Depot, Inc., No. 25-01450 (S.D. Ind. Feb. 26, 2026), the court granted a Bitcoin ATM operator’s motion to compel arbitration, finding that the plaintiff—who had fallen victim to a cryptocurrency scam—assented to the company’s clickwrap terms before completing the transactions.

The ruling is notable because most electronic “clickwrap” contracting cases involve websites or mobile apps. While there was no reason to expect a different analysis in the context of a kiosk, Beckett confirms that those familiar principles extend into the physical world of kiosk screens and self-service terminals.

The takeaways are clear:

  • First, contracting rigor matters just as much in kiosk environments as it does online. Providers should implement thoughtfully designed user flows that mirror best practices from ecommerce: clear and uncluttered interfaces, conspicuous presentation of terms, affirmative assent mechanisms, and reliable audit logs.
  • Second, and particular to the crypto context of this case, robust anti-fraud warnings can serve a dual purpose. Beyond helping protect consumers, they may also strengthen litigation defenses, particularly on issues of notice, assumption of risk and causation.
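The first takeaway—affirmative assent plus a reliable audit log—can be illustrated with a minimal sketch. This is not Bitcoin Depot’s system; the table, field names, and SQLite backing store are all hypothetical, chosen only to show the kind of record that lets an operator later prove which terms a user saw and accepted:

```python
"""Illustrative clickwrap assent log for a kiosk; all names are
hypothetical, not drawn from any real operator's system."""
import hashlib
import sqlite3
from datetime import datetime, timezone

def record_assent(db, session_id, terms_text, terms_version):
    # Hash the exact terms text presented on-screen, so the log can
    # later establish precisely what the user agreed to.
    terms_hash = hashlib.sha256(terms_text.encode("utf-8")).hexdigest()
    db.execute(
        "INSERT INTO assent_log (session_id, terms_version, terms_hash, accepted_at) "
        "VALUES (?, ?, ?, ?)",
        (session_id, terms_version, terms_hash,
         datetime.now(timezone.utc).isoformat()),
    )
    db.commit()
    return terms_hash

# Example setup and use with an in-memory database.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE assent_log (session_id TEXT, terms_version TEXT, "
    "terms_hash TEXT, accepted_at TEXT)"
)
receipt = record_assent(db, "kiosk-42-session-1", "Example terms text.", "2026-01")
```

Storing a hash of the terms (rather than only a version label) guards against later disputes about which revision of the terms was actually displayed.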

The Facts

Bitcoin ATMs (or “BTMs”) are kiosks that allow users to purchase—and sometimes sell—cryptocurrency. Rather than dispensing cash, they typically accept cash or debit card payments and transfer cryptocurrency to a wallet specified by the user, often via QR code.

The plaintiff, a retiree, was targeted in a “tech support” impersonation scam. He was persuaded to withdraw cash from his bank accounts on three separate occasions and use a BTM operated by Bitcoin Depot to transfer funds to a third-party digital wallet controlled by the scammers. This type of scam is common and was the subject of a September 2024 Federal Trade Commission (FTC) consumer alert.

In the end, the funds could not be recovered, and the plaintiff brought suit asserting tort and consumer protection claims and alleging that Bitcoin Depot failed to implement adequate safeguards.

The Contracting Flow

Before completing each transaction, the plaintiff was required to accept Bitcoin Depot’s terms and conditions on-screen. The process included multiple layers of warning and verification:

  • A prominent red-text warning cautioned: “If someone else sent you to this machine and provided you with a QR Code or wallet ID to send funds to, it is most likely a scam.”
  • A follow-up text message warned against sending funds to purported government officials, law enforcement or tech support, and against using third-party QR codes.
  • The user was required to enter a PIN sent via text message.
  • The interface then presented a direct prompt: “ARE YOU BEING SCAMMED?” along with examples of common fraud scenarios and a warning that losses due to fraudulent transactions may not be recoverable.
  • Finally, the user had to confirm that the destination wallet belonged to them; selecting any other option would cancel the transaction.

Despite these warnings, the plaintiff confirmed—incorrectly—that the destination wallet was his own.
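The layered flow described above amounts to a simple gate: the transaction proceeds only if every verification step is affirmatively passed, and any negative or missing answer cancels it. The sketch below models that logic; the step names are illustrative, not Bitcoin Depot’s actual implementation:

```python
"""Hypothetical gate for a kiosk transaction flow: every step must be
affirmed or the transaction is canceled. Step names are illustrative."""

STEPS = [
    "scam_warning_shown",          # red-text warning displayed
    "sms_pin_verified",            # PIN from text message entered correctly
    "scam_prompt_acknowledged",    # "ARE YOU BEING SCAMMED?" screen answered
    "wallet_ownership_confirmed",  # user confirms the wallet is their own
]

def can_complete(responses: dict) -> bool:
    # Any step that is missing or not affirmatively True cancels
    # the transaction, mirroring the cancel-on-any-other-option design.
    return all(responses.get(step) is True for step in STEPS)
```

The cancel-by-default design matters for the legal analysis: completing the transaction requires a series of affirmative acts, which is the pattern courts look for when evaluating clickwrap assent.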

The Court’s Ruling

Bitcoin Depot moved to compel arbitration under its terms of service. The court granted the motion, emphasizing that the plaintiff did not dispute that he had assented to the arbitration agreement on three separate occasions. Arguments regarding unconscionability and other enforceability issues were left for the arbitrator to decide.

Final Thoughts

This case reinforces a straightforward but important point: enforceable digital contracting principles apply wherever transactions occur, including at physical kiosks.

At the same time, the case hints at future litigation risk. While Bitcoin Depot secured a procedural win, different facts could lead to closer scrutiny of a provider’s safeguards. Plaintiffs may increasingly attempt to move beyond contract formation and challenge the reasonableness and adequacy of providers’ risk controls and safety messaging. For example, the complaint in Beckett outlines several allegedly “inadequate safeguards,” such as claims that Bitcoin Depot failed to implement transaction limits for first-time elderly users, monitor large sequential deposits, or flag certain scenarios like repeated maximum-value deposits to the same digital wallet.






A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed many X social media users to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that Grok users made 4.4 million “undressed” or “nudified” images—41% of all images created—over a period of nine days.

X, xAI and its safety and child safety divisions did not immediately respond to a request for comment.

The wave of “undressed” images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed (PDF) by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that seems realistic. The new complaint compares Grok’s self-proclaimed “spicy AI” generation to the “dark arts” with its ease of subjecting children to “any pose, however sick, however fetishized, however unlawful.”

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.

AI Atlas

The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI’s servers, which makes the company liable, the complaint argues.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 first learned, via an anonymous Instagram message in early December, that abusive AI-generated sexual material depicting her was circulating on the web. The filing says the anonymous Instagram user told her about a Discord server where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.




