The best data removal services of 2026: Delete yourself from the internet


It’s impossible to prevent all of your data from being harvested, traded, and sold by third parties, especially considering the amount of time we spend online and the numerous digital services that collect our information by default. That’s not to say you can’t rise up against data brokers storing, sharing, and selling your data. You can also lock down your individual accounts to reduce your data footprint. 

The problem is that vast amounts of data related to you are probably already in the hands of data brokers, who sell it for a profit. Manual removals only go so far and are usually time-consuming processes, but the best data removal services shoulder this task for you. 



What’s the best personal data removal service right now?

My top pick to delete yourself from the internet is Incogni. Owned by VPN provider Surfshark, Incogni tackles data brokers on your behalf with a heavy focus on automation, enforcing removal requests using applicable data protection laws. Its service has amassed many positive customer reviews. You pay the equivalent of $7.99 per month with an annual plan during Incogni’s current sale.

Also: How to delete yourself from the internet in 2026

Alternatively, DeleteMe is another well-known specialist data removal service that offers individual, business, or family plans, starting at $8.60 per month. I would recommend DeleteMe if you want to submit custom removal requests.

I have drawn on extensive research, hands-on testing, and the ZDNET team’s years of experience in data removal practices and techniques to compile my recommendations. Below, you will find my other top choices for the best data removal services in 2026. 

The best data removal services of 2026


Surfshark’s Incogni is a great service for removing yourself from the internet and negotiating with data brokers, securing its position as my top pick for quick, automated removal. 

Why we like it: Once you’ve signed up, Incogni will send removal requests to a wide array of data brokers. The service enforces these requests in accordance with applicable privacy laws, including the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

The company states that most requests are processed within eight weeks; however, some may require your action. Incogni will also tackle shadow profiles on your behalf. Requests ignored by data brokers will be repeated. Continual requests are sent every 60 to 90 days to mitigate the risk of your information being collected, traded, and sold again. 

The company is focused on improving its custom removal services and expanding its data broker coverage, which now encompasses over 420 data brokers. A recent Deloitte audit confirmed the company’s coverage claims.

Also: Incogni review

Who it’s for: Everyone. Incogni boasts over 400 million data removals completed to date. It also has some of the best customer feedback of all my recommendations. 

Customers report benefits such as fewer spam calls and emails, and say their data is removed promptly. That said, some reports may not provide as much in-depth information as certain customers want.

If you’re unsure how much of your information needs to be removed from the internet, you can opt for the monthly plan and consider a clean-up once or twice a year. There is also a 30-day money-back guarantee, making this service an excellent try-before-you-buy option. 

Who should look elsewhere: This service is best suited for individuals, so businesses may want to explore other options or consider the dedicated Incogni Ironwall service.

Incogni offers monthly or annual plans. If you want to pay once a year, Incogni’s services will cost you the equivalent of $7.99 per month. Family plans start at $15.99 per month. 

Incogni also provides DIY guides if you’d prefer to tackle the task yourself. 

Incogni features: Data broker management | Data removal request follow-ups | Reports | Shadow profile detection | Progress tracking | User education resources, DIY guides | Individual and family plans | 30-day money-back guarantee | Trustpilot rating: 4.4



DeleteMe has earned many positive customer reviews, and it’s easy to see why. This user-friendly option will help you remove sensitive and personal information from online sources and data brokers while saving you time and effort. 

Why we like it: Being user-friendly is one of DeleteMe’s priorities, as well as maintaining human influence in data removal tasks. Once you’ve submitted your information, the organization will search for it online and send removal requests to third parties holding your data, including hundreds of websites and data brokers. Named brokers include Whitepages, Spokeo, and BeenVerified. 

Users can also explore other security features, including email and phone masking. In 2024, the service expanded beyond the US, Europe, and Canada. It recently said it is utilizing artificial intelligence to help tackle dark web and data broker data pools. 

In an interview with ZDNET, DeleteMe said it is also exploring vehicle-related data and record removal for its customers as an emerging issue for consumer privacy. Improved Google Street View blurring, Dark Web monitoring, and enhanced identity masking are also in the works. 

Who it’s for: Individuals looking for relatively quick data removal. You will receive a report outlining DeleteMe’s progress within a week. The company says that, on average, 15 public data listings belonging to a searchable subject via Google are removed within “days.” Many customers say the service exceeds expectations.

Who should look elsewhere: Businesses or individuals hoping to tackle data removal themselves. If you are in the latter category, you can also utilize DeleteMe’s guides for additional assistance. 

DeleteMe offers a range of plans. These include a subscription for one person, for one year at $8.60 per month, or two years at $6.67 per month. Other subscriptions include a two-person plan for two years, priced at $13.93 per month. Family plans begin at $27.87 per month on a two-year contract. 

You can ask for a refund, if necessary, before you receive your first report with DeleteMe’s money-back guarantee. 

DeleteMe features: Data broker management | Data deletion request handling | Scanning | User interface | Privacy reports | Custom removal requests | Email and phone masking | Opt-outs | Trustpilot rating: 4.3



Privacy Bee can track where your information is and act on your behalf to remove it from companies you don’t trust, as well as data brokers. 

Why we like it: The service checks and monitors search results to remove sensitive information. You can also download a browser extension that displays the relationship of your data with the sites you visit.

The company can also remove you from marketing databases to reduce the volume of spam and mass emails you receive, and take a proactive approach to handling your information. Privacy Bee will leverage applicable laws, including CCPA and GDPR, to enforce removal requests. 

Also: Most secure browsers for privacy

Who it’s for: Anyone who wants comprehensive data deletion with a focus on enforcing data removal from brokers and other third parties. There are also some interesting features, such as digital map house blurring and skilled help escalation for severe data exposures.

Customer reviews suggest that the service is secure and thorough.

Who should look elsewhere: Anyone who wants a budget-friendly option for data removal, as it can quickly become expensive. 

While the entry-level plan is now priced at around $8 per month, in line with some competitors, mass marketing opt-outs and house blurring only come with the Pro plan at $18 per month. If you need expedited data removal, the Privacy Bee Signature plan costs approximately $67 per month and adds analyst review of your data. 

Privacy Bee features: Company checks | Data deletion | Data broker management | Privacy browser extension | User dashboard | Mass marketing opt-outs | 24/7 monitoring | Search engine cleanup | Trustpilot rating: 3.1



DuckDuckGo has a longstanding and respected commitment to consumer privacy. It began as a search engine and later extended to a dedicated browser. Now, the company has released a VPN and additional privacy-related services, including a personal data removal service. 

Why we like it: You can take advantage of a VPN, data removal service, and ID protection in one package. 

While I have been a fan of DuckDuckGo for many years, this recommendation has a few caveats. Trustpilot ratings are low, and some users have complained about the VPN’s server selection and lack of advanced features. Still, you can use the VPN on up to five devices simultaneously.

There are issues that need ironing out, but if your focus is on choosing a company with a pedigree in user privacy, with data removal services as a bonus, check it out. I think that once the browser is more established — with recent upgrades cleaning up the browser and improving the user experience — we can expect drastic improvements overall in DuckDuckGo’s security packages.

Also: Best malware removal tools & best antivirus

Who it’s for: DuckDuckGo Privacy Pro is worth considering if you want an all-in-one option that combines a virtual private network (VPN), data removal service, and identity theft protection. 

Who should look elsewhere: Anyone outside of the US, the EU, or Canada, as this service is not currently fully available internationally. If you want data removal, you must be a US resident.

The combined service costs $10 per month or $100 for an annual plan. The first seven days are free.

DuckDuckGo Privacy Pro features: No-log search engine | VPN | Privacy browser | Data removal service | Identity theft restoration | Free trial | Trustpilot rating: 2.1



Kanary’s original service had a proven record of cleaning up your online information. Now geared toward a tech-savvy generation, Kanary has recently shifted from broad family plans and data removal offerings to Kanary Copilot, a mobile-first solution focused on automation. 

Why we like it: The company has pivoted to offering a free service to combat online stalking and doxxing. With the press of a button, this automated app will attempt to identify and address data leaks, process removal requests, and resolve security issues on your behalf. 

So, why is it free? The company says: “We don’t want cost to be a barrier, especially for younger people just establishing their online footprint. We believe online safety should not be limited to executives or celebrities.”

The service is new, but additional integrations will be added as the app is developed further. 

Who it’s for: Individuals who want to try out free, mobile-first data removal. Enterprise plans are available to limit exposures, but they are expensive. 

Who should look elsewhere: Anyone who isn’t a US resident, as the service is not yet available outside of the United States. The app is also only available on iOS.

Existing users will be able to transition from their old accounts this year. Customer feedback indicates that Kanary’s service is reliable, reports are clear and concise, and the service is valuable for the money. Let’s hope the same can be said for the new venture as it grows. 

Kanary Copilot features: Mobile user dashboard | Frequent scanning | Enterprise plans | Data removal request management | Strong security standards | Copilot | Combats doxxing 



Reputation Defender by Norton is a tailored service offered to individuals, professionals, executives, and businesses. 

Why we like it: This service differs from our other recommendations. It’s a personal offering that focuses on managing and cleaning up reputations. This could include online data removal and information deletion held by data brokers as well as people-search websites. 

Norton’s offering includes data management, reputation management, personal branding assistance, privacy alerts, regular scanning, and search result management. 

Also: Best password manager for business

Norton has recently rolled out a “reputation card,” available on its home page, which uses the Sentiment AI assistant to show you how others may view you online. It’s an interesting service, and as it is free, I recommend trying it out.

Who it’s for: Individuals who need tailored, custom protection for their reputation and associated assets, or need ongoing, custom protection for their business. The company offers a free consultation, and customers say the support team handles sensitive situations well.

Who should look elsewhere: Anyone who does not need individual consultancy and protection. If you’re looking for quick data removal and you don’t need advanced, expert help, seek another solution. 

Prices for Reputation Defender by Norton are available upon request, as the work is personalized to your circumstances. 

Reputation Defender features: Personal service | Reputation management | Personal branding | Search result monitoring | Data deletion | Personal consultation | Privacy threat reports | Tackles news articles | Trustpilot rating: 4.1



Choose this top data removal service…

If you want…

Incogni

My top pick for an affordable solution to protect yourself and your data. Incogni provides a one-stop-shop solution for data protection and managing data brokers. It requires no technical knowledge and is suitable for those who do not want to tackle requests manually.

DeleteMe

To have information removed from search engines. You submit the information you want removed, and DeleteMe will do the rest. DeleteMe is a great choice for in-depth data removal and custom requests.

Privacy Bee

To choose which companies you trust. Privacy Bee lets you select the companies you are comfortable with holding your data, as well as the organizations whose records you want your information deleted from. 

DuckDuckGo Privacy Pro

A combined service including a VPN, data removal, and identity theft restoration. It’s offered by a trustworthy company with a long history of protecting user privacy, although some issues still need to be ironed out.

Kanary Copilot

A free data removal service. While only available on iOS at the moment, you can take advantage of Kanary’s new, free app to tackle data leaks and automate data removal requests. It might not be as thorough as paid solutions, but it is a start.

Reputation Defender by Norton

A reputation manager. Reputation Defender is best suited to high-profile individuals and businesses that need constant data leak monitoring and reputation protection. You will need to reach out to them directly to discuss your needs.



When you’re on the lookout for a service able to reduce your online footprint, consider the following factors:

  • Region: Some data removal services work best in specific regions, whereas others can cross the divides caused by different data protection laws. Every provider I’ve listed can handle some degree of US-based requests, but if you want to go further afield, ensure the service you are interested in provides this coverage. 
  • Advanced protection: Do you just want a data removal service, or are other security features important to you? Consider if you also want a service known for reducing spam, for example, or one that also provides you with data breach alerts or works well with a VPN. 
  • Reporting: Do you want alerts on each individual data broker tackled, or do you want weekly check-ins? Take a look at the company’s typical practices. 
  • Free trials and features: A free report may help you ascertain the scale of the problem and just how public your information is. Think about signing up for a free report before you make a purchase decision. 
  • Budget: Many of us have numerous subscriptions to everything from streaming services to insurance, and so think about whether a data removal service will work in your monthly or annual budget. 



When choosing the best services for deleting yourself from the internet, I considered factors such as:

  • Price: While selecting the best services, I aimed to offer a range of options, including affordable subscriptions and plans. Depending on your needs, signing up can be a long-term investment, so I have included a variety of data removal solutions with different price points and contract periods. 
  • Removal: I focused on services that can scan, monitor, and check online databases and data broker repositories for your personal information and then work on your behalf to have it removed. After all, if you’re paying for a service, you should expect the challenging parts of the process to be handled for you.
  • Protection: The services listed above also include protective features that may reduce the likelihood of personalized spam calls and phishing emails, as well as trolling, stalking, or identity theft. You may also want, for example, to combine automatic data removal with data breach alerts and antivirus software. 
  • Frequency: My recommended services can conduct frequent scans on your behalf. While you may want one check and deletion session, our information is constantly changing hands, which means that your data could reappear online unless removals are monitored and enforced.
  • Reports: The services I recommend will often provide reports with each scan to keep you updated on where your data was found, what has been deleted, and potentially what your next steps should be. 
  • Free trials, reports, and features: I have included data removal solutions that offer free reports and snapshots of your data exposure, allowing you to make the right choice before signing up for a plan. While free trials are uncommon, some of my recommendations offer free, limited plans or trial periods.



Latest information on data removal services

  • Canadian business process outsourcing company Telus Digital confirmed it faced a security incident after threat actors claimed to have stolen nearly 1 petabyte of data in a multi-month breach.
  • A whistleblower claims that the US Department of Government Efficiency (DOGE) violated the law by saving copies of Social Security Numbers to a thumb drive with plans to share the information with his private employer.
  • Bell Ambulance, Inc., an emergency medical transport company based in Milwaukee, Wisconsin, disclosed a data breach after an unauthorized individual accessed data within its network, impacting over 200,000 people.
  • Incogni released new research exploring the data-hungry practices of popular apps and governments.

You can take numerous steps to stop your name from appearing in internet search results on engines including Google and Bing, but it can be a complicated and time-consuming process. Below are some steps to help you get started.

  • Use a search engine: Your first action should be to type in your name, nicknames, and online handles into search engines. This step will reveal the information that anyone can easily find on you and can help you plan your next steps like which companies to contact for data removal or what accounts to delete.
  • Lock down or delete social media accounts: Deletion is the nuclear option, but most social media platforms will have the option to stop your profile from appearing in search engine results. As our profiles — even if they are publicly limited — contain our photos, full name, and more, removing them from search engine queries can help reduce our online footprint. For step-by-step guides, visit Facebook, Instagram, and X.
  • Delete old, unused accounts: Whether it’s shopping sites, social media networks, or forums, each service you use — or have used — may tie your online identity to your personally identifiable information (PII). All of this could be at risk if a data breach occurs. If you do not want to use a dedicated service, consider going through your email and password managers to find active accounts. You will need to access them and request removal manually.
  • Clean up forum posts: Forums can often be overlooked, but if someone finds out the handles you commonly use, they may be able to find content connected to you. This could now be utterly irrelevant to the person you are today or embarrassing if exposed. Delete old forum posts and preferably remove your accounts entirely. 
  • Contact webmasters: If you have old accounts that do not have auto-delete features, contact webmasters directly to have your profiles and data deleted. This process will likely be more straightforward if you are in an area covered by regulations such as the E.U.’s GDPR.
  • Request that people finder websites delete your information: People finder websites can be used as “search engines” to look up someone based on their name, phone number, and other personal information, which can be a privacy nightmare. Opting out and forcing the removal of your information from these organizations, which may buy it from data brokers, can be a challenging process to perform manually as it may require contacting each service individually to negotiate. If these organizations prove difficult, deletion could also require understanding applicable privacy and data protection laws to enforce your requests. Consider using a service such as Incogni or DeleteMe to do the legwork for you.
  • Deactivate email accounts: Your email account tends to be the core platform that ties your digital profile together, but once it’s gone, it’s gone. When you are ready, delete your email accounts, which will break the common threads between your online services. Only take this step if you are sure.



You can, but the process is limited, and your request may be rejected if the company doesn’t believe there are grounds for removal. 

You will need to contact Google using this form, requesting to remove information you see or asking to prevent information from appearing in Google search results. Removal requests can also be made for:

  • Exposed personal identifiable information (PII)
  • Explicit images, including adult content
  • Involuntary, fake pornography
  • Images of minors
  • Information from websites with “exploitive” removal practices, such as those that demand payment

In some cases — for example, a request to remove links to law enforcement statements or media articles concerning an individual and a prosecution — Google may refuse, as such information could be in the public interest. If Google refuses, it will provide a reason for its decision.



There are numerous ways you can protect your identity online. Experts recommend securing personal information, locking down your social media accounts to friends and connections only, using antivirus software, regularly updating programs and software, and changing your passwords regularly. 

Furthermore, if you find “clones” of your identity such as a fake Facebook or dating app profile, ensure you report the fake account to associated online services.

If you suspect your data has been leaked online, use the Have I Been Pwned service to see if you have been involved in any data breaches.



You’ll have to follow some basic steps to remove your name from social media. First, change your name to a nickname or surname that isn’t linked to your true name. This should ensure that photos or linked content will also change.

Next, switching all of your content to private can keep it away from search engines and individual search queries. You can also delete all of your accounts and potentially file a Google request to remove content connected to you. This will not always be accepted, especially if such information is considered in the public interest or a deletion request is believed to be without true cause.



The majority of search engines will log your search queries, and some will use this information to tailor ads and recommendations. If you want to keep your search queries hidden, I recommend DuckDuckGo, which offers a free search engine that does not log your activities; you can also use its browser for extra privacy. 

DuckDuckGo is a privacy-focused company that does not store your information and also allows you to anonymously use modern online tools including AI chatbots.



The easiest way is to visit Troy Hunt’s HaveIBeenPwned to see the vast troves of data posted online due to company data breaches. Once you have entered your email address, the service will reveal whether you have been involved in any third-party information leaks. You should check every so often to ensure you are aware of any new data breaches.
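HaveIBeenPwned also runs a companion Pwned Passwords service with a free, publicly documented range API built on k-anonymity: you send only the first five characters of a password's SHA-1 hash, and match the returned suffixes locally, so the password itself never leaves your machine. Here is a minimal sketch in Python; `hash_split` and `pwned_count` are illustrative helper names, not part of any official client library.

```python
import hashlib
import urllib.request

def hash_split(password: str) -> tuple[str, str]:
    """Return the 5-character SHA-1 prefix and the remaining suffix,
    uppercased as the Pwned Passwords API expects."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Query the range endpoint with only the hash prefix, then scan the
    returned "SUFFIX:COUNT" lines locally for a match."""
    prefix, suffix = hash_split(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0  # suffix not listed: password absent from known breach corpora
```

A non-zero return means the password has appeared in breach data and should be retired; note that checking an email address for breaches (as described above) is a separate HIBP endpoint that requires an API key, so it is easiest done via the website.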



There’s no easy way, and unfortunately, it is often outside of our control.

Instead of chasing removal, the best thing you can do is to see what information has been leaked and make it redundant. For example, if an email and password combination for an online service has been exposed, change it immediately for that service — as well as any other platforms using the same set — and never use that combination again. Using multi-factor authentication can also prevent your online accounts from being compromised. 



It is worth it, as online threats, phishing, and impersonation are now permanent problems in our lives. This may be especially true for anyone who grew up oversharing, before we understood that once something is online, it is usually there forever. If you want to mask problematic information and cut the links connecting you across different online services, investing in a personal data removal service — at least for a while — is worth the cost. 



Latest updates

  • March 2026: In ZDNET’s March update, we confirmed our top picks and added more industry news.
  • September 2025: In ZDNET’s September update, we added a visualization chart to showcase data removal service comparisons. 
  • August 2025: In ZDNET’s August update, we performed editorial changes and added new information related to our top and alternative product picks. We also removed EasyOptOuts from our alternatives.
  • July 2025: In ZDNET’s July update, we performed editorial changes and news updates.
  • June 2025: In ZDNET’s June update, we performed editorial changes and updated information on pricing and deals.
  • April 2025: In our April update, we overhauled our recommended services and completed substantial copy changes, including a new news section and notes on tariffs.

Other data removal services worth considering


Optery is a great option if you want a free report before signing up for a data removal service, which includes a scan of your Google search results. If you proceed, plans with data removal included start at $3.25 per month. Business plans are also available. Use the code Xi8TJRBw for a 20% discount during Labor Day sales.



Norton’s Privacy Monitor service focuses on removing your information from people search websites, including your name, contact information, and date of birth, with support agents managing the process on your behalf. The service is available as an add-on to Norton 360 with LifeLock plans.



YourDigitalRights.org is a worthwhile resource if you want to undertake data removal yourself but need help with the first steps. It costs you nothing except your time, which you will spend searching for organizations and sending them data removal requests manually. 



Onerep is an AI-backed data removal service. Conducting an initial scan to check where your name may appear online is free. Onerep can then attempt to remove information from data brokers. Data points targeted for removal include personal profiles, addresses, and income information. Pricing begins at $8.33 per month.




We hope that you’ve found our guide on the best data removal services helpful. If you’re looking for more security-related recommendations, we’ve also listed the best antivirus apps of the year for personal protection, alongside the best identity theft protection and credit monitoring services around. 






In May 2024, we released Part I of this series, in which we discussed agentic AI as an emerging technology enabling a new generation of AI-based hardware devices and software tools that can take actions on behalf of users. It turned out we were early – very early – to the discussion, with several months elapsing before agentic AI became as widely known and discussed as it is today. In this Part II, we return to the topic to explore legal issues concerning user liability for agentic AI-assisted transactions and open questions about existing legal frameworks’ applicability to the new generation of AI-assisted transactions.

Background: Snapshot of the Current State of “Agents”[1]

“Intelligent” electronic assistants are not new—the original generation, such as Amazon’s Alexa, have been offering narrow capabilities for specific tasks for more than a decade. However, as OpenAI’s CEO Sam Altman commented in May 2024, an advanced AI assistant or “super-competent colleague” could be the killer app of the future. Later, Altman noted during a Reddit AMA session: “We will have better and better models. But I think the thing that will feel like the next giant breakthrough will be agents.” A McKinsey report on AI agents echoes this sentiment: “The technology is moving from thought to action.” Agentic AI represents not only a technological evolution, but also a potential means to further spread (and monetize) AI technology beyond its current uses by consumers and businesses. Major AI developers and others have already embraced this shift, announcing initiatives in the agentic AI space. For example:  

  • Anthropic announced an updated frontier AI model in public beta capable of interacting with and using computers like human users;
  • Google unveiled Gemini 2.0, its new AI model for the agentic era, alongside Project Mariner, a prototype leveraging Gemini 2.0 to perform tasks via an experimental Chrome browser extension (while keeping a “human in the loop”);
  • OpenAI launched a “research preview” of Operator, an AI tool that can interface with computers on users’ behalf, and launched beta feature “Tasks” in ChatGPT to facilitate ongoing or future task management beyond merely responding to real time prompts;
  • LexisNexis announced the availability of “Protégé,” a personalized AI assistant with agentic AI capabilities;
  • Perplexity recently rolled out “Shop Like a Pro,” an AI-powered shopping recommendation and buying feature that allows Perplexity Pro users to research products and, for those merchants whose sites are integrated with the tool, purchase items directly on Perplexity; and
  • Amazon announced Alexa+, a new generation of Alexa that has agentic capabilities, including enabling Alexa to navigate the internet and execute tasks, as well as Amazon Nova Act, an AI model designed to perform actions within a web browser.

Beyond these examples, other startups and established tech companies are also developing AI “agents” in this country and overseas (including the invite-only release of Manus AI by Butterfly Effect, an AI developer in China). As a recent Microsoft piece speculates, the generative AI future may involve a “new ecosystem or marketplace of agents,” akin to the current smartphone app ecosystem.  Although early agentic AI device releases have received mixed reviews and seem to still have much unrealized potential, they demonstrate the capability of such devices to execute multistep actions in response to natural language instructions.

Like prior technological revolutions—personal computers in the 1980s, e-commerce in the 1990s and smartphones in the 2000s—the emergence of agentic AI technology challenges existing legal frameworks. Let’s take a look at some of those issues – starting with basic questions about contract law.

Note: This discussion addresses general legal issues with respect to hypothetical agentic AI devices or software tools/apps that have significant autonomy. The examples provided are illustrative and do not reflect any specific AI tool’s capabilities.

Automated Transactions and Electronic Agents

Electronic Signatures Statutory Law Overview

A foundational legal question is whether transactions initiated and executed by an AI tool on behalf of a user are enforceable. Despite the newness of agentic AI, the legal underpinnings of electronic transactions are well-established. The Uniform Electronic Transactions Act (“UETA”), which has been adopted by every state except New York (as noted below) as well as the District of Columbia, the federal E-SIGN Act, and the Uniform Commercial Code (“UCC”) together serve as the legal framework for the use of electronic signatures and records, ensuring their validity and enforceability in interstate commerce. The fundamental provisions of UETA are Sections 7(a)-(b), which provide: “(a) A record or signature may not be denied legal effect or enforceability solely because it is in electronic form; (b) A contract may not be denied legal effect or enforceability solely because an electronic record was used in its formation.”

UETA is technology-neutral and “applies only to transactions between parties each of which has agreed to conduct transactions by electronic means” (allowing the parties to choose the technology they desire). In the typical e-commerce transaction, a human user selects products or services for purchase and proceeds to checkout, which culminates in the user clicking “I Agree” or “Purchase.”  This click—while not a “signature” in the traditional sense of the word—may be effective as an electronic signature, affirming the user’s agreement to the transaction and to any accompanying terms, assuming the requisite contractual principles of notice and assent have been met.

At the federal level, the E-SIGN Act (15 U.S.C. §§ 7001-7031) (“E-SIGN”) establishes the same basic tenets regarding electronic signatures in interstate commerce and contains a reverse preemption provision, generally allowing states that have passed UETA to have UETA take precedence over E-SIGN.  If a state does not adopt UETA but enacts another law regarding electronic signatures, its alternative law will preempt E-SIGN only if the alternative law specifies procedures or requirements consistent with E-SIGN, among other things.

However, while UETA has been adopted by 49 states and the District of Columbia, it has not been enacted in New York. Instead, New York has its own electronic signature law, the Electronic Signatures and Records Act (“ESRA”) (N.Y. State Tech. Law § 301 et seq.). ESRA generally provides that “An electronic record shall have the same force and effect as those records not produced by electronic means.” According to New York’s Office of Information Technology Services, which oversees ESRA, “the definition of ‘electronic signature’ in ESRA § 302(3) conforms to the definition found in the E-SIGN Act.” Thus, as one New York state appellate court stated, “E-SIGN’s requirement that an electronically memorialized and subscribed contract be given the same legal effect as a contract memorialized and subscribed on paper…is part of New York law, whether or not the transaction at issue is a matter ‘in or affecting interstate or foreign commerce.’”[2]

Given US states’ wide adoption of the UETA model statute, with minor variations, this post will principally rely on its provisions in analyzing certain contractual questions with respect to AI agents, particularly because E-SIGN and UETA work toward similar aims in establishing the legal validity of electronic signatures and records and because E-SIGN expressly permits states to supersede the federal act by enacting UETA. As for New York’s ESRA, courts have already noted that the New York legislature incorporated the substantive terms of E-SIGN into New York law, suggesting that ESRA is generally harmonious with the other laws’ purpose of ensuring that electronic signatures and records have the same force and effect as traditional ones.

Electronic “Agents” under the Law

Beyond affirming the enforceability of electronic signatures and transactions where the parties have agreed to transact with one another electronically, Section 2(2) of UETA also contemplates “automated transactions,” defined as those “conducted or performed, in whole or in part, by electronic means or electronic records, in which the acts or records of one or both parties are not reviewed by an individual.” Central to such a transaction is an “electronic agent,” which Section 2(6) of UETA defines as “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual.” Under UETA, in an automated transaction, a contract may be formed by the interaction of “electronic agents” of the parties or by an “electronic agent” and an individual. E-SIGN similarly contemplates “electronic agents,” and states: “A contract or other record relating to a transaction in or affecting interstate or foreign commerce may not be denied legal effect, validity, or enforceability solely because its formation, creation, or delivery involved the action of one or more electronic agents so long as the action of any such electronic agent is legally attributable to the person to be bound.”[3] Under both of these definitions, agentic AI tools—which are increasingly able to initiate actions and respond to records and performances on behalf of users—arguably qualify as “electronic agents” and thus can form enforceable contracts under existing law.[4]

AI Tools and E-Commerce Transactions

Given this existing body of statutory law enabling electronic signatures, from a practical perspective this may be the end of the analysis for most e-commerce transactions. If I tell an AI tool to buy me a certain product and it does so, then the product’s vendor, the tool’s provider, and I might assume—with the support of UETA, E-SIGN, the UCC, and New York’s ESRA—that the vendor and I (via the tool) have formed a binding agreement for the sale and purchase of the good. That will be the end of it unless a dispute arises about the good or the payment (e.g., the product is damaged or defective, or my credit card is declined), in which case the AI tool isn’t really relevant.

But what if the transaction does not go as planned for reasons related to the AI tool? Consider the following scenarios:

  • Misunderstood Prompts: The tool misinterprets a prompt that would be clear to a human but is confusing to its model (e.g., the user’s prompt states, “Buy two boxes of 101 Dalmatians Premium dog food,” and the AI tool orders 101 two-packs of dog food marketed for Dalmatians).
  • AI Hallucinations: The user asks for something the tool cannot provide or does not understand, triggering a hallucination in the model with unintended consequences (e.g., the user asks the model to buy stock in a company that is not public, so the model hallucinates a ticker symbol and buys stock in whatever real company that symbol corresponds to).
  • Violation of Limits: The tool exceeds a pre-determined budget or financial parameter set by the user (e.g., the user’s prompt states, “Buy a pair of running shoes under $100” and the AI tool purchases shoes from the UK for £250, exceeding the user’s limit).
  • Misinterpretation of User Preference: The tool misinterprets a prompt due to lack of context or misunderstanding of user preferences (e.g., the user’s prompt states, “Book a hotel room in New York City for my conference,” intending to stay near the event location in lower Manhattan, and the AI tool books a room in Queens because it prioritizes price over proximity without clarifying the user’s preference).

Disputes like these begin with a conflict between the user and a vendor—the AI tool may have been effective in creating a contract between the user and the vendor, and the user may then bear legal responsibility for that contract. But the user may in turn seek indemnity or similar rights against the developer of the AI tool.

Of course, most developers will try to avoid these situations by requiring user approvals before purchases are finalized (i.e., “human in the loop”). But as desire for efficiency and speed increases (and AI tools become more autonomous and familiar with their users), these inbuilt protections could start to wither away, and users that grow accustomed to their tool might find themselves approving transactions without vetting them carefully. This could lead to scenarios like the above, where the user might seek to void a transaction or, if that fails, even try to avoid liability for it by seeking to shift his or her responsibility to the AI tool’s developer.[5] Could this ever work? Who is responsible for unintended liabilities related to transactions completed by an agentic AI tool?
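The approval requirement described above amounts to a single checkpoint in an agent’s execution path. A minimal, purely hypothetical sketch of such a gate (all names are illustrative, not any real agent framework’s API), including the rubber-stamping behavior that effectively removes the safeguard:

```python
# Hypothetical "human in the loop" checkpoint: the agent may plan a
# transaction autonomously, but execution requires explicit approval.
# Names are illustrative only.

def execute_transaction(transaction: dict, approver) -> str:
    """Run `transaction` only if the approver callback returns True."""
    summary = f"{transaction['item']} for ${transaction['amount']:.2f}"
    if not approver(summary):
        return f"DECLINED: {summary}"
    # In a real tool, the purchase API call would happen here.
    return f"EXECUTED: {summary}"

# An auto-approving callback models the risk discussed above: a user who
# rubber-stamps every request has, in effect, removed the safeguard.
rubber_stamp = lambda summary: True
print(execute_transaction({"item": "dog food", "amount": 42.0}, rubber_stamp))
```

The legal significance of the checkpoint is that it creates a moment of human review; once the `approver` always returns True, the transaction is functionally as automated as one with no review at all.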

Sources of Law Governing AI Transactions

AI Developer Terms of Service

As stated in UETA’s Prefatory Note, the purpose of UETA is “to remove barriers to electronic commerce by validating and effectuating electronic records and signatures.” Yet, the Note cautions, “It is NOT a general contracting statute – the substantive rules of contracts remain unaffected by UETA.”  E-SIGN contains a similar disclaimer in the statute, limiting its reach to statutes that require contracts or other records be written, signed, or in non-electronic form (15 U.S.C. §7001(b)(2)). In short, UETA, E-SIGN, and the similar UCC provisions do not provide contract law rules on how to form an agreement or the enforceability of the terms of any agreement that has been formed.

Thus, in the event of a dispute, terms of service governing agentic AI tools will likely be the primary source to which courts will look to assess how liability might be allocated. As we noted in Part I of this post, early-generation agentic AI hardware devices generally include terms that not only disclaim responsibility for the actions of their products or the accuracy of their outputs, but also seek indemnification against claims arising from their use. Thus, absent any express customer-favorable indemnities, warranties or other contractual provisions, users might generally bear the legal risk, barring specific legal doctrines or consumer protection laws prohibiting disclaimers or restrictions of certain claims.[6]

But what if the terms of service are nonexistent, don’t cover the scenario, or—more likely—are unenforceable? Unenforceable terms for online products and services are not uncommon, for reasons ranging from “browsewrap” terms being too hidden to specific provisions being unconscionable. What legal doctrines would control in such a scenario?

The Backstop: User Liability under UETA and E-SIGN

Where would the parties stand without the developer’s terms? E-SIGN allows for the effectiveness of actions by “electronic agents” “so long as the action of any such electronic agent is legally attributable to the person to be bound.” This provision seems to bring the issue back to the terms of service governing a transaction or to general principles of contract law. But again, what if the terms of service are nonexistent or don’t cover a particular scenario, such as those listed above? As it did with the threshold question of whether AI tools could form contracts in the first place, UETA appears to offer a position that could be an attractive starting place for a court. Moreover, in the absence of express language under New York’s ESRA, a New York court might apply E-SIGN (which contains an “electronic agent” provision) or look to UETA, its commentary, and its body of precedent for insight if it cannot find on-point binding authority—which would not be surprising, given that these technology-driven scenarios have only recently become possible.

UETA generally attributes responsibility to users of “electronic agents”, with the prefatory note explicitly stating that the actions of electronic agents “programmed and used by people will bind the user of the machine.” Section 14 of UETA (titled “Automated Transaction”) reinforces this principle, noting that a contract can be formed through the interaction of “electronic agents” “even if no individual was aware of or reviewed the electronic agents’ actions or the resulting terms and agreements.” Accordingly, when automated tools such as agentic AI systems facilitate transactions between parties who knowingly consent to conduct business electronically, UETA seems to suggest that responsibility defaults to the users—the persons who most immediately directed or initiated their AI tool’s actions. This reasoning treats the AI as a user’s tool, consistent with the other UETA Comments (e.g., “contracts can be formed by machines functioning as electronic agents for parties to a transaction”).

However, different facts or technologies could lead to alternative interpretations, and ambiguities remain. For example, Comment 1 to UETA Section 14 asserts that the lack of human intent at the time of contract formation does not negate enforceability in contracts “formed by machines functioning as electronic agents for parties to a transaction” and that “when machines are involved, the requisite intention flows from the programming and use of the machine” (emphasis added).

This explanatory text has a couple of issues. First, it is unclear what constitutes “programming,” and it seems to presume that the human intention at the programming step (whatever that may be) is more or less the same as the human intention at the use step[7], but this may not always be the case with AI tools. For example, an AI tool could conceivably be programmed by its developer to put the developer’s interests above the users’, for example by making purchases from a particular preferred e-commerce partner even when that vendor’s offerings are not the best value for the end user. This concept may not be so far-fetched, as existing GenAI developers have entered into content licensing deals with online publishers to obtain the right for their chatbots to generate outputs featuring licensed content, with links to such sources. Of course, there is a difference between a chatbot offering links to relevant licensed news sources that are accurate (but not displaying appropriate content from other publishers) and an agentic chatbot entering into unintended transactions or spending the user’s funds in unwanted ways. This discrepancy in intention alignment might not be enough to allow liability for a transaction to shift from a user to a programmer, but it is not hard to see how larger misalignments might lead to thornier questions, particularly in litigation, when a court might scrutinize the enforceability of an AI vendor’s terms (under the unconscionability doctrine, for example).

Second, UETA does not contemplate the possibility that the AI tool might have enough autonomy and capability that some of its actions might properly be characterized as the result of its own intent. Looking at UETA’s definition of “electronic agent,” the commentary notes that “As a general rule, the employer of a tool is responsible for the results obtained by the use of that tool since the tool has no independent volition of its own.” But technology has advanced considerably in the intervening decades, and depending on the tool, an autonomous AI tool might one day have considerable independent volition (and further UETA commentary admits the possibility of a future with more autonomous electronic agents). Indeed, modern AI researchers were contemplating this possibility even before the rapid technological progress that began with ChatGPT.

Still, Section 10 of UETA may be relevant to some of the scenarios from our bulleted selection of AI tool mishaps listed above, including misunderstood prompts or AI hallucinations. UETA Section 10 (titled “Effect of Change or Error”) outlines the possible actions a party may take when discovering human or machine errors or when “a change or error in an electronic record occurs in a transmission between parties to a transaction.” The remedies outlined in UETA depend on the circumstances of the transaction and whether the parties have agreed to certain security procedures to catch errors (e.g., a “human in the loop” confirming an AI-completed transaction) or whether the transaction involves an individual and a machine.[8]  In this way, the guardrails integrated into a particular AI tool or by the parties themselves play a role in the liability calculus. The section concludes by stating that if none of UETA’s error provisions apply, then applicable law governs, which might include the terms of the parties’ contract and the law of mistake, unconscionability and good faith and fair dealing.
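The “security procedure” contemplated by Section 10 can be as simple as an agreed integrity check on the transmitted record. A minimal, purely illustrative Python sketch, assuming the parties have agreed to a hash-based procedure (the procedure and all names are our assumptions; UETA does not specify any particular mechanism), using the 100-versus-1,000-widgets example from the UETA commentary:

```python
# Illustrative "security procedure" in the Section 10 sense: sender and
# receiver agree to exchange a digest of the order record, so that a
# change in transmission (100 widgets becoming 1000) is detectable.
import hashlib
import json

def digest(record: dict) -> str:
    """Deterministic SHA-256 digest of an order record."""
    canonical = json.dumps(record, sort_keys=True)  # key order must not matter
    return hashlib.sha256(canonical.encode()).hexdigest()

sent = {"item": "widgets", "quantity": 100}
received = {"item": "widgets", "quantity": 1000}  # altered in transmission

# A receiver who checks the agreed digest detects the change; one who
# skips the check bears the risk under Section 10's allocation.
assert digest(sent) != digest(received)
```

The point of the sketch is the legal allocation it models: the party who follows the agreed procedure can detect (and avoid the effect of) the error, while the party who skips it bears the risk.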

* * *

Thus, along an uncertain path we circle back to where we started: the terms of the transaction and general contract law principles and protections. However, not all roads lead to contract law. In our next installment in this series, we will explore the next logical source of potential guidance on AI tool liability questions: agency law.  Decades of established law may now be challenged by a new sort of “agent” in the form of agentic AI…and a new AI-related lawsuit foreshadows the issues to come.


[1] In keeping with common practice in the artificial intelligence industry, this article refers to AI tools that are capable of taking actions on behalf of users as “agents” (in contrast to more traditional AI tools that can produce content but not take actions). However, note that the use of this term is not intended to imply that these tools are “agents” under agency law.

[2] In addition, the UCC has provisions consistent with UETA and E-SIGN providing for the use of electronic records and electronic signatures for transactions subject to the UCC. The UCC does not require the agreement of the parties to use electronic records and electronic signatures, as UETA and E-SIGN do.

[3] Under E-SIGN, “electronic agent” means “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part without review or action by an individual at the time of the action or response.”

[4] It should be noted that New York’s ESRA does not expressly provide for the use of “electronic agents,” yet does not prohibit them either.  Reading through ESRA and the ESRA regulation, the spirit of the law could be construed as forward-looking and seems to suggest that it supports the use of automated systems and electronic means to create legally binding agreements between willing parties. Looking to New York precedent, one could also argue that E-SIGN, which contains provisions about the use of “electronic agents”, might also be applicable in certain circumstances to fill the “electronic agent” gap in ESRA. For example, the ESRA regulations (9 CRR-NY § 540.1) state: “New technologies are frequently being introduced. The intent of this Part is to be flexible enough to embrace future technologies that comply with ESRA and all other applicable statutes and regulations.”  On the other side, one could argue that certain issues surrounding “electronic agents” are perhaps more unsettled in New York.  Still, New York courts have found ESRA consistent with E-SIGN.  

[5] Since AI tools are not legal persons, they could not be liable themselves (unlike, for example, a rogue human agent could be in some situations). We will explore agency law questions in Part III.

[6] Once agentic AI technology matures, it is possible that certain user-friendly contractual standards might emerge as market participants compete in the space. For example, as we wrote about in a prior post, in 2023 major GenAI providers rolled out indemnifications to protect their users from third-party claims of intellectual property infringement arising from GenAI outputs, subject to certain carve-outs.

[7] The electronic “agents” in place at the time of UETA’s passage might have included basic e-commerce tools or EDI (Electronic Data Interchange), which is used by businesses to exchange standardized documents, such as purchase orders, electronically between trading partners, replacing traditional methods like paper, fax, mail or telephone. Electronic tools are generally designed to explicitly perform according to the user’s intentions (e.g., clicking on an icon will add this item to a website shopping cart or send this invoice to the customer) and UETA, Section 10, contains provisions governing when an inadvertent or electronic error occurs (as opposed to an abrogation of the user’s wishes).

[8] For example, UETA Section 10 states that if a change or error occurs in an electronic record during transmission between parties to a transaction, the party who followed an agreed-upon security procedure to detect such changes can avoid the effect of the error, if the other party who didn’t follow the procedure would have detected the change had they complied with the security measure; this essentially places responsibility on the party who failed to use the agreed-upon security protocol to verify the electronic record’s integrity.

Comments to UETA Section 10 further explain the context of this section: “The section covers both changes and errors. For example, if Buyer sends a message to Seller ordering 100 widgets, but Buyer’s information processing system changes the order to 1000 widgets, a “change” has occurred between what Buyer transmitted and what Seller received. If on the other hand, Buyer typed in 1000 intending to order only 100, but sent the message before noting the mistake, an error would have occurred which would also be covered by this section.”  In the situation where a human makes a mistake when dealing with an electronic agent, the commentary explains that “when an individual makes an error while dealing with the electronic agent of the other party, it may not be possible to correct the error before the other party has shipped or taken other action in reliance on the erroneous record.”


