Acer Swift 16 AI (2026) Review: I Love the Giant OLED Display, but Am Less Enamored With the Oversize Touchpad


Acer Swift 16 AI laptop on a marble coffee table in front of a gray sofa

Pros

  • Giant, gorgeous 16-inch OLED display
  • Incredibly thin and light for its size
  • Strong overall performance from Intel Panther Lake CPU
  • Quiet and cool operator

Cons

  • Poor audio output from underpowered speakers
  • Huge haptic touchpad is so big that it gets in the way
  • Some flex to the thin aluminum top and bottom panels
  • Cannot expand memory or storage

The Acer Swift 16 AI gets a bump from Intel Lunar Lake on last year’s model to Panther Lake this year. While application and especially graphics performance have improved, pricing has also gone up thanks to the global RAM shortage, a problem not unique to Acer’s laptops. Pricing for this year’s models is near the point where the benefit of the integrated Intel Arc B390 GPU with its 12 Xe cores starts to lose its shine, because laptops with dedicated Nvidia RTX graphics cost roughly the same or not much more.

The other change Acer made to the Swift 16 AI is adding a gigantic haptic touchpad that comes with pen support and an included pen. I’m generally a big fan of large haptic touchpads, but the Swift 16 AI’s is a case of too much of a good thing. And my biggest criticism of last year’s model still applies to this year’s version: the speakers stink. That’s a shame given the entertainment prospects of the roomy 16-inch OLED display.

The Swift 16 AI remains a great work laptop. The huge, 16-inch display provides plenty of space for multitasking productivity, and its strong color performance, combined with the very capable integrated Panther Lake GPU, also lends the system some appeal for creators looking for a big-screen laptop with a thin design and easy carrying weight. At or near the price of my test system, however, you can find a 16-inch OLED laptop backed by RTX graphics. For creators, I recommend the Lenovo Yoga Pro 9i 16 Aura Edition, which offers a 16-inch OLED display with RTX 5050 graphics for $1,950 at Best Buy.

Acer Swift 16 AI (SF16-71T-73P1)

Price as reviewed $1,800
Display size/resolution 16-inch 2880×1800 120Hz touch OLED
CPU Intel Core Ultra X7 358H
Memory 32GB LPDDR5-9600
Graphics Intel Arc B390 (12 Xe3 cores)
Storage 1TB SSD
Ports USB-C Thunderbolt 4 (x2), USB-A 3.2 (x2), HDMI 2.1, microSD card slot, combo audio
Networking Wi-Fi 7 and Bluetooth 6.0
Operating system Windows 11 Home
Weight 3.3 pounds (1.5 kg)

Acer sells three models of the Swift 16 AI, and my test model sits in the middle of the series. It costs $1,800 at Acer and features an Intel Core Ultra X7 358H CPU, 32GB of RAM, Intel Arc B390 graphics and a 1TB SSD. It’s built around a 16-inch OLED display with a 2,880×1,800-pixel resolution, a smooth 120Hz refresh rate and touch support.

The entry-level model costs $1,600 at Acer and is currently discounted to $1,550 at Best Buy. It’s the same as my test model, except it has half the memory at 16GB. It’s the only one of the three currently available at Best Buy.

The top-end model, which Acer lists at $1,199, is otherwise the same as my test model but bumps you up to a Core Ultra X9 388H processor.

The Acer Swift 16 AI starts at £1,599 in the UK. I found a product page for the Swift 16 AI on Acer’s Australia site, but no pricing was available.

Acer Swift 16 AI top cover in the old, dull gray color

Acer made a late change to the Swift 16 AI, swapping the dull gray chassis with silver accents that I tested for a darker charcoal gray color and gold accents.

Matt Elliott/CNET

Color change for the better

Acer sent me a Swift 16 AI in a rather drab gray color with silver accents, but has since made a change in the design — and it looks to my eye that it’s a change for the better. Instead of safe corporate gray, the 2026 models of the Swift 16 AI will come outfitted in a darker gray, almost charcoal chassis with gold accents and an Acer logo on the top cover. From photos, the updated color scheme adds a little more personality to the system — less corporate and more consumer.

Acer Swift 16 AI laptop in the updated charcoal and gold color scheme against a light blue background

The Acer Swift 16 AI shown in its updated charcoal and gold color scheme.

Matt Elliott/CNET

Like last year’s model, this year’s edition boasts a thin, all-aluminum chassis that’s exceptionally light for its size. It weighs just 3.3 pounds, which is slightly lighter than last year’s Swift 16 AI, which weighed 3.4 pounds. This year’s version weighs the same as the 15-inch MacBook Air and has a larger screen. 

As much as I like being able to slip the Swift 16 AI in my laptop bag — and it comes with a protective sleeve — the laptop feels almost too thin. Or at least the aluminum material used for the laptop feels too thin. There’s some flex in the top and bottom panels that I’d be more willing to accept in, say, Acer’s budget Aspire series, but I’m looking for a finer fit and finish when the price approaches $2,000. The keyboard deck, which lacks the uninterrupted aluminum expanses of the top and bottom panels, feels more rigid.

Acer Swift 16 AI keyboard deck with updated color scheme

The Acer Swift 16 AI’s design will feature a much darker gray than the dull gray color of the unit I received.

Matt Elliott/CNET

I also like getting a haptic touchpad at this price, so I was excited to see that Acer added a large touchpad with haptic feedback for this update. Acer calls it “the world’s largest haptic touchpad,” and I’m not here to argue with that claim. It’s absolutely massive, measuring 6.9 inches wide by 4.3 inches tall. The problem I found with it is that it runs right to the front edge of the laptop, leaving no border for helping with palm rejection. The borderless front edge, combined with its gigantic size, had me accidentally bumping against or resting on its surface, resulting in unintended cursor jumps and interrupted scrolling.

Acer Swift 16 AI haptic touchpad

In any color, the Acer Swift 16 AI’s haptic touchpad is huge.

Matt Elliott/CNET

Acer includes an MPP2.5 active stylus with the laptop for use with the touchpad, but not the screen. The display has touch support for tapping and swiping with your fingertip, but doesn’t have pen support. No, the pen is for sketching and drawing or scribbling notes and e-signatures on the surface of the giant touchpad. If that type of pen support matches your workflow, then you’re likely to enjoy the enormous touchpad. For the rest of us, a more sensibly proportioned touchpad is probably preferred.

Likewise, Excel jockeys will enjoy the inclusion of a number pad, but I’d rather sacrifice its narrow keys for the ability to have the rest of the keyboard centered below the display rather than positioned to the left. The keys themselves have a predictably shallow travel due to the thinness of the laptop, but I liked typing on the Swift 16 AI. The keys offer firm, springy feedback.

Acer Swift 16 AI included active pen

Acer includes an active pen with the Swift 16 AI to write on the touchpad but not the touch display.

Matt Elliott/CNET

Display and speakers stay the same, webcam gets worse

Acer runs back the same display from last year and for good reason: it’s a fantastic 16-inch OLED panel. It offers a crisp, 2,880×1,800-pixel resolution with vivid color and deep blacks. On my display tests with a Spyder X Elite colorimeter, the Swift 16 AI showed excellent color accuracy, covering 100% of the sRGB and P3 gamuts and 94% of AdobeRGB. It also hit a peak brightness of 403 nits, providing bright whites to go with the effective zero-nit black levels for superb contrast. The one drawback to the display is its glossy finish; you’ll find yourself bobbing and weaving to get around glare and reflections at times.

Acer Swift 16 AI has a 16-inch OLED display

The 16-inch OLED display is big and bright but glossy.

Acer also runs back the same setup of underpowered, downward-firing stereo speakers that sound tinny and flat. It’s disappointing to have such a big display that’s great for watching shows and movies paired with such underwhelming speakers. It’s too bad that Acer couldn’t find room on this large laptop for a quad-speaker array with fuller sound.

The webcam takes a step back with this year’s model, moving down from a 1440p camera to 1080p. Images and videos are grainier than what I experienced with last year’s model, especially in low-light environments. The camera does have an IR sensor for facial recognition logins via Windows Hello, which is the only biometric option because the laptop lacks a fingerprint reader.

Acer Swift 16 AI ports on the right side include a microSD card slot

New to this year’s version of the Acer Swift 16 AI: a microSD card slot.

Matt Elliott/CNET

The port selection offers two USB-C Thunderbolt 4 ports and two USB-A ports, along with an HDMI port and a headphone jack, and adds a microSD card slot in a nod to creators eyeing the Swift 16 AI.

Inside, there’s no room for expansion. The RAM is soldered to the motherboard and therefore not user-replaceable. And there’s only a single M.2 slot, which is occupied by a 1TB SSD.

Acer Swift 16 AI laptop's interior components

The Acer Swift 16 AI has no room inside for expansion.

Matt Elliott/CNET

Acer Swift 16 AI performance and battery life

Based on the 16-core (four performance cores, eight efficient cores and four low-power efficient cores) Intel Core Ultra X7 358H processor, 32GB of fast 9,600MHz RAM and Intel’s integrated Arc B390 graphics that has 12 Xe3 GPU cores, the Swift 16 AI proved itself to be a very capable performer in lab testing. It showed big leaps in multi-core performance from last year’s model on our Geekbench 6 and Cinebench 2024 tests, and even bigger gains in 3D graphics performance. 

It provided playable frame rates on our Shadow of the Tomb Raider and Guardians of the Galaxy benchmarks at 1080p. On more demanding titles like Assassin’s Creed Shadows and F1 24, however, you’ll need to employ Intel’s XeSS Frame Generation to get to 60 frames per second or lower the resolution or quality settings. Still, for a thin-and-light laptop with an iGPU, getting this level of 3D performance is a boon. Also a boon: the Swift 16 AI stays remarkably cool and quiet, even under heavy load.

The Swift 16 AI’s result on our YouTube streaming battery drain test was good, but I was expecting more. It ran for 13.5 hours, a strong showing for a big-screen, high-res OLED laptop, but only about an hour longer than last year’s model.

Should I buy the Acer Swift 16 AI?

You should get it if you’re looking for a thin-and-light, big-screen OLED laptop. You won’t find many 16-inch models that are lighter than the 3.3-pound Swift 16 AI. And with its modern Panther Lake CPU and ample RAM, it delivers strong performance and lengthy battery life. For only $150 more, though, I still like the Lenovo Yoga Pro 9i 16 Aura Edition for its better build quality and added RTX graphics muscle, even if it is more than a pound heavier.

The review process for laptops, desktops, tablets and other computerlike devices consists of two parts: performance testing under controlled conditions in the CNET Labs and extensive hands-on use by our expert reviewers. This includes evaluating a device’s aesthetics, ergonomics and features. A final review verdict is a combination of both objective and subjective judgments. 

The list of benchmarking software we use changes over time as the devices we test evolve. The most important core tests we’re currently running on every compatible computer include Primate Labs Geekbench 6, Cinebench 2024, PCMark 10 and 3DMark Fire Strike Ultra.

A more detailed description of each benchmark and how we use it can be found on our How We Test Computers page. 

Geekbench 6 CPU (multi-core)

  • Apple MacBook Pro 14 (M5, late 2025): 17946
  • Lenovo Yoga Pro 9i 16 Aura Edition: 17748
  • MSI Prestige 14 Flip AI Plus: 16607
  • Dell XPS 14: 16197
  • Acer Swift 16 AI (2026): 16187
  • MSI Katana 15 HX B14W: 14587
  • Asus Vivobook S 15: 14058
  • HP OmniBook X Flip 14: 12747
  • Acer Swift 16 AI (2025): 10993

Note: Higher numbers indicate better performance

Geekbench 6 CPU (single-core)

  • Apple MacBook Pro 14 (M5, late 2025): 4263
  • Lenovo Yoga Pro 9i 16 Aura Edition: 2980
  • MSI Prestige 14 Flip AI Plus: 2896
  • Acer Swift 16 AI (2026): 2850
  • HP OmniBook X Flip 14: 2823
  • Dell XPS 14: 2813
  • MSI Katana 15 HX B14W: 2738
  • Acer Swift 16 AI (2025): 2716
  • Asus Vivobook S 15: 2446

Note: Higher numbers indicate better performance

Cinebench 2024 CPU (multi-core)

  • MSI Katana 15 HX B14W: 1220
  • Lenovo Yoga Pro 9i 16 Aura Edition: 1218
  • Apple MacBook Pro 14 (M5, late 2025): 1118
  • Asus Vivobook S 15: 963
  • Acer Swift 16 AI (2026): 915
  • Dell XPS 14: 700
  • MSI Prestige 14 Flip AI Plus: 692
  • HP OmniBook X Flip 14: 636
  • Acer Swift 16 AI (2025): 533

Note: Higher numbers indicate better performance

Cinebench 2024 CPU (single-core)

  • Apple MacBook Pro 14 (M5, late 2025): 199
  • Lenovo Yoga Pro 9i 16 Aura Edition: 130
  • Dell XPS 14: 124
  • Acer Swift 16 AI (2026): 121
  • Acer Swift 16 AI (2025): 121
  • MSI Katana 15 HX B14W: 117
  • MSI Prestige 14 Flip AI Plus: 115
  • HP OmniBook X Flip 14: 114
  • Asus Vivobook S 15: 107

Note: Higher numbers indicate better performance

3DMark Steel Nomad

  • Lenovo Yoga Pro 9i 16 Aura Edition: 2278
  • MSI Katana 15 HX B14W: 2207
  • MSI Prestige 14 Flip AI Plus: 1527
  • Acer Swift 16 AI (2026): 1440
  • Dell XPS 14: 1286
  • Apple MacBook Pro 14 (M5, late 2025): 1129
  • Acer Swift 16 AI (2025): 679
  • Asus Vivobook S 15: 496
  • HP OmniBook X Flip 14: 456

Note: Higher numbers indicate better performance

3DMark Fire Strike Ultra

  • MSI Katana 15 HX B14W: 6285
  • Lenovo Yoga Pro 9i 16 Aura Edition: 6247
  • MSI Prestige 14 Flip AI Plus: 3491
  • Acer Swift 16 AI (2026): 3205
  • Dell XPS 14: 3019
  • Acer Swift 16 AI (2025): 2185
  • HP OmniBook X Flip 14: 1916

Note: Higher numbers indicate better performance

PCMark 10 Pro Edition

  • Lenovo Yoga Pro 9i 16 Aura Edition: 9754
  • Acer Swift 16 AI (2026): 9219
  • Dell XPS 14: 8981
  • MSI Prestige 14 Flip AI Plus: 8761
  • HP OmniBook X Flip 14: 7199
  • MSI Katana 15 HX B14W: 7024
  • Acer Swift 16 AI (2025): 6855

Note: Higher numbers indicate better performance

Shadow of the Tomb Raider (Highest @ 1920 x 1080)

  • Lenovo Yoga Pro 9i 16 Aura Edition: 159
  • MSI Katana 15 HX B14W: 155
  • Acer Predator Helios Neo 16 PHN16-71: 136
  • Acer Nitro 16 AN16-41-R3ZV: 126
  • Dell XPS 14 9440: 84
  • MSI Prestige 14 Flip AI Plus: 64
  • Acer Swift 16 AI (2026): 58
  • M5 Apple MacBook Pro 14: 56
  • Dell XPS 14: 50

Note: Higher numbers indicate better performance

Guardians of the Galaxy (High @1920 x 1080)

  • Acer Predator Helios Neo 16 PHN16-71: 165
  • MSI Katana 15 HX B14W: 159
  • Lenovo Yoga Pro 9i 16 Aura Edition: 155
  • Acer Nitro 16 AN16-41-R3ZV: 128
  • Dell XPS 14 9440: 108
  • Acer Swift 16 AI (2026): 67
  • Dell XPS 14: 64
  • MSI Prestige 14 Flip AI Plus: 44

Note: Higher numbers indicate better performance

The Riftbreaker GPU (1920 x 1080)

  • MSI Katana 15 HX B14W: 231.99
  • Lenovo Yoga Pro 9i 16 Aura Edition: 222.47
  • Acer Nitro V 16S AI: 217.77
  • Acer Nitro 16 AN16-41-R3ZV: 193.65
  • Dell XPS 14 9440: 118.43
  • Dell XPS 14: 113.13
  • Acer Swift 16 AI (2026): 100.44
  • MSI Prestige 14 Flip AI Plus: 80.74

Note: Higher numbers indicate better performance

Assassin’s Creed Shadows (1920×1080 @ High)

  • MSI Katana 15 HX B14W: 53
  • Lenovo Yoga Pro 9i 16 Aura Edition: 48
  • MSI Prestige 14 Flip AI Plus: 27
  • Acer Swift 16 AI (2026): 25
  • Dell XPS 14: 24

Note: Higher numbers indicate better performance

F1 24 (1920×1080 @ Ultra High)

  • MSI Katana 15 HX B14W: 104
  • Lenovo Yoga Pro 9i 16 Aura Edition: 76
  • Acer Swift 16 AI (2026): 34
  • Dell XPS 14: 33
  • MSI Prestige 14 Flip AI Plus: 25

Note: Higher numbers indicate better performance

Online streaming battery drain test

  • MSI Prestige 14 Flip AI Plus: 25 hr, 18 min
  • M5 Apple MacBook Pro 14: 22 hr, 59 min
  • Asus Vivobook S 15: 15 hr, 26 min
  • Dell XPS 14: 14 hr, 42 min
  • Acer Swift 16 AI (2026): 13 hr, 34 min
  • Acer Swift 16 AI (2025): 12 hr, 20 min
  • Lenovo Yoga Pro 9i 16 Aura Edition: 11 hr, 33 min
  • HP OmniBook X Flip 14: 9 hr, 1 min
  • MSI Katana 15 HX B14W: 6 hr, 14 min

Note: Longer runtimes indicate better performance

System configurations

  • Acer Swift 16 AI (2026): Windows 11 Home; Intel Core Ultra X7 358H; 32GB DDR5 RAM; Intel Arc B390 Graphics; 1TB SSD
  • Dell XPS 14: Windows 11 Home; Intel Core Ultra X7 358H; 32GB DDR5 RAM; Intel Arc B390 Graphics; 1TB SSD
  • MSI Prestige 14 Flip AI Plus: Windows 11 Home; Intel Core Ultra X7 358H; 32GB DDR5 RAM; Intel Arc B390 Graphics; 1TB SSD
  • Acer Swift 16 AI (2025): Windows 11 Home; Intel Core Ultra 7 256V; 16GB DDR5 RAM; Intel Arc 140V Graphics; 1TB SSD
  • Asus Vivobook S 15: Windows 11 Home; Qualcomm Snapdragon X Elite X1E-78-100; 16GB DDR5 RAM; Qualcomm Adreno Graphics; 1TB SSD
  • HP OmniBook X Flip 14: Windows 11 Home; AMD Ryzen AI 7 350; 32GB DDR5 RAM; AMD Radeon 860M Graphics; 1TB SSD
  • Lenovo Yoga Pro 9i 16 Aura Edition: Windows 11 Home; Intel Core Ultra 9 285H; 32GB DDR5 RAM; Nvidia GeForce RTX 5050; 1TB SSD
  • MSI Katana 15 HX B14W: Windows 11 Home; Intel Core i7-14650HX; 16GB DDR5 RAM; Nvidia GeForce RTX 5050; 512GB SSD
  • Acer Predator Helios Neo 16 PHN16-71: Windows 11 Home; Intel Core i5-13500HX; 16GB DDR5 RAM; Nvidia GeForce RTX 4050; 512GB SSD
  • Acer Nitro 16 AN16-41-R3ZV: Windows 11 Home; AMD Ryzen 5 7640HS; 16GB DDR5 RAM; Nvidia GeForce RTX 4050; 512GB SSD
  • M5 Apple MacBook Pro 14: Apple MacOS Tahoe 26.0.1; Apple M5 (10-core CPU, 10-core GPU); 16GB LPDDR5; 1TB SSD

In May 2024, we released Part I of this series, in which we discussed agentic AI as an emerging technology enabling a new generation of AI-based hardware devices and software tools that can take actions on behalf of users. It turned out we were early – very early – to the discussion, with several months elapsing before agentic AI became as widely known and discussed as it is today. In this Part II, we return to the topic to explore legal issues concerning user liability for agentic AI-assisted transactions and open questions about existing legal frameworks’ applicability to the new generation of AI-assisted transactions.

Background: Snapshot of the Current State of “Agents”[1]

“Intelligent” electronic assistants are not new—the original generation, such as Amazon’s Alexa, have been offering narrow capabilities for specific tasks for more than a decade. However, as OpenAI’s CEO Sam Altman commented in May 2024, an advanced AI assistant or “super-competent colleague” could be the killer app of the future. Later, Altman noted during a Reddit AMA session: “We will have better and better models. But I think the thing that will feel like the next giant breakthrough will be agents.” A McKinsey report on AI agents echoes this sentiment: “The technology is moving from thought to action.” Agentic AI represents not only a technological evolution, but also a potential means to further spread (and monetize) AI technology beyond its current uses by consumers and businesses. Major AI developers and others have already embraced this shift, announcing initiatives in the agentic AI space. For example:  

  • Anthropic announced an updated frontier AI model in public beta capable of interacting with and using computers like human users;
  • Google unveiled Gemini 2.0, its new AI model for the agentic era, alongside Project Mariner, a prototype leveraging Gemini 2.0 to perform tasks via an experimental Chrome browser extension (while keeping a “human in the loop”);
  • OpenAI launched a “research preview” of Operator, an AI tool that can interface with computers on users’ behalf, and launched beta feature “Tasks” in ChatGPT to facilitate ongoing or future task management beyond merely responding to real time prompts;
  • LexisNexis announced the availability of “Protégé,” a personalized AI assistant with agentic AI capabilities;
  • Perplexity recently rolled out “Shop Like a Pro,” an AI-powered shopping recommendation and buying feature that allows Perplexity Pro users to research products and, for those merchants whose sites are integrated with the tool, purchase items directly on Perplexity; and
  • Amazon announced Alexa+, a new generation of Alexa that has agentic capabilities, including enabling Alexa to navigate the internet and execute tasks, as well as Amazon Nova Act, an AI model designed to perform actions within a web browser.

Beyond these examples, other startups and established tech companies are also developing AI “agents” in this country and overseas (including the invite-only release of Manus AI by Butterfly Effect, an AI developer in China). As a recent Microsoft piece speculates, the generative AI future may involve a “new ecosystem or marketplace of agents,” akin to the current smartphone app ecosystem.  Although early agentic AI device releases have received mixed reviews and seem to still have much unrealized potential, they demonstrate the capability of such devices to execute multistep actions in response to natural language instructions.

Like prior technological revolutions—personal computers in the 1980s, e-commerce in the 1990s and smartphones in the 2000s—the emergence of agentic AI technology challenges existing legal frameworks. Let’s take a look at some of those issues – starting with basic questions about contract law.

Note: This discussion addresses general legal issues with respect to hypothetical agentic AI devices or software tools/apps that have significant autonomy. The examples provided are illustrative and do not reflect any specific AI tool’s capabilities.

Automated Transactions and Electronic Agents

Electronic Signatures Statutory Law Overview

A foundational legal question is whether transactions initiated and executed by an AI tool on behalf of a user are enforceable. Despite the newness of agentic AI, the legal underpinnings of electronic transactions are well-established. The Uniform Electronic Transactions Act (“UETA”), which has been adopted by 49 states and the District of Columbia (every state except New York, as noted below), the federal E-SIGN Act, and the Uniform Commercial Code (“UCC”), serve as the legal framework for the use of electronic signatures and records, ensuring their validity and enforceability in interstate commerce. The fundamental provisions of UETA are Sections 7(a)-(b), which provide: “(a) A record or signature may not be denied legal effect or enforceability solely because it is in electronic form; (b) A contract may not be denied legal effect or enforceability solely because an electronic record was used in its formation.”

UETA is technology-neutral and “applies only to transactions between parties each of which has agreed to conduct transactions by electronic means” (allowing the parties to choose the technology they desire). In the typical e-commerce transaction, a human user selects products or services for purchase and proceeds to checkout, which culminates in the user clicking “I Agree” or “Purchase.”  This click—while not a “signature” in the traditional sense of the word—may be effective as an electronic signature, affirming the user’s agreement to the transaction and to any accompanying terms, assuming the requisite contractual principles of notice and assent have been met.

At the federal level, the E-SIGN Act (15 U.S.C. §§ 7001-7031) (“E-SIGN”) establishes the same basic tenets regarding electronic signatures in interstate commerce and contains a reverse preemption provision, generally allowing states that have passed UETA to have UETA take precedence over E-SIGN.  If a state does not adopt UETA but enacts another law regarding electronic signatures, its alternative law will preempt E-SIGN only if the alternative law specifies procedures or requirements consistent with E-SIGN, among other things.

However, while UETA has been adopted by 49 states and the District of Columbia, it has not been enacted in New York. Instead, New York has its own electronic signature law, the Electronic Signatures and Records Act (“ESRA”) (N.Y. State Tech. Law § 301 et seq.). ESRA generally provides that “An electronic record shall have the same force and effect as those records not produced by electronic means.” According to New York’s Office of Information Technology Services, which oversees ESRA, “the definition of ‘electronic signature’ in ESRA § 302(3) conforms to the definition found in the E-SIGN Act.” Thus, as one New York state appellate court stated, “E-SIGN’s requirement that an electronically memorialized and subscribed contract be given the same legal effect as a contract memorialized and subscribed on paper…is part of New York law, whether or not the transaction at issue is a matter ‘in or affecting interstate or foreign commerce.’”[2]

Given US states’ wide adoption of UETA model statute, with minor variations, this post will principally rely on its provisions in analyzing certain contractual questions with respect to AI agents, particularly given that E-SIGN and UETA work toward similar aims in establishing the legal validity of electronic signatures and records and because E-SIGN expressly permits states to supersede the federal act by enacting UETA.  As for New York’s ESRA, courts have already noted that the New York legislature incorporated the substantive terms of E-SIGN into New York law, thus suggesting that ESRA is generally harmonious with the other laws’ purpose to ensure that electronic signatures and records have the same force and effect as traditional signatures.  

Electronic “Agents” under the Law

Beyond affirming the enforceability of electronic signatures and transactions where the parties have agreed to transact with one another electronically, Section 2(2) of UETA also contemplates “automated transactions,” defined as those “conducted or performed, in whole or in part, by electronic means or electronic records, in which the acts or records of one or both parties are not reviewed by an individual.” Central to such a transaction is an “electronic agent,” which Section 2(6) of UETA defines as “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual.” Under UETA, in an automated transaction, a contract may be formed by the interaction of “electronic agents” of the parties or by an “electronic agent” and an individual. E-SIGN similarly contemplates “electronic agents,” and states: “A contract or other record relating to a transaction in or affecting interstate or foreign commerce may not be denied legal effect, validity, or enforceability solely because its formation, creation, or delivery involved the action of one or more electronic agents so long as the action of any such electronic agent is legally attributable to the person to be bound.”[3] Under both of these definitions, agentic AI tools—which are increasingly able to initiate actions and respond to records and performances on behalf of users—arguably qualify as “electronic agents” and thus can form enforceable contracts under existing law.[4]

AI Tools and E-Commerce Transactions

Given this existing body of statutory law enabling electronic signatures, from a practical perspective this may be the end of the analysis for most e-commerce transactions. If I tell an AI tool to buy me a certain product and it does so, then the product’s vendor, the tool’s provider and I might assume—with the support of UETA, E-SIGN, the UCC, and New York’s ESRA—that the vendor and I (via the tool) have formed a binding agreement for the sale and purchase of the good. That will be the end of it unless a dispute arises about the good or the payment (e.g., the product is damaged or defective, or my credit card is declined), in which case the AI tool isn’t really relevant.

But what if the transaction does not go as planned for reasons related to the AI tool? Consider the following scenarios:

  • Misunderstood Prompts: The tool misinterprets a prompt that would be clear to a human but is confusing to its model (e.g., the user’s prompt states, “Buy two boxes of 101 Dalmatians Premium dog food,” and the AI tool orders 101 two-packs of dog food marketed for Dalmatians).
  • AI Hallucinations: The user asks for something the tool cannot provide or does not understand, triggering a hallucination in the model with unintended consequences (e.g., the user asks the model to buy stock in a company that is not public, so the model hallucinates a ticker symbol and buys stock in whatever real company that symbol corresponds to).
  • Violation of Limits: The tool exceeds a pre-determined budget or financial parameter set by the user (e.g., the user’s prompt states, “Buy a pair of running shoes under $100” and the AI tool purchases shoes from the UK for £250, exceeding the user’s limit).
  • Misinterpretation of User Preference: The tool misinterprets a prompt due to lack of context or misunderstanding of user preferences (e.g., the user’s prompt states, “Book a hotel room in New York City for my conference,” intending to stay near the event location in lower Manhattan, and the AI tool books a room in Queens because it prioritizes price over proximity without clarifying the user’s preference).
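The “Violation of Limits” scenario above turns on a mundane engineering detail: a budget guard is only as good as its unit handling. The sketch below is hypothetical illustration code, not any real tool’s API, and the exchange rate and prices are assumptions chosen for the example. It shows how a guard that compares raw listed prices can wave through a foreign-currency purchase that actually exceeds the user’s dollar limit:

```python
# Hypothetical illustration of the "Violation of Limits" scenario.
# The exchange rate and prices below are illustrative assumptions.

USD_PER_GBP = 1.25  # assumed rate for the example, not live data

def naive_within_budget(price: float, limit_usd: float) -> bool:
    # Bug: treats the raw listed price as if it were already in USD.
    return price <= limit_usd

def within_budget(price: float, currency: str, limit_usd: float) -> bool:
    # Normalize the listed price to USD before checking the user's limit.
    rates_to_usd = {"USD": 1.0, "GBP": USD_PER_GBP}
    return price * rates_to_usd[currency] <= limit_usd

# A £95 listing against a $100 limit: the naive check passes it,
# but at $1.25 per pound the real cost is $118.75.
print(naive_within_budget(95, 100))    # True  (limit silently violated)
print(within_budget(95, "GBP", 100))   # False (purchase correctly blocked)
```

The same class of bug covers the article’s £250 example: unless the agent grounds every constraint in a common unit, a limit the user states in dollars may never be checked against what is actually charged.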

Disputes like these begin with a conflict between the user and a vendor—the AI tool may have been effective to create a contract between the user and the vendor, and the user may then have legal responsibility for that contract.  But the user may then seek indemnity or similar rights against the developer of the AI tool.

Of course, most developers will try to avoid these situations by requiring user approvals before purchases are finalized (i.e., “human in the loop”). But as the desire for efficiency and speed increases (and AI tools become more autonomous and familiar with their users), these inbuilt protections could start to wither away, and users who grow accustomed to their tool might find themselves approving transactions without vetting them carefully. This could lead to scenarios like the above, where the user might seek to void a transaction or, if that fails, even try to avoid liability for it by seeking to shift his or her responsibility to the AI tool’s developer.[5] Could this ever work? Who is responsible for unintended liabilities related to transactions completed by an agentic AI tool?
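The “human in the loop” safeguard can be pictured as an approval callback that must fire before any transaction is finalized. The sketch below is a hypothetical structure for illustration only, not any shipping product’s API; it also shows how a user who rubber-stamps every proposal effectively removes the safeguard:

```python
# Minimal sketch of a "human in the loop" approval gate for an agentic
# purchase flow. Hypothetical structure; no real product's API is implied.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedPurchase:
    item: str
    price_usd: float

def execute_purchase(purchase: ProposedPurchase,
                     approve: Callable[[ProposedPurchase], bool]) -> str:
    # The agent only proposes; the user-supplied callback must confirm
    # before the transaction is finalized.
    if approve(purchase):
        return f"purchased: {purchase.item} for ${purchase.price_usd:.2f}"
    return "canceled: user declined"

# Rubber-stamping every proposal defeats the gate; a check against the
# user's stated limit catches the over-budget purchase.
rubber_stamp = lambda p: True
careful = lambda p: p.price_usd <= 100

shoes = ProposedPurchase("running shoes", 118.75)
print(execute_purchase(shoes, rubber_stamp))  # purchased: running shoes for $118.75
print(execute_purchase(shoes, careful))       # canceled: user declined
```

Whether that confirmation click also shifts legal responsibility back onto the user is exactly the allocation question the terms-of-service discussion below takes up.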

Sources of Law Governing AI Transactions

AI Developer Terms of Service

As stated in UETA’s Prefatory Note, the purpose of UETA is “to remove barriers to electronic commerce by validating and effectuating electronic records and signatures.” Yet, the Note cautions, “It is NOT a general contracting statute – the substantive rules of contracts remain unaffected by UETA.”  E-SIGN contains a similar disclaimer in the statute, limiting its reach to statutes that require contracts or other records be written, signed, or in non-electronic form (15 U.S.C. §7001(b)(2)). In short, UETA, E-SIGN, and the similar UCC provisions do not provide contract law rules on how to form an agreement or the enforceability of the terms of any agreement that has been formed.

Thus, in the event of a dispute, terms of service governing agentic AI tools will likely be the primary source to which courts will look to assess how liability might be allocated. As we noted in Part I of this post, early-generation agentic AI hardware devices generally include terms that not only disclaim responsibility for the actions of their products or the accuracy of their outputs, but also seek indemnification against claims arising from their use. Thus, absent any express customer-favorable indemnities, warranties or other contractual provisions, users might generally bear the legal risk, barring specific legal doctrines or consumer protection laws prohibiting disclaimers or restrictions of certain claims.[6]

But what if the terms of service are nonexistent, don’t cover the scenario or—more likely—are unenforceable? Unenforceable terms for online products and services are not uncommon, for reasons ranging from “browsewrap” terms being too hidden to specific provisions being unconscionable. What legal doctrines would control in such a scenario?

The Backstop: User Liability under UETA and E-SIGN

Where would the parties stand without the developer’s terms? E-SIGN allows for the effectiveness of actions by “electronic agents” “so long as the action of any such electronic agent is legally attributable to the person to be bound.” This provision seems to bring the issue back to the terms of service governing a transaction or general principles of contract law. But again, what if the terms of service are nonexistent or don’t cover a particular scenario, such as those listed above? As it did with the threshold question of whether AI tools could form contracts in the first place, UETA appears to offer a position here that could be an attractive starting place for a court. Moreover, in the absence of express language in New York’s ESRA, a New York court might apply E-SIGN (which contains an “electronic agent” provision) or look to UETA, its commentary, and its body of precedent for guidance if it cannot find on-point binding authority, which would be unsurprising given that the technology-driven scenarios at issue have only recently become possible.

UETA generally attributes responsibility to users of “electronic agents”, with the prefatory note explicitly stating that the actions of electronic agents “programmed and used by people will bind the user of the machine.” Section 14 of UETA (titled “Automated Transaction”) reinforces this principle, noting that a contract can be formed through the interaction of “electronic agents” “even if no individual was aware of or reviewed the electronic agents’ actions or the resulting terms and agreements.” Accordingly, when automated tools such as agentic AI systems facilitate transactions between parties who knowingly consent to conduct business electronically, UETA seems to suggest that responsibility defaults to the users—the persons who most immediately directed or initiated their AI tool’s actions. This reasoning treats the AI as a user’s tool, consistent with the other UETA Comments (e.g., “contracts can be formed by machines functioning as electronic agents for parties to a transaction”).

However, different facts or technologies could lead to alternative interpretations, and ambiguities remain. For example, Comment 1 to UETA Section 14 asserts that the lack of human intent at the time of contract formation does not negate enforceability in contracts “formed by machines functioning as electronic agents for parties to a transaction” and that “when machines are involved, the requisite intention flows from the programming and use of the machine” (emphasis added).

This explanatory text has a couple of issues. First, it is unclear about what constitutes “programming” and seems to presume that the human intention at the programming step (whatever that may be) is more or less the same as the human intention at the use step[7], but this may not always be the case with AI tools. For example, it is conceivable that an AI tool could be programmed by its developer to put the developer’s interests above the users’, for example by making purchases from a particular preferred e-commerce partner even if that vendor’s offerings are not the best value for the end user. This concept may not be so far-fetched, as existing GenAI developers have entered into content licensing deals with online publishers to obtain the right for their chatbots to generate outputs or feature licensed content, with links to such sources. Of course, there is a difference between a chatbot offering links to relevant licensed news sources that are accurate (but not displaying appropriate content from other publishers) versus an agentic chatbot entering into unintended transactions or spending the user’s funds in unwanted ways. This discrepancy in intention alignment might not be enough to allow a user to shift liability for a transaction to the programmer, but it is not hard to see how larger misalignments might lead to thornier questions, particularly in the event of litigation when a court might scrutinize the enforceability of an AI vendor’s terms (under the unconscionability doctrine, for example).

Second, UETA does not contemplate the possibility that the AI tool might have enough autonomy and capability that some of its actions might be properly characterized as the result of its own intent. Looking at UETA’s definition of “electronic agent,” the commentary notes that “As a general rule, the employer of a tool is responsible for the results obtained by the use of that tool since the tool has no independent volition of its own.” But as we know, technology has advanced considerably in the intervening decades, and an autonomous AI tool might one day possess significant independent volition (and further UETA commentary admits the possibility of a future with more autonomous electronic agents). Indeed, modern AI researchers were contemplating this possibility even before the rapid technological progress that began with ChatGPT.

Still, Section 10 of UETA may be relevant to some of the scenarios from our bulleted selection of AI tool mishaps listed above, including misunderstood prompts or AI hallucinations. UETA Section 10 (titled “Effect of Change or Error”) outlines the possible actions a party may take when discovering human or machine errors or when “a change or error in an electronic record occurs in a transmission between parties to a transaction.” The remedies outlined in UETA depend on the circumstances of the transaction and whether the parties have agreed to certain security procedures to catch errors (e.g., a “human in the loop” confirming an AI-completed transaction) or whether the transaction involves an individual and a machine.[8] In this way, the guardrails integrated into a particular AI tool or by the parties themselves play a role in the liability calculus. The section concludes by stating that if none of UETA’s error provisions apply, then applicable law governs, which might include the terms of the parties’ contract and the law of mistake, unconscionability and good faith and fair dealing.
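One way to picture an agreed “security procedure” for detecting changes in transmission is a digest check over the record both parties agreed to use. The sketch below is purely illustrative (the function names and record fields are hypothetical, and UETA does not prescribe any particular technique); it mirrors the UETA commentary’s example of an order for 100 widgets being changed to 1,000 in transit.

```python
import hashlib
import json

def digest_record(record: dict) -> str:
    """Sender computes a digest over a canonical form of the record
    (the agreed-upon security procedure)."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def verify_record(record: dict, digest: str) -> bool:
    """Receiver re-computes the digest; a mismatch flags that the record
    changed between transmission and receipt."""
    return digest_record(record) == digest
```

Under a procedure like this, a receiving party who skipped the `verify_record` step, and acted on an order altered from 100 to 1,000 widgets, would be the party that failed to follow the agreed protocol, the allocation UETA Section 10 contemplates.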

* * *

Thus, along an uncertain path we circle back to where we started: the terms of the transaction and general contract law principles and protections. However, not all roads lead to contract law. In our next installment in this series, we will explore the next logical source of potential guidance on AI tool liability questions: agency law. Decades of established law may now be challenged by a new sort of “agent” in the form of agentic AI…and a new AI-related lawsuit foreshadows the issues to come.


[1] In keeping with common practice in the artificial intelligence industry, this article refers to AI tools that are capable of taking actions on behalf of users as “agents” (in contrast to more traditional AI tools that can produce content but not take actions). However, note that the use of this term is not intended to imply that these tools are “agents” under agency law.

[2] In addition, the UCC has provisions consistent with UETA and E-SIGN providing for the use of electronic records and electronic signatures for transactions subject to the UCC. The UCC does not require the agreement of the parties to use electronic records and electronic signatures, as UETA and E-SIGN do.

[3] Under E-SIGN, “electronic agent” means “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part without review or action by an individual at the time of the action or response.”

[4] It should be noted that New York’s ESRA does not expressly provide for the use of “electronic agents,” yet does not prohibit them either. Reading through ESRA and the ESRA regulation, the spirit of the law could be construed as forward-looking and seems to suggest that it supports the use of automated systems and electronic means to create legally binding agreements between willing parties. Looking to New York precedent, one could also argue that E-SIGN, which contains provisions about the use of “electronic agents,” might be applicable in certain circumstances to fill the “electronic agent” gap in ESRA. For example, the ESRA regulations (9 CRR-NY § 540.1) state: “New technologies are frequently being introduced. The intent of this Part is to be flexible enough to embrace future technologies that comply with ESRA and all other applicable statutes and regulations.” On the other hand, one could argue that certain issues surrounding “electronic agents” are perhaps more unsettled in New York. Still, New York courts have found ESRA consistent with E-SIGN.

[5] Since AI tools are not legal persons, they could not be liable themselves (unlike, for example, a rogue human agent could be in some situations). We will explore agency law questions in Part III.

[6] Once agentic AI technology matures, it is possible that certain user-friendly contractual standards might emerge as market participants compete in the space. For example, as we wrote about in a prior post, in 2023 major GenAI providers rolled out indemnifications to protect their users from third-party claims of intellectual property infringement arising from GenAI outputs, subject to certain carve-outs.

[7] The electronic “agents” in place at the time of UETA’s passage might have included basic e-commerce tools or EDI (Electronic Data Interchange), which is used by businesses to exchange standardized documents, such as purchase orders, electronically between trading partners, replacing traditional methods like paper, fax, mail or telephone. Electronic tools are generally designed to explicitly perform according to the user’s intentions (e.g., clicking on an icon will add this item to a website shopping cart or send this invoice to the customer) and UETA, Section 10, contains provisions governing when an inadvertent or electronic error occurs (as opposed to an abrogation of the user’s wishes).

[8] For example, UETA Section 10 states that if a change or error occurs in an electronic record during transmission between parties to a transaction, the party who followed an agreed-upon security procedure to detect such changes can avoid the effect of the error, if the other party who didn’t follow the procedure would have detected the change had they complied with the security measure; this essentially places responsibility on the party who failed to use the agreed-upon security protocol to verify the electronic record’s integrity.

Comments to UETA Section 10 further explain the context of this section: “The section covers both changes and errors. For example, if Buyer sends a message to Seller ordering 100 widgets, but Buyer’s information processing system changes the order to 1000 widgets, a “change” has occurred between what Buyer transmitted and what Seller received. If on the other hand, Buyer typed in 1000 intending to order only 100, but sent the message before noting the mistake, an error would have occurred which would also be covered by this section.”  In the situation where a human makes a mistake when dealing with an electronic agent, the commentary explains that “when an individual makes an error while dealing with the electronic agent of the other party, it may not be possible to correct the error before the other party has shipped or taken other action in reliance on the erroneous record.”
