Product Liability Claims Against AI Developers: An Uncertain Future

Legal Insights

On September 29, 2025, Senators Dick Durbin and Josh Hawley introduced a bipartisan bill that would create a federal cause of action in product liability against developers and deployers of artificial intelligence systems.

Touted as a measure to protect children and all consumers, Senate Bill 2937 is dubbed the “AI LEAD Act,” an acronym for “Aligning Incentives for Leadership, Excellence and Advancement in Development.” In the face of differing legislation introduced in all 50 states during 2025, including the enactment of about 100 measures in 38 states, the bill is designed to provide a safety net of minimal protection. Thus, if enacted, it will “supersede State law only where State law conflicts with” its provisions. (Sec. 304(a)). It would not preempt state law in its entirety. Instead, it allows states to enact or enforce protections that are stronger than those provided by the Act, so long as those protections “align with the principles of harm prevention, accountability, and transparency” reflected in the Act.

The Congressional findings set forth in the Act suggest that a Federal product liability framework will strike a balance, removing barriers to interstate commerce while protecting the due process rights of individuals.

Key Provisions on Developer Liability

Based in part on the American Legislative Exchange Council’s Model Products Liability Act, the AI LEAD Act establishes civil liability for AI system developers. That liability includes claims for design defects, failure to warn, breach of express warranty, and unreasonably dangerous defects present at the time of deployment. It substitutes the term “development or process used to produce” where “manufacturing” appears in traditional products liability law.

In addition to liability based on the availability of a reasonable alternative design, the Act provides that a claim may be asserted on the basis that an AI system’s design was “manifestly unreasonable,” a term that is not expressly defined. It further permits an inference to be drawn that the plaintiff’s harm was caused by a design defect without proof of a specific defect if the incident leading to harm “was of a kind that ordinarily occurs as a result of a defect” and was not “solely the result of other causes.”

The failure-to-warn provisions of the Act align closely with traditional product liability law. An adequate warning is defined as one that a reasonably prudent person in similar circumstances would have provided regarding foreseeable risks and that communicates sufficient information for ordinary users to understand those risks. An exception to any warning requirement for “open and obvious” risks is included, but risks are “presumed not to be open and obvious to a user” under age 18. Any AI product that does not comply with a product safety statute or regulation is “deemed defective due to inadequate instructions or warnings” with respect to risks addressed by the statute or regulation.

As to claims based on unreasonably dangerous or defective products, the Act creates strict liability notwithstanding the developer’s exercise of “all possible care” or lack of privity with the user. The only exception to such strict liability is for harm caused by a substantial modification to the AI system.

Liability of Deployers

The liability of deployers of AI systems is addressed in a separate section of the Act. A deployer is defined as a person who uses or operates an AI product for their own personal or commercial use.

A deployer may be held liable as a developer if they substantially modify an AI system or intentionally misuse it contrary to its intended use. If a developer has not specified an intended use, one may be inferred based on the manner of distribution and the target market.

In instances where a developer is beyond the court’s jurisdiction or is insolvent, a deployer may be required to stand in for the developer and may be held liable to the same extent the developer would have been. The deployer may then pursue indemnification from the developer for the portion of damages, fees, and costs attributable to the developer’s fault.

Liability Limitations Prohibited

The AI LEAD Act prohibits developers from imposing contract provisions on deployers that waive any rights, proscribe any forum or procedure, or unreasonably limit liability under either the Act or State law. Such clauses are declared unenforceable. Likewise, any such clauses applicable to users of AI products are prohibited.

Enforcement

In addition to the private right of action for individuals, civil actions under the Act may be brought by the Attorney General of the United States and by State attorneys general. Relief may include not only damages, restitution, or other compensation, but also injunctions, civil penalties, and reasonable attorneys’ fees.

Statute of Limitations

The Act reflects a four-year statute of limitations. The limitations period would run from the date on which the claimant discovered or should have discovered the harm and its cause, and is tolled during any period of disability.

The December 2025 Wrinkle

The AI LEAD Act was referred to the Senate Committee on the Judiciary in late September. On December 11, however, President Trump issued an Executive Order entitled “Ensuring a National Policy Framework for Artificial Intelligence.” The EO decrees that “[i]t is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”

Pursuant to the EO, the Attorney General was directed to establish an AI Litigation Task Force within one month to challenge state AI laws deemed inconsistent with federal policy. The Order also directs the Secretary of Commerce to publish an evaluation of state AI laws within three months.

While the EO states that it is not intended to preempt State laws relating to child safety protections, it does not address liability for harm caused by AI system products under either the AI LEAD Act or state product liability laws. It is presently unclear whether the AI product liability regime reflected in the AI LEAD Act will be supported by the administration or deemed contrary to the policy of minimizing burdens on AI developers.
