Are AI Disputes a New Type of Dispute — and Do They Require New Forms of Resolution?
- Rita Sheth
- Jun 18
- 5 min read
Artificial intelligence is no longer an experimental technology confined to research labs. AI systems are now shaping decisions about credit, insurance, recruitment, healthcare, transport, and much more. They generate content, automate tasks, and increasingly interact with the world in ways that carry serious legal and ethical implications.
As a result, we are beginning to see the first wave of litigation and regulatory action around AI — from copyright disputes over training data, to product liability claims arising from AI-driven systems, to challenges to bias and discrimination in automated decision-making.
But as these cases emerge, an important question arises: are AI disputes fundamentally a new type of dispute, or are they simply a new factual context for applying existing legal tools? And do we need to evolve our dispute resolution processes to deal with them?
The answer, in my view, is both yes and no. Many AI disputes will continue to be channelled through familiar legal categories — contract, tort, IP, regulatory enforcement. But beneath that surface, AI is introducing novel challenges that will stretch traditional processes and may require new approaches, hybrid mechanisms, and fresh thinking about how we resolve disputes in the AI age.
Here are some of the key reasons why.
Causation in the Age of the Black Box
One of the most profound challenges AI introduces is the opacity of causation.
Many modern AI systems, especially those based on deep learning, operate as “black boxes”. We can observe their inputs and outputs, but we often cannot fully explain why a given outcome was produced. If an AI-driven loan approval system denies an applicant, or an autonomous vehicle makes a sudden steering decision, traditional legal tests of causation and foreseeability may be hard to apply.
This poses major difficulties for both claimants and courts. Proving that a specific harm was caused by the AI system, rather than by user error, data flaws, or stochastic behaviour, can be enormously complex.
Addressing this may require new procedural tools:
Greater reliance on probabilistic or statistical evidence (a simple illustration follows this list).
Increased use of independent technical experts to interpret AI system behaviour.
Potential development of new doctrines around algorithmic accountability, where developers or deployers may bear responsibility even if precise causal chains are unknowable.
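To make the first of those tools concrete, here is a minimal sketch — hypothetical data, invented group labels and figures — of the kind of group-level statistic an expert might compute from a deployer's decision logs when the model's internal reasoning cannot be inspected: approval rates by applicant group and a simple disparity ratio.

```python
# Hypothetical decision log: (applicant_group, approved) pairs taken from an
# AI-driven loan approval system. All figures are illustrative, not real data.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rate(group: str) -> float:
    """Share of logged applications from `group` that were approved."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("group_a")
rate_b = approval_rate("group_b")

# A crude disparity measure: the ratio of the lower approval rate to the
# higher one (sometimes compared against the "four-fifths" rule of thumb).
disparity_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Approval rate A: {rate_a:.0%}, B: {rate_b:.0%}, ratio: {disparity_ratio:.2f}")
```

Real expert evidence would of course rest on much larger samples, controls for confounding factors, and significance testing; the point is simply that statistics of this kind can speak to causation and bias even where the model itself is a black box.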
The Problem of Evolving Systems
Unlike traditional software, many AI systems are not static. They evolve over time, whether through continuous learning, updates to training data, or changes in their operational environment.
This creates significant challenges for evidence preservation. By the time a dispute reaches court — often a year or more after the relevant events — the AI model in question may no longer exist in its original form.
Addressing this may require:
Real-time capture of model versions and training data.
New requirements for audit trails and version control in AI system development (a sketch of what such a record might capture appears below).
Possibly the creation of technical escrow mechanisms to preserve evidence for future litigation.
Without these innovations, parties may find themselves litigating about a moving target, with no clear ability to reconstruct what the AI system actually did at the time of the alleged harm.
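Purely as an illustration of what such an audit trail might capture — the field names and hashing scheme below are assumptions, not any established standard — a deployer could fingerprint the exact model weights and training-data manifest behind each automated decision and append them to an immutable log:

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(payload: bytes) -> str:
    """Stable SHA-256 fingerprint of a model artefact or data snapshot."""
    return hashlib.sha256(payload).hexdigest()

def audit_record(model_bytes: bytes, training_manifest: bytes,
                 inputs: dict, output: dict) -> str:
    """Serialise one decision, together with the exact model and data versions
    behind it, as a JSON line suitable for an append-only audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_sha256": fingerprint(model_bytes),
        "training_data_sha256": fingerprint(training_manifest),
        "inputs": inputs,
        "output": output,
    }
    return json.dumps(record, sort_keys=True)

# Illustrative use: log a single automated credit decision.
entry = audit_record(
    model_bytes=b"<serialised model weights>",
    training_manifest=b"<training data manifest>",
    inputs={"applicant_id": "A-123", "income": 42000},
    output={"decision": "declined", "score": 0.41},
)
print(entry)
```

In practice this would sit alongside proper model registries and data-versioning tooling, but even a record this simple would let the parties establish, long after the fact, which version of the system produced the contested output.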
Distributed Responsibility and Fragmented Supply Chains
AI systems are rarely developed and deployed by a single actor. Instead, they are the product of distributed supply chains:
One party develops the model architecture.
Another trains it on third-party data.
A different party integrates it into a broader software or hardware system.
Yet another entity deploys it to end users.
In such an ecosystem, assigning legal responsibility is far from straightforward. If harm arises, who is liable — the model developer, the data provider, the system integrator, the end user?
This complexity may demand:
More sophisticated contractual frameworks to allocate risk across the supply chain.
New approaches to apportionment of liability in multi-party disputes.
Potential evolution of strict liability doctrines for certain types of high-risk AI applications.
The Need for Cross-Disciplinary Expertise
AI disputes often require an understanding of not just law, but software engineering, data science, statistics, cybersecurity, and ethics. Very few traditional litigators or arbitrators possess fluency across all these domains.
This suggests we may need to evolve our dispute resolution processes to bring in appropriate expertise:
Greater use of hybrid panels in arbitration, combining legal and technical expertise.
Creation of specialist court lists for AI-related disputes.
Procedural rules encouraging early appointment of joint independent experts to guide the tribunal.
Without such innovations, there is a risk that complex technical disputes will be resolved by decision-makers who lack the tools to fully understand them.
Novel Types of Harm
AI also gives rise to types of harm that do not fit neatly into traditional legal categories. For example:
Algorithmic bias and discrimination can cause systemic harm to groups, rather than individualised injury.
AI-generated content (deepfakes, synthetic media) can inflict reputational and societal harm on a large scale.
Autonomous AI systems can generate novel tort scenarios, where harm arises from complex human-machine interactions.
Existing legal doctrines may not map cleanly onto these new harms. Courts and legislators may need to develop new liability frameworks, and dispute resolution processes may need to adapt to handle collective redress or public interest litigation around AI harms.
The Role of Regulation
The regulatory landscape for AI is also evolving rapidly. The EU’s AI Act, for example, will create a new layer of compliance obligations and enforcement mechanisms.
This suggests that some AI disputes may be better resolved through hybrid processes combining regulatory enforcement with traditional civil or commercial litigation.
We may also see the rise of sector-specific AI ombudsman schemes or alternative dispute resolution (ADR) mechanisms tailored to AI harms.
In short, the future of AI dispute resolution is likely to be pluralistic — blending litigation, arbitration, regulatory processes, and ADR, depending on the nature of the dispute.
Conclusion: Are AI Disputes a New Type of Dispute?
So — are AI disputes truly a new type of dispute?
In one sense, no. Many will still be framed as familiar causes of action under contract, tort, IP law, or regulatory regimes. But in another sense, yes: the underlying characteristics of AI disputes — their opacity, evolving nature, distributed responsibility, cross-disciplinary complexity, and novel harms — create challenges that will stretch existing processes and require us to adapt.
We are already seeing the legal profession respond. Some arbitration institutions are exploring specialist AI arbitration rules. Courts are considering the need for technical lists and enhanced expert processes. Law firms are building multi-disciplinary AI disputes teams combining litigators, technologists, and regulatory experts.
But much remains to be done. If we are to ensure that AI-related harms are addressed effectively — and that innovation is not stifled by legal uncertainty — we will need to continue evolving our dispute resolution toolkit.
AI disputes may not be a completely new genus of litigation. But they are certainly a new species — one that will demand new hybrid approaches, new expertise, and new thinking from all of us involved in the dispute resolution community.