AI is having its agent moment. Across industries, people are no longer talking about summarising documents or drafting text. Now it’s all about “agents”: a term that is used loosely but really means an application designed to act with some level of autonomy. Instead of being told exactly what to do, agents decide which sources to consult, which steps to take, and sometimes even what outcome to pursue. (Thomson Reuters have some useful definitions here.)
Despite what you might read on certain vendors’ websites, there are no true agents yet. Current systems marketed as such still depend on human oversight, and it would be reckless to trust them with serious legal work. But progress is fast, and more capable agents are coming.
At Wexler, the question we’re asking is: are agents useful for litigation? Imagine an AI tool that goes beyond reviewing documents to designing litigation strategy and helping run a case through to trial. For law firms wanting to control costs and improve efficiency, that might sound appealing - or at least it’s what legal AI companies think sounds appealing. But before chasing that vision, it’s important to ask whether agents can meet the demands of contentious work.
Why autonomy and accountability clash in litigation
An open-ended agent works with minimal constraints. It can break down a task, choose its own path, and adapt as it goes. In coding, that flexibility is great. Coding agents can generate, test, and re-run snippets of code, gradually improving their outputs with limited risk. If a function fails a test, the agent rewrites it and tries again. The stakes are low.
Litigation is different. It isn’t a clean optimisation problem but a contested process, shaped by conflicting evidence, live witnesses, and procedural rules that can shift a case’s trajectory. When mapping contradictions between a witness statement and the record, a single inconsistency can shape cross-examination. Missing that inconsistency, even once, can change the outcome.
That’s why open-ended agents aren’t the right fit here. If one invents a step, misinterprets a fact, or strays into error, accountability still lies with the lawyer of record. The agent doesn’t appear in court. You do.
We think that will stay the same, no matter how good agents become. The role of legal AI is not to replace lawyers but to surface facts, find connections in the record, and sort through complex materials in a way that enhances, rather than substitutes for, their judgement.
Litigation also demands auditability. Lawyers have to show how conclusions were reached, what sources were relied upon, and why each step was taken. A black box can’t be defended to a judge or explained to a client.
Structured workflows, human oversight
So, what works instead? We think a better model is a structured workflow with human oversight. In this setup, AI accelerates the heavy lifting but within clear parameters. It extracts facts, reviews documents, and assembles chronologies. But, crucially, each step is open to inspection. Every fact can be traced back to its original place in the record.
This matters because AI excels at scale and pattern recognition, but still can’t weigh credibility, interpret motive, or build strategy. Those remain human responsibilities. The lawyer decides whether a contradiction is fatal or immaterial, whether a timeline supports a theory or undermines it.
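As a rough illustration of what “every fact can be traced back to its original place in the record” can mean in practice, here is a minimal sketch of a fact record that always carries its own citation, with the lawyer’s sign-off kept as an explicit, separate step. It is a toy example built on our own assumptions, not a description of how Wexler’s platform is implemented; the field names and the sample document reference are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Citation:
    """Pointer to the exact place in the record a fact came from."""
    document_id: str   # e.g. a disclosure or witness statement reference
    page: int
    sentence: int      # sentence index within the page

@dataclass(frozen=True)
class Fact:
    """A single extracted fact that always travels with its source."""
    text: str
    event_date: date
    citation: Citation
    reviewed_by_lawyer: bool = False  # human judgement stays an explicit step

def build_chronology(facts: list[Fact]) -> list[Fact]:
    """Order facts by date; every entry can be followed back to a sentence."""
    return sorted(facts, key=lambda f: f.event_date)

# Hypothetical fact extracted from a fictional witness statement.
example = Fact(
    text="The payment was authorised on the same day the contract was signed.",
    event_date=date(2021, 3, 14),
    citation=Citation(document_id="WS-003", page=7, sentence=12),
)
```

The point of a structure like this is that the audit trail is part of the data itself: a fact without a citation simply cannot be created, and deciding what the fact means is left to the person recorded as the reviewer.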
Specialisation over generalisation
General-purpose legal AI platforms help with a wide range of tasks across practice areas, from reviewing contracts to generating compliance reports and assisting with legal research. This breadth sometimes comes at the expense of depth.
Litigation rewards depth. It is an area where accuracy, auditability, and context matter more than versatility. We think that’s why specialised systems are gaining ground. Surveys show most lawyers prefer AI built for specific tasks such as due diligence, disclosure, and research, because these tools align with established workflows and leave audit trails that withstand scrutiny. In disputes, the test isn’t how much a system can attempt, but how reliably it supports the case.
This principle underpins Wexler. We focus solely on litigation. Our fact intelligence platform processes large volumes of documents, extracts and links facts, builds chronologies, and identifies contradictions. Our assistant, KiM, can draft applications, generate reports, and surface red flags. It operates only from the documents provided, and always with citations.
Bounded agents and the case for controlled autonomy
Is there a contradiction here? Wexler incorporates agentic elements, but we think open-ended agents are the wrong framework for litigation. How do these positions fit together?
The distinction is between open-ended autonomy and bounded autonomy. Open-ended agents set their own course, which doesn’t work in litigation. The term “bounded agents” comes from computer science, where it describes agents that operate within defined limits. In legal AI, it usefully captures assistants that handle narrow, auditable tasks under human oversight.
KiM operates within a controlled workflow and has limited autonomy. It may choose which documents to surface first, which tool to apply, or which database to check, but only inside a predefined structure. Every action is traceable, every output verifiable.
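To make “limited autonomy inside a predefined structure” concrete, here is a minimal sketch of a bounded step: the agent may choose which tool to call and in what order, but only from a fixed whitelist, and every call is written to an audit log. The tool names and the workflow are illustrative assumptions, not how KiM is actually built.

```python
from typing import Callable

# The whitelist defines the outer bounds; nothing outside it is reachable.
ALLOWED_TOOLS: dict[str, Callable[[str], str]] = {
    "search_disclosure": lambda query: f"documents matching '{query}'",
    "extract_chronology": lambda doc_id: f"dated facts from {doc_id}",
    "find_contradictions": lambda doc_id: f"inconsistencies in {doc_id}",
}

audit_log: list[dict] = []  # every action is recorded for later inspection

def bounded_step(tool_name: str, argument: str) -> str:
    """Run one agent-chosen step, but only if it stays inside the whitelist."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"'{tool_name}' is outside the agreed workflow")
    result = ALLOWED_TOOLS[tool_name](argument)
    audit_log.append({"tool": tool_name, "input": argument, "output": result})
    return result

# The agent decides the order of steps; the log makes each decision verifiable.
bounded_step("search_disclosure", "payments referenced in the witness statement")
bounded_step("find_contradictions", "WS-003")
```

The design choice here is that the boundary and the audit trail live in the workflow itself, not in the agent’s own judgement.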
Another difference is proactivity. KiM can raise red flags, suggest follow-ups, or propose timelines. But it never claims authority over what those findings mean. It highlights issues; the lawyer decides which of them matter.
In other words, Wexler accepts agentic elements for what they are useful for (bounded, verifiable tasks) while rejecting the idea of agents as freewheeling strategists in litigation.
Managing risk
We already know the risks of over-reliance on unstructured AI. Courts in the US famously sanctioned lawyers for submitting AI-generated filings that cited cases that did not exist. Regulators are scrutinising how firms use AI. Clients now ask not only for efficiency but for transparency in how AI outputs are produced.
In this environment, black-box agents are liabilities. If a conclusion can’t be explained, it can’t be defended.
Wexler was designed for verification. Every fact is traceable down to the sentence. In live hearings, it can check testimony against the documentary record in real time. The lawyer then decides how to use that information.
The limits of autonomy
It’s tempting for us on this side of the legal AI table to imagine a fully autonomous litigation agent, but this isn’t realistic. Machines can process volume, find anomalies, and suggest connections. They can’t take responsibility for strategy. Litigation is about persuasion, credibility, and judgment. Those are human functions.
The right role for AI is to support strategy, not dictate it. It can build foundations and highlight issues, but it cannot decide what matters most. That remains with the lawyer, who is accountable to their client and the court.
Looking ahead
The legal AI agent landscape will keep evolving and improving. Orchestrator platforms may emerge to coordinate specialised tools, and limited autonomy may prove useful in structured areas such as e-discovery. Looking further ahead, lawyers may work with multiple intelligent agents. To avoid open-ended chaos, these will need to take the form of narrow assistants, each focused on a defined task, each operating within a framework of verification, and each auditable.
The contradiction between rejecting agents and using agentic elements is resolved by being precise about definitions. Open-ended agents are not viable in disputes. Bounded agents, designed for specific tasks and always open to verification, are.
If you'd like to see how Wexler is thinking about agents, workflows and AI in general, you can book a demo here: https://calendly.com/wexler-ai/30min