Hallucinations shouldn’t stop lawyers using AI
Hallucinations are real. Anyone who’s spent time with generative AI has seen answers that sound confident and turn out to be invented, incomplete, or loosely connected to the source material.
That’s a problem, especially when AI is used for legal work, but it doesn’t mean AI isn’t worth using. The productivity gains are too large to ignore. Walking away from AI because hallucinations exist would be like refusing to use Google because not every result is relevant.
What hallucinations do change is the bar for AI tools used in legal contexts.
In litigation, the question is never just whether something sounds right. It’s what supports it, what contradicts it, and whether anything important has been missed. The risk is highest where answers get reused: witness preparation, deposition summaries, internal case updates, and anything that feeds directly into what gets said in court.
An answer that attributes a key email to the wrong executive, or places a document before a board meeting when it was created weeks later, does more harm than no answer at all. The same applies to testimony. When a witness’s qualified answer is restated as definitive, or a limitation drops out of a summary, the evidentiary picture changes.
General-purpose AI should make lawyers uneasy. It’s designed to produce the most likely answer, not to preserve the structure of an evidentiary record.
When the underlying signal is weak or incomplete, the model keeps generating fluent language anyway. What comes out can sound coherent while having nothing to do with what the evidence actually supports. Damien Charlotin, a legal researcher in Paris, has compiled hundreds of cases worldwide involving hallucinated content.
So, in a world where hallucinations are far from solved, how do you get value from AI in litigation? Not by using tools that force every answer to be treated as suspect. You get value by using tools that stay anchored to the record, rather than optimising for probability.
How Wexler avoids the probability trap
It’s one of those claims you’re not meant to make, but: Wexler isn’t prone to hallucination, or more precisely fabrication, in the same way many other legal AI tools are.
Rather than asking a model to paraphrase documents and hoping the output stays faithful, Wexler restructures the material first. Documents are parsed into a structured factual layer of events, entities, relationships, timelines, and contradictions. Every extracted fact is tied to its exact source sentence, and only then does analysis begin. If a sentence can’t be traced to a specific line in the documents, it can’t appear in the output.
Verification becomes a quick check against a visible source sentence, not an investigation into invented language. Many tools bolt citations on after an answer has been generated; Wexler prevents unsupported factual assertions from being generated in the first place.
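The grounding idea described above can be sketched as a simple filter: an extracted fact only survives if the sentence it cites is actually present in the source document. The names and structures here are hypothetical illustrations of the technique, not Wexler's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    claim: str             # the factual assertion
    source_id: str         # identifier of the cited document
    source_sentence: str   # exact sentence the claim was extracted from

def grounded(facts: list[Fact], documents: dict[str, str]) -> list[Fact]:
    """Keep only facts whose cited sentence appears verbatim in the
    cited document. Anything untraceable is dropped, not paraphrased.
    A hypothetical sketch of source-anchored extraction."""
    kept = []
    for fact in facts:
        doc_text = documents.get(fact.source_id, "")
        if fact.source_sentence and fact.source_sentence in doc_text:
            kept.append(fact)
    return kept

# Illustrative data: one traceable fact, one fabricated one.
docs = {"email_0042": "On 3 March, the CFO approved the transfer."}
facts = [
    Fact("The CFO approved the transfer on 3 March.",
         "email_0042", "On 3 March, the CFO approved the transfer."),
    Fact("The CEO approved the transfer.",  # cites a sentence the document does not contain
         "email_0042", "The CEO signed off on the deal."),
]
verified = grounded(facts, docs)
# Only the first fact survives; the unsupported one never reaches the output.
```

Checking an answer built this way means glancing at the quoted source sentence, rather than hunting through the record to diagnose invented language.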
Wexler hasn’t been designed to perform every legal task. It doesn’t speculate across case law. It focuses on factual work that can be verified with precision at scale. Accuracy comes first, and everything else follows from that.
Lawyers should always verify sources, and that obligation doesn’t disappear with AI. But there’s a difference between confirming a grounded answer and diagnosing a fabricated one.
Generative AI has countless exciting use cases beyond litigation. When used for litigation, however, the efficiency case collapses if every output must be treated as suspect. But if every output is linked to a fact in the record, hallucinations stop being a reason to avoid AI and become a reason to use it properly.
