How Legal Teams Can Build Defensible Validation Workflows for GenAI, TAR, and Linear Document Review
Much of the current discussion about eDiscovery and document review centers on choosing tools. Whether it’s which AI-enhanced legaltech solution to use, which type of AI provides the most value (GenAI, agentic AI, RAG, etc.), or whether AI is safe enough to use at all, opinions vary widely, with no consensus in sight.
Tools may dominate the discussion for good reason—the CEO of Anthropic saying AI could eliminate 50% of entry-level white-collar jobs, for example. However, especially in eDiscovery and document review, this conversation focuses on solving the wrong problem.
A defensible validation process matters more than whether you use GenAI, technology-assisted review (TAR), or linear review to identify responsive documents. In our recent webinar with EDRM, “We Mean Business: Turning Validation Metrics into Defensible Decisions,” Matt Mahon, Vice President of Client Solutions at Level Legal, and Stephanie Clerkin, Director of Litigation Support at Korein Tillery, explained why this is the case and how legal teams can ensure that they head into discovery with rock-solid validation strategies.
Here are three reasons that the validation strategy matters more than the mechanics of the review itself.
GenAI Is Also Technology-Assisted Review
The truth is, AI has been a standard part of eDiscovery for more than a decade. What many refer to as TAR 1.0, solidified as a review practice by the Da Silva Moore decision in 2012, uses machine learning to increase review efficiency. Machine learning differs from GenAI, but both fall under the umbrella of artificial intelligence.
As TAR evolved, eDiscovery professionals moved to continuous active learning (CAL, sometimes called TAR 2.0). CAL used the same underlying technology as TAR 1.0 but made the process even more efficient by learning continuously as humans reviewed documents, rather than relying on large static training sets.
GenAI is the next evolution of technology-assisted review. In many ways, it is simply the latest version of a workflow category that legal teams already know how to validate. Seen as part of this history of new technologies enhancing our ability to review documents and serve our clients, it becomes clear that the important question about GenAI in review is not “should we use it?” but “how do we use it defensibly?”
And the answer to that question is to treat it just as you would TAR 1.0 or TAR 2.0. One oft-cited example of what this looks like in the real world is a February 2025 filing in EEOC v. Tesla. In this filing, the parties proposed a discovery protocol that explicitly mentioned the use of GenAI in responsiveness review. The use of GenAI was predicated on a good-faith attempt by both parties to use “a statistically sound methodology to determine the recall rate and other measures of the effectiveness of the tool.”
This protocol was ultimately approved by the court and shows how the use of GenAI in review can be validated under the same Da Silva Moore standard established for TAR almost 15 years ago. This emphasizes the importance of validation over the specific GenAI tool used: If you can document your prompts, iterations, and design decisions in a way that meets that standard, you can create a defensible workflow for any tool.
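A “statistically sound methodology to determine the recall rate” typically means drawing a random, human-labeled validation sample and estimating how many responsive documents the tool actually found. As a minimal sketch (the function name and the sample counts below are illustrative, not drawn from the EEOC v. Tesla filing), recall can be reported as a point estimate with a normal-approximation confidence interval:

```python
import math
from statistics import NormalDist

def estimate_recall(tp: int, fn: int, confidence: float = 0.95) -> tuple[float, float, float]:
    """Estimate recall from a human-labeled validation sample.

    tp: responsive documents the tool correctly flagged (true positives)
    fn: responsive documents the tool missed (false negatives)
    Returns (point_estimate, ci_low, ci_high) via a normal approximation.
    """
    n = tp + fn                       # all responsive documents in the sample
    r = tp / n                        # recall point estimate
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # e.g. ~1.96 for 95%
    half = z * math.sqrt(r * (1 - r) / n)
    return r, max(0.0, r - half), min(1.0, r + half)

# Hypothetical sample: reviewers identified 400 responsive documents,
# and the tool had flagged 380 of them.
r, lo, hi = estimate_recall(tp=380, fn=20)
print(f"recall = {r:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

In negotiations, parties often focus on the interval’s lower bound rather than the point estimate, since it is the conservative figure.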
Human Review Is Still Risky
A common view is that “traditional” reviews are safe and AI reviews are risky. Many eDiscovery and review professionals will spot the issue with this thought process right away: Human review also introduces inconsistency, judgment variance, and quality risks. It requires validation and quality control just like any technology-assisted workflow.
The “human versus machine” debate often misses the real issue: controlling risk through defensible process design. One concern about GenAI tools is that they are non-deterministic, meaning you can give the tool the same prompt twice and get different answers each time. However, we’ve already solved this problem in linear review.
There will always be variance in human review, too. No one is perfect, and that variance presents issues similar to those posed by a GenAI tool that produces different outputs for the same prompt. If you’re working with a particularly AI-averse opposing party, one way to address this is to negotiate a more conservative statistical methodology for validating GenAI tools: for example, taking a larger sample from the population of AI-reviewed documents and/or agreeing to a stricter confidence level and margin of error.
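To see what a stricter confidence level and margin of error cost in practice, the standard normal-approximation formula for the sample size needed to estimate a proportion gives a rough sketch (the function name is illustrative; real protocols may use finite-population corrections or other refinements):

```python
import math
from statistics import NormalDist

def required_sample_size(confidence: float, margin: float, p: float = 0.5) -> int:
    """Minimum simple random sample size to estimate a proportion.

    p = 0.5 is the worst case (largest variance), a common conservative default.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(required_sample_size(0.95, 0.05))  # 385
print(required_sample_size(0.99, 0.02))  # 4147
```

Tightening from 95% confidence with a ±5% margin to 99% with ±2% grows the required sample more than tenfold, which is why error tolerance is worth negotiating explicitly rather than conceding by default.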
Planning Early Sets You Up for Success—GenAI or Not
Defensible review workflows rarely come together at the last minute. Everyone knows that the better prepared you are for a negotiation, deposition, or trial, the more likely you are to succeed. The same is true for meet-and-confers and other discussions about discovery protocols.
Legal teams that understand the data, likely discovery challenges, and review risks early can design stronger validation protocols before disputes emerge. So, what should you be thinking about? Use our checklist below to make sure you come to the discovery protocol questions with confidence and clarity.
eDiscovery Protocol Pre-Process Checklist
- Platform Alignment: Agree on the review platform and any GenAI/TAR tools to be used
- Define the Scope: Discuss custodians, date ranges, data sources, and file types
- Discuss Relevance: Estimate the richness (the proportion of responsive documents) of the document population
- Agree on Error Tolerance: Determine the confidence level and margin of error that makes the most sense for the specifics of your case
- Avoid Ambiguity: Define what constitutes a “responsive” document for the matter
- Confirm Methodology: Where required, ensure agreement with opposing counsel and the court on validation methodology
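For the richness item above, one common approach is to draw a simple random control sample, count the responsive hits, and report an interval rather than a bare percentage. The sketch below (with illustrative names and a hypothetical sample) uses the Wilson score interval, one defensible choice that behaves better than the plain normal approximation when richness is low:

```python
import math
from statistics import NormalDist

def richness_wilson(hits: int, n: int, confidence: float = 0.95) -> tuple[float, float, float]:
    """Estimate richness (prevalence of responsive documents) from a random sample.

    Returns (point_estimate, ci_low, ci_high) using the Wilson score interval.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    p = hits / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return p, center - half, center + half

# Hypothetical control sample: 50 responsive documents out of 500 sampled.
p, lo, hi = richness_wilson(hits=50, n=500)
print(f"richness = {p:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```

An agreed-upon richness estimate like this anchors both the error-tolerance discussion and later recall calculations, so it is worth documenting the sample design alongside the number itself.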
While there is a lot of nuance in the checklist above, ensuring you address each item will put you in a strong position for eDiscovery protocol discussions. If you’d like to learn more about the terms discussed above and how some of that nuance plays out in real-world scenarios, download the materials from our webinar, “We Mean Business: Turning Validation Metrics into Defensible Decisions.” You’ll find definitions and equations for key statistical concepts, as well as case studies that show how they play out in practice.
The core takeaway is straightforward: A defensible validation strategy gives legal teams more control over review quality, cost, and risk regardless of the technology involved. It’s something that can be easily overlooked but will increase your confidence in the mechanics of review across the board, whether you’re using linear review, TAR, GenAI, or technologies yet to be invented.
Questions about building a defensible validation strategy for your next matter? Contact our Vice President of Client Solutions, Matt Mahon.

