
Unmasking Synthetic Media and Deepfakes: ILTACON Preview

Aug 5, 2025

Digital Forensic Insights from Level Legal’s David Greetham

 

AI’s media-generating capabilities continue to advance rapidly, both in terms of the quantity produced and the quality of the final product. This presents some particularly vexing issues for those of us in digital forensics and eDiscovery. Next week, I am heading to the Washington, D.C., area to discuss these issues at the International Legal Technology Association’s ILTACON 2025.

The session I’m speaking on, “Unmasking Synthetic Media and Deepfakes,” begins with an overview of how AI models create synthetic media. For example, image generation tools like DALL-E or Midjourney generally rely on diffusion models that convert random noise into recognizable images. To generate a new image, the AI predicts the most likely value for each pixel based on the millions (or more) of images and related text prompts it was trained on. The technology works similarly for AI-generated videos, with time adding a third dimension to the two spatial dimensions of a still image.
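For readers who want a concrete feel for the noise-to-image loop described above, here is a toy sketch of the reverse-diffusion idea. It is purely conceptual, not how any production model works: the predict_denoised() helper is a hypothetical stand-in for the trained neural network that would be conditioned on a text prompt, and the “image” is just a 16-value brightness gradient.

```python
import random

random.seed(0)

# The "image" the model is trying to produce: a simple 1-D brightness gradient.
TARGET = [i / 15 for i in range(16)]

def predict_denoised(noisy):
    # Hypothetical stand-in for the trained denoiser network. A real model
    # predicts the clean image from the noisy one; here we simply know it.
    return TARGET

x = [random.gauss(0, 1) for _ in range(16)]  # step 0: pure random noise

for _ in range(60):
    pred = predict_denoised(x)
    # Nudge every pixel a small step toward the predicted clean image.
    x = [xi + 0.1 * (pi - xi) for xi, pi in zip(x, pred)]

max_err = max(abs(xi - ti) for xi, ti in zip(x, TARGET))
print(f"max pixel error after denoising: {max_err:.4f}")
```

After enough small steps, the noise converges on the predicted image. Because the prediction is a statistical best guess rather than something checked against reality, this same mechanism is also the source of the telltale errors discussed below.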

As you’ve probably noticed, these models can produce convincing images and videos. They can also, however, include glaring errors or aberrations. They are notoriously bad at rendering text, for example, as well as fine details such as fingers or the leaves of a tree. The model can’t verify its result against an external source of truth because it’s creating a wholly new image that has never existed before. Instead, it offers its best guess at what the image should look like, leading to these errors.

That being said, AI models improve every day and can now produce images and videos that are nearly indistinguishable from reality. Take the “Will Smith eating spaghetti” test as an example: an AI-generated video of the actor enjoying some pasta went viral in 2023 for its bizarrely bad rendering. Just two years later, in 2025, the same prompt in Google’s Veo 3 tool produces almost documentary-like footage.

For digital forensics and eDiscovery practitioners, this creates a slew of obvious problems. How can you prove that a particular image, video, or even document is not AI-generated? For years, we have had AI detectors, but they have never been perfect, and as synthetic media gets closer to authentic media, that technology will need to improve to remain useful.

So what can we do? Well, that’s what we’ll be speaking about at ILTACON next week. Without giving too much of the presentation away, I can say that there is no silver bullet for determining whether something is AI-generated. We have techniques and advice to share with forensics and eDiscovery professionals, but the best practice is to stay abreast of AI technology as it evolves so you can navigate the minefield of synthetic media and deepfakes.

If you’ll be at ILTACON in National Harbor, MD, next week, please reach out so we can connect. And if you won’t be, but you’re interested in this conversation, please let me know.

David Greetham is a veteran digital forensics expert and the vice president of digital forensics at Level Legal. He has testified as an expert on numerous occasions, both nationally and internationally, and has frequently acted as a joint neutral expert in the areas of digital forensics analysis, information governance, and eDiscovery methodologies.

David has teaching and consulting experience at the White House, Harvard University, and New Scotland Yard. He serves as an executive advisor to EDRM and has been featured in CIO Magazine.
