
Digital Forensic Insights from Level Legal’s David Greetham
AI’s media-generating capabilities continue to advance rapidly, both in terms of the quantity produced and the quality of the final product. This presents some particularly vexing issues for those of us in digital forensics and eDiscovery. Next week, I am heading to the Washington, D.C., area to discuss these issues at the International Legal Technology Association’s ILTACON 2025.
The session I’m speaking on, “Unmasking Synthetic Media and Deepfakes,” begins with an overview of how AI models create synthetic media. For example, image-generation tools like DALL-E or Midjourney generally rely on diffusion models, which learn to turn random noise into a recognizable image step by step. To generate a new image, the model predicts the most likely value for each pixel based on the millions (or more) of images and paired text captions it was trained on. The approach is similar for AI-generated video, except the model must also predict consistently across frames, adding a time dimension to the 2D image.
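To make the diffusion idea concrete, here is a minimal toy sketch (my illustration, not anything from the ILTACON session): the forward process mixes an "image" with noise over many steps, and the reverse process peels the noise back off. A real model like those behind DALL-E or Midjourney trains a neural network to predict the noise; here, purely for demonstration, we use an oracle that already knows it.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50                                  # number of diffusion steps
betas = np.linspace(1e-4, 0.05, T)     # noise schedule (toy values)
alpha_bars = np.cumprod(1.0 - betas)   # cumulative signal retention

x0 = rng.uniform(-1, 1, size=(8, 8))   # stand-in for a tiny grayscale image

def noise_to_step(x0, t, eps):
    """Forward process: jump straight to step t by mixing image and noise."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1 - alpha_bars[t]) * eps

def denoise(x_t, t, eps_pred):
    """Reverse process: given a noise prediction, estimate the clean image."""
    return (x_t - np.sqrt(1 - alpha_bars[t]) * eps_pred) / np.sqrt(alpha_bars[t])

eps = rng.standard_normal(x0.shape)
x_T = noise_to_step(x0, T - 1, eps)    # nearly pure noise at the final step

# With a perfect noise prediction (our "oracle"), the image is recovered.
x0_hat = denoise(x_T, T - 1, eps)
print(np.max(np.abs(x0_hat - x0)))     # essentially zero (floating-point error)
```

The hard part, of course, is the noise prediction itself: a trained model has no oracle, only what it has learned from its training data, which is exactly why its "best guess" can go wrong.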
As you’ve probably noticed, these models can produce convincing images and videos. However, their outputs can include glaring errors or aberrations. They are notoriously bad at rendering text, for example, as well as fine details such as fingers or the leaves of a tree. The model can’t verify its result against an external source of truth because it’s creating a wholly new image that never existed before. Instead, it offers its best guess at what the image should look like, which is where these errors come from.
That being said, AI models improve every day and can now produce images and videos that are nearly indistinguishable from reality. Consider the “Will Smith Eating Spaghetti” test: an AI-generated video of the actor enjoying some pasta went viral in 2023 for its bizarrely bad rendering. Just two years later, in 2025, the same prompt in Google’s Veo 3 produces almost documentary-quality footage.
For digital forensics and eDiscovery practitioners, this creates a slew of obvious problems. How can you prove that a particular image, video, or even document is not AI-generated? For years, we have had AI detectors, but they have never been perfect, and as synthetic media gets closer to authentic media, that technology will need to improve to remain useful.
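As one trivially simple illustration of the gap between automated triage and real forensic work, here is a naive sketch (my own example, not a Level Legal method or anything from the session): scanning a file's raw bytes for embedded generator strings that some tools leave in metadata. It catches only the laziest cases, since metadata is easily stripped or forged, which is precisely why detection must keep evolving.

```python
import os
import tempfile

# Marker strings some generators embed in metadata (illustrative, not exhaustive).
GENERATOR_MARKERS = [b"midjourney", b"dall-e", b"stable diffusion", b"c2pa"]

def flag_generator_strings(path):
    """Return any known generator markers found in the file's raw bytes.

    A triage heuristic only: absence of a marker proves nothing, and
    presence is easily faked. Real analysis goes much deeper.
    """
    with open(path, "rb") as f:
        data = f.read().lower()
    return [m.decode() for m in GENERATOR_MARKERS if m in data]

# Demo on a synthetic file containing a fake metadata string.
with tempfile.NamedTemporaryFile(delete=False, suffix=".jpg") as f:
    f.write(b"\xff\xd8\xff\xe1 ...Exif... Software: Midjourney v6 ...")
    demo_path = f.name

hits = flag_generator_strings(demo_path)
os.unlink(demo_path)
print(hits)  # ['midjourney']
```

The limitations of a check like this are the point: once the easy signals disappear, practitioners are left weighing provenance, context, and subtler artifacts.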
So what can we do? Well, that’s what we’ll be speaking about at ILTACON next week. Without giving too much of the presentation away, I can say that there is no silver bullet for determining whether something is AI-generated. We have some techniques and advice to share for forensics and eDiscovery professionals, but the best practice is to stay abreast of AI technology and its evolution to be aware of the minefield that is synthetic media and deepfakes.
If you’ll be at ILTACON in National Harbor, MD, next week, please reach out so we can connect. And if you won’t be, but you’re interested in this conversation, please let me know.
David Greetham is a veteran digital forensics expert and the vice president of digital forensics at Level Legal. He has testified as an expert on numerous occasions, both nationally and internationally, and has frequently served as a joint neutral expert, particularly in digital forensic analysis, information governance, and eDiscovery methodologies.
David has teaching and consulting experience at the White House, Harvard University, and New Scotland Yard. He serves as an executive advisor to EDRM and has been featured in CIO Magazine.