Detecting Deep Fake Technology
Deep fake technology is a startling innovation that has already fooled millions of people around the globe.
Deep fakes are doctored videos that leverage machine learning and artificial intelligence technologies.
From fabricated videos of Vladimir Putin to ones of Mark Zuckerberg, deep fake media is very hard to detect and can make its subject appear to say or do whatever the programmer desires.
Historically, deep fakes have required large amounts of data, but some new deep fake techniques can work from a single picture.
The fight to combat this technology is just beginning and will inevitably become a global effort. Technology makes our world a better and safer place, but, as with everything in life, there is always a downside to progress.
When first considering how to combat this problem, we must accept that there are some major potential barriers to deep fake detection.
Deep fake detection is really an issue of data.
To detect deep fakes, you need more expansive, higher-quality data than the people generating them have.
In the past, there were useful "tricks" like looking for the absence of blinking, but the technology has gotten much smarter, and it's not that easy anymore.
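To give a sense of how simple that old trick was, here is a minimal, hypothetical sketch of the blink heuristic in Python. It assumes eye landmarks have already been located in each frame by some upstream tool (e.g. dlib or MediaPipe; that step is not shown), uses the classic eye-aspect-ratio (EAR) measure, and treats the threshold values as illustrative assumptions rather than tuned constants.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmarks around one eye, in the standard
    p1..p6 ordering (corners at p1/p4, lids at p2/p3 and p5/p6)."""
    # Eye closes -> vertical distances collapse -> EAR drops toward zero.
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ears, closed_thresh=0.2, min_frames=2):
    """Count dips in a per-frame EAR series that stay below the threshold
    for at least `min_frames` consecutive frames. Both parameters are
    illustrative assumptions, not calibrated values."""
    blinks, run = 0, 0
    for ear in ears:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks + (1 if run >= min_frames else 0)
```

A video of a talking head with a blink count of zero over several minutes would have been a red flag under this heuristic; in practice one would also smooth the EAR series and calibrate the threshold per subject.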
The latest detection techniques instead key on small quirks that are unique to individuals, such as the way a specific person raises their eyebrows when they pronounce certain words.
Not only are such quirks incredibly hard to detect and capture in an algorithm, but there is no reason deep fake generators will not eventually incorporate the same signals into their output, rendering the detection technique obsolete.
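To make the mannerism idea concrete, here is a hypothetical sketch of how such a person-specific detector might be framed. It assumes some upstream system has already extracted a per-frame feature vector of mannerism measurements (e.g. head pose and facial expression intensities; that extraction is not shown), and it uses scikit-learn's OneClassSVM on synthetic placeholder numbers purely for illustration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Placeholder features: rows are frames, columns are mannerism measurements.
# Real features would come from a facial-analysis pipeline, not a RNG.
genuine_frames = rng.normal(loc=0.0, scale=1.0, size=(500, 16))
suspect_frames = rng.normal(loc=0.8, scale=1.2, size=(200, 16))

# Learn the envelope of this one person's normal on-camera behavior.
model = OneClassSVM(nu=0.05, gamma="scale").fit(genuine_frames)

# Score the suspect video: the fraction of frames deemed abnormal.
outlier_fraction = np.mean(model.predict(suspect_frames) == -1)
print(f"{outlier_fraction:.0%} of frames fall outside this person's mannerisms")
```

The design choice is the key point: the model is fit only on genuine footage of one person, so anything outside that person's behavioral envelope gets flagged, which is also why a generator trained on the same footage could eventually learn to stay inside it.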
Again, since this is fundamentally a data problem, the solution will likely come from large companies like Google and Facebook that have access to billions upon billions of photos.
Acquiring comparable data is expensive, if not impossible, and storing and processing it at scale is probably even more so.
This poses significant restrictions on those in academia wishing to take on this problem.
It's unfortunate that these companies are even allowed to have so much of our data in the first place (often collected without consent), but I digress.
Notably, deep fakes will likely become undetectable (at least with any reasonable accuracy) in a very short time.
The photo below demonstrates the progression of state-of-the-art facial generation methods from 2014 to 2018.
While the photos generated in 2014 were terrible, the photos generated in 2018 are practically indistinguishable from real people.
Four more years of progress will likely yield algorithms that capture every element of what makes a photo look legitimate. Once that happens, detection will cease to be possible, and the problem will become a legislative one rather than a technological one.
The question will shift from "how do we detect these?" to "should this technology be legal in the first place?"
In summary, deep fake detection is much like a bug/patch relationship, with the deep fake playing the bug and detection playing the patch.
However, the analogy breaks down in one important respect: bugs are just that, bugs.
Though some may require a lot of work, there is no "perfect" bug that can't be patched or fixed eventually. In this case, by contrast, there will eventually be a "perfect" deep fake that is impossible to detect.
The very way deep fakes are built, via the co-evolution of a generator and a discriminator, makes clear why they will eventually be undetectable.
Any time a characteristic that makes deep fakes detectable is published, it can easily be incorporated into better deep fake models. And there is ultimately a ceiling on how "real" an image can be: once a generated image is statistically indistinguishable from a genuine photograph, a detector has nothing left to find.
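To see why, consider a minimal sketch of the generator/discriminator loop itself. The code below is an illustrative toy GAN in PyTorch on two-dimensional data rather than images; the architectures, learning rates, and the stand-in "real" distribution are all assumptions made for brevity.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))

# Discriminator: outputs a logit for how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in "real" data: a shifted Gaussian instead of actual photos.
    return torch.randn(n, data_dim) + 3.0

for step in range(1000):
    real = real_batch()
    fake = G(torch.randn(len(real), latent_dim))

    # Discriminator update: learn *any* feature that separates real from fake.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(len(real), 1))
              + loss_fn(D(fake.detach()), torch.zeros(len(real), 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: erase whatever cue the discriminator just found.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(len(fake), 1))
    g_loss.backward()
    opt_g.step()
```

The structural point is in the two loss terms: whatever feature the discriminator learns to use against fakes immediately becomes, through the generator's loss, the very feature the generator is trained to erase. A published detection cue is just a discriminator feature handed to the generator's side for free.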
Written by Daniel M. DiPietro & Edited by Michael Ding & Alexander Fleiss