Detecting deepfakes only makes them better

Those of you interested in my parenting posts: this one’s a bit technical. It’s not about parenting. But it’s important nonetheless. And the topic will affect children.

“Deepfakes” is a term for sophisticated fake media generated by AI. You can read a good overview on Wikipedia, and faked images, facial videos, and full-body videos are everywhere online. A famous one is this video putting words in Barack Obama’s mouth. If you’re ambitious, you can even create a deepfake yourself.

There is legitimate fear of this technology. One important implication is the increasing difficulty of distinguishing fake news from real news, with obvious consequences for the upcoming US presidential election in 2020. If you thought fake news was bad last election, wait until you can’t tell it’s fake. There are many other risks, too. For example, thieves recently used deepfake audio to impersonate a CEO and steal $243,000.

An AI artist strengthened by its harshest critic

Because of such risks, Facebook and Microsoft announced a contest to detect deepfakes. Facebook is contributing $10 million to this “Deepfake Detection Challenge.” But there’s a problem: because of the way deepfakes are created, detecting them will only make them better.

The reason is that deepfakes are built with a technology called generative adversarial networks (GANs). A GAN pits two neural networks against each other: a generator and a discriminator. The generator is like an artist: it tries to create something that looks real. The discriminator is like a critic: it tries to distinguish the real from the fake. Over time, the generator learns to fool the discriminator, and at that point it can usually fool humans too. The trained generator is the model you then use to create deepfakes.
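To make that concrete, here is a minimal GAN training loop in PyTorch. It’s a sketch on toy two-dimensional data, not a face model; the architectures, sizes, and data are all illustrative stand-ins.

```python
import torch
import torch.nn as nn

# Generator (the artist): turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator (the critic): scores how likely a sample is to be real.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + 2.0  # toy stand-in for real data
    fake = G(torch.randn(64, 8))           # generator's latest attempt

    # Critic's turn: push scores for real toward 1 and for fake toward 0.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Artist's turn: adjust the generator so the critic says "real".
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

After enough rounds of this, samples from the generator become hard for the discriminator, and in the full-scale version for people, to tell from the real thing.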

See the problem now? Since GANs are how you create deepfakes, better detection directly improves creation. You defeat a new detection method by incorporating it into your discriminator, then let the generator keep training until it fools the updated critic. At that point, the detection method no longer works. The harsher the critic, the better the artist.
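In code, folding a released detector into training might look like the continuation below. `published_detector` is a hypothetical stand-in for whatever detection model the challenge produces; the pattern is simply to treat it as a second, frozen critic.

```python
# Continuing the sketch above: a second, frozen critic for G to fool.
# `published_detector` is hypothetical; imagine loading the challenge's
# winning detection model here instead.
published_detector = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                                   nn.Linear(32, 1), nn.Sigmoid())
published_detector.requires_grad_(False)  # frozen: we fool it, never train it

for step in range(2000):
    fake = G(torch.randn(64, 8))
    # The generator now optimizes against both critics at once.
    g_loss = (bce(D(fake), torch.ones(64, 1)) +
              bce(published_detector(fake), torch.ones(64, 1)))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    # (D keeps training as before; omitted here for brevity.)
```

Nothing about the detector needs to change for this to work; publishing it is enough to hand the forger a training signal.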

Blockchain-based authentication offers an alternative

What’s the alternative? Some kind of authentication approach for media. It could be blockchain-based, like the startup Amber is building. We could even see a resurgence of trust in mainstream media. They may apply technology like Amber’s to guarantee that everything they show is real.
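To give a flavor of how hash-based authentication works, here is a minimal sketch. The `register`/`verify` helpers and the dict standing in for a tamper-evident ledger are my own illustration, not Amber’s actual API.

```python
import hashlib

ledger = {}  # fingerprint -> capture metadata (a blockchain in real life)

def register(media_bytes: bytes, source: str) -> str:
    """Record a fingerprint of the media at the moment of capture."""
    fingerprint = hashlib.sha256(media_bytes).hexdigest()
    ledger[fingerprint] = {"source": source}
    return fingerprint

def verify(media_bytes: bytes) -> bool:
    """Later, anyone can check the file against the registered fingerprints."""
    return hashlib.sha256(media_bytes).hexdigest() in ledger

original = b"...raw video bytes..."
register(original, source="newsroom camera 4")
print(verify(original))               # True: the footage is untouched
print(verify(original + b"tamper"))   # False: any edit breaks the match
```

Crucially, this flips the arms race: instead of trying to spot fakes after the fact, you prove provenance up front, and nothing about verifying hashes teaches a forger how to forge better.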

So why isn’t Facebook pursuing an authentication approach? I don’t want to speculate. Researchers better versed in deepfakes than I am may believe detection is solvable. But some of the motivation must come from Facebook’s reluctance to police its platform. It’s hard to be proactive and authenticate content that billions of people publish. It’s much easier for Facebook to unleash an algorithm, even if the victory is temporary.

But I predict that the detection challenge will have minimal or negative impact. We have 424 days to figure out a better alternative. Until then, don’t trust anything you see or hear until people you trust verify it.
