Digital tools that allow the dark side of individuals to emerge will expand and are impossible to stop.
2 December 2024 (Los Angeles, CA) — As it turns out, deepfakes are a difficult problem to contain. Wow. Who knew?
As victims from celebrities to schoolchildren multiply exponentially, USA Today asks, “Can Legislation Combat the Surge of Non-Consensual Deepfake Porn?”
Journalist Dana Taylor interviewed UCLA’s John Villasenor on the subject. To us, the answer is simple: Absolutely not. As with any technology, regulation is reactive while bad actors are proactive. Villasenor seems to agree. He states:
“It’s sort of an arms race, and the defense is always sort of a few steps behind the offense, right? In other words that you make a detection tool that, let’s say, is good at detecting today’s deepfakes, but then tomorrow somebody has a new deepfake creation technology that is even better and it can fool the current detection technology. And so then you update your detection technology so it can detect the new deepfake technology, but then the deepfake technology evolves again.”
Exactly. So if governments are powerless to stop this horror, what can? Perhaps big firms will fight tech with tech. The professor then dreams (hallucinates?):
“So I think the longer term solution would have to be automated technologies that are used and hopefully run by the people who run the servers where these are hosted. Because I think any reputable, for example, social media company would not want this kind of content on their own site. So they have it within their control to develop technologies that can detect and automatically filter some of this stuff out. And I think that would go a long way towards mitigating it.”
Sure. But what can be done while we wait on big tech to solve the problem it unleashed? Individual responsibility, baby:
“I certainly think it’s good for everybody, and particularly young people these days, to be just really aware of knowing how to use the internet responsibly and being careful about the kinds of images that they share on the internet. Even images that are sort of maybe not crossing the line into being sort of specifically explicit, but are close enough to it that it wouldn’t be as hard to modify. Being aware of that kind of thing as well.”
Great, thanks. Admitting he may sound naive, Villasenor also envisions education to the (partial) rescue:
“There’s some bad actors that are never going to stop being bad actors, but there’s some fraction of people who, I think, with some education would perhaps be less likely to engage in creating or disseminating these sorts of videos.”
Our view is that digital tools allow the dark side of individuals to emerge and will just continue to expand, unabated. It is the nature of technology.
We’ve been writing about this for years. One of the first pieces we flagged, way back in 2019 (is 2019 way back?), was an article entitled “CEO of Anti-Deepfake Software Says His Job Is Ultimately a Losing Battle”. That CEO described what amounts to an unsolvable problem; manipulated content may belong in the category of the Millennium Prize Problems, just more complicated. He noted:
“Ultimately I think it’s a losing battle. The whole nature of this technology is built as an adversarial network where one tries to create a fake and the other tries to detect a fake. The core component is trying to get machine learning to improve all the time. Ultimately it will circumvent detection tools”.
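The dynamic the CEO describes can be caricatured in a few lines of Python. Everything below is a toy of our own invention, not any real deepfake or detection system: the “detector” learns a threshold separating real samples from last generation’s fakes, and the “faker” then adapts to slip past exactly that threshold, forcing the defense to retrain, forever.

```python
import random

def train_detector(real, fakes):
    """Learn a single threshold separating real samples from known fakes."""
    # Real samples cluster high, current fakes cluster lower,
    # so split halfway between the two means.
    mean_real = sum(real) / len(real)
    mean_fake = sum(fakes) / len(fakes)
    return (mean_real + mean_fake) / 2

def detector_flags(threshold, sample):
    """The detector flags anything below its learned threshold as fake."""
    return sample < threshold

def adapt_faker(threshold):
    """Next-generation fakes shift to sit just above the learned threshold."""
    return [threshold + random.uniform(0.01, 0.1) for _ in range(50)]

random.seed(0)
real = [random.gauss(1.0, 0.05) for _ in range(50)]   # "authentic" samples
fakes = [random.gauss(0.0, 0.05) for _ in range(50)]  # first-generation fakes

threshold = train_detector(real, fakes)  # today's detection tool
for generation in range(3):
    fakes = adapt_faker(threshold)       # tomorrow's fakes evade it
    caught = sum(detector_flags(threshold, f) for f in fakes) / len(fakes)
    print(f"gen {generation}: old detector catches {caught:.0%} of new fakes")
    threshold = train_detector(real, fakes)  # defense retrains; cycle repeats
```

Each round, yesterday’s detector catches none of today’s fakes, the detector retrains, and the fakes drift ever closer to the real distribution, which is the “arms race” in miniature.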
Alas, neither of those articles included this observation by Jorge Luis Borges, made in a 1967 interview with the Paris Review:
“Really, nobody knows whether the world is realistic or fantastic, that is to say, whether the world is a natural process or whether it is a kind of dream, a dream that we may or may not share with others, create for others, unrestricted”.
Considering the almost daily advances in AI technology against the far slower pace of legislation, it’s easy to see why this battle is lost. And given how the use and development of generative AI has ballooned in recent months, it will simply outpace any potential regulation.
It’s just the DNA of technology. We are well past building safeguards against malicious or unethical uses.