It’s taken years of awkward AI-driven conversations with chatbots and oddly worded clickbait articles for generative AI to advance to where it is today. While it’s clear that AI will continue to improve, right now it isn’t always easy to distinguish AI-generated content from content created by humans. And nowhere should alarm bells sound louder than with deepfakes. AI-generated images, video, and audio present a significant risk to the integrity of audiovisual media and public trust. If people can’t believe their eyes or ears, what can they believe?
AI expert and esteemed author Hassan Taher shared his take on deepfakes, the growing risks, and actionable steps that must be taken to protect the integrity of audiovisual media and society as a whole.
What Are Deepfakes?
Hassan Taher explained: “Deepfakes [are] artificially generated images or videos that convincingly replace a person’s likeness and voice.” In effect, a deepfake turns one person into another as far as the audience is concerned. Given the power of video to sway public opinion, and with a presidential race approaching, many worry that bad actors could use deepfakes to show people doing and saying things they never did, creating scandal and doubt. Government officials, large tech companies, researchers, advocates, and AI experts like Hassan Taher are working to develop solutions.
The most notable early deepfakes hit the scene around 2017, when deep neural networks were used to superimpose celebrity faces onto pornographic videos. The technology has since been used to create convincing imagery of explosions at U.S. government buildings and of Donald Trump saying things he didn’t say. Some images went viral and even briefly impacted U.S. financial markets.
“While some companies have used visible watermarks or textual metadata to indicate the artificial origin of an image, these measures are often easily defeated through cropping or editing,” Hassan Taher warned.
More recent deepfake techniques go far beyond face-swapping. Puppet-master deepfakes work like a digital puppet show: a performer’s movements and speech drive a computer-generated likeness of a real person, making it look and sound as though that person did or said something they never did.
Deepfakes: Not Necessarily Bad or Dangerous
Hassan Taher does emphasize that not all deepfakes are bad. This is a fascinating and potentially useful technology in the right hands. But, as the old saying goes, with great power comes great responsibility. The Global Alliance for Responsible Media defines misinformation as “verifiably false or willfully misleading content that is directly connected to user or societal harm.” Misinformation is the enemy, not advancing technology.
Voter Perception Tampering a Clear Risk
The stakes are exceptionally high when it comes to politics. Deepfakes have already infiltrated campaign materials, exemplified by Florida Gov. Ron DeSantis incorporating manipulated images into a campaign video. It’s easy to see the temptation, especially in politics.
“As the 2024 U.S. presidential election approaches, concern is escalating that deepfakes could be weaponized to distribute disinformation, thereby jeopardizing the electoral process and democracy itself,” wrote Hassan Taher.
Concerns linger that deepfakes could be used to depict fictitious polling-station closures, incite public fear through fabricated portrayals of violence, or serve foreign entities seeking to meddle in other countries’ elections, including those of the U.S.
Brands Face Damage
The commercial and retail spaces also have cause to be concerned. Brands face trust and financial damage when their ads appear alongside deepfake videos that spread misinformation. At the same time, competitors could use deepfakes to make it appear that company representatives are doing or saying objectionable things.
What’s more, brands must consider how their own customers might use fake videos to promote the brand or discredit competitors; staying silent would amount to complicity, so companies must monitor what is published about them and about their rivals. This may open up new career opportunities, as advancing technology typically does.
Corporate Accountability
The business world recognizes that deepfakes could hit the bottom line, and those in a position to act are doing so. Google, for example, has unveiled SynthID.
“SynthID embeds an invisible digital watermark in the AI-generated image, which can be detected by specialized computer programs but remains invisible to the human eye,” noted Hassan Taher. “Google asserts that this watermarking technology is resilient against tampering, and thus could serve as an essential mechanism for curbing the spread of fraudulent images.”
But of course, this only works if creators use tools that apply the watermark in the first place. Furthermore, SynthID is available through Google’s paid image-generation services, and people are free to choose whichever tools they like.
Additionally, those who really want to cause harm will find a way to remove watermarks, making this a problem that won’t have an easy “let’s create an AI detector” solution.
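To make the watermarking idea concrete, here is a minimal, purely illustrative Python sketch of embedding and reading back an invisible watermark using a naive least-significant-bit (LSB) scheme. This is emphatically not how SynthID works (Google has not disclosed its method), and the function names and random “image” are hypothetical; the cropping check at the end shows exactly why such simple schemes are, as Taher noted, easily defeated by editing, and why robust, tamper-resistant watermarks are hard to build.

```python
import numpy as np

# Toy illustration only: a naive least-significant-bit (LSB) watermark.
# This is NOT how SynthID or any production system works; real invisible
# watermarks are designed to survive cropping, compression, and editing,
# which this simple scheme does not.

rng = np.random.default_rng(42)

def embed_watermark(image: np.ndarray, payload_bits: np.ndarray) -> np.ndarray:
    """Hide payload bits in the least-significant bit of the first pixels."""
    flat = image.flatten()
    flat[: payload_bits.size] = (flat[: payload_bits.size] & 0xFE) | payload_bits
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the least-significant bit of the first n_bits pixels."""
    return image.flatten()[:n_bits] & 1

# A fake 8-bit grayscale "AI-generated" image and a 64-bit payload.
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
payload = rng.integers(0, 2, size=64, dtype=np.uint8)

marked = extract = embed_watermark(image, payload)
recovered = extract_watermark(marked, payload.size)

print("Payload recovered intact:", np.array_equal(payload, recovered))
# Cropping away the first rows destroys this naive watermark, illustrating
# why robust, tamper-resistant schemes are much harder to build.
print("Survives cropping:", np.array_equal(payload, extract_watermark(marked[10:], payload.size)))
```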
A coalition spearheaded by Microsoft is striving to establish a universal watermarking standard for AI-generated images. Even OpenAI, the organization that introduced DALL-E, a model that piqued widespread interest in AI-generated images, employs visible watermarks. These are just some examples of what Hassan Taher considers ethical and responsible AI development.
Nonetheless, these protective measures don’t extend to open-source AI generators, which can be manipulated for nefarious purposes. This lack of universality in safeguarding tools remains a formidable challenge.
The Global Alliance for Responsible Media has also introduced misinformation as a category within its guidelines. It emphasizes the importance of monitoring and demonetizing content that spreads misinformation. As deepfake technology improves, new detecting solutions, regulations, and public education will be necessary.
Solutions Are as Opaque as the Videos
Unsurprisingly, the details of this watermarking technology are a well-kept secret, to reduce the risk of (or at least slow) reverse engineering aimed at bypassing it. It’s easy to understand why some secrecy is needed, but Taher urges as much transparency as possible to maintain public trust and to surface flaws or biases that might go overlooked if detection mechanisms stay hidden.
Of course, in the ideal scenario, TikTok, Facebook, YouTube, or X (formerly known as Twitter) could detect these videos as they’re uploaded and either label them with a disclaimer or remove them from the platform. But this isn’t likely, at least not for some time or with complete reliability. For years, social media platforms have been trying to detect hate speech and harmful content before it’s posted; what makes something hateful or dangerous is often nuanced, and intentions can be veiled.
Detecting a malicious deepfake video is even more complex.
As the saying goes, fight fire with fire. Machine learning is what lets AI do the astounding things it does, and as it enables ever more convincing deepfakes, the same techniques must be turned toward detecting them. This isn’t a battle that human moderators alone can win.
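As a toy illustration of that fire-with-fire dynamic, the sketch below trains a simple classifier to separate “real” from “fake” frame features. Everything here is synthetic and hypothetical: real detectors are deep neural networks trained on large corpora of genuine and manipulated video, but the basic workflow, learning from labeled examples and then scoring new content, is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative sketch only: real deepfake detectors are deep networks trained
# on large labeled datasets of genuine and synthetic video frames. Here we
# stand in for that setup with random feature vectors so the example runs anywhere.

rng = np.random.default_rng(0)
n_samples, n_features = 2000, 32   # e.g., frequency/texture statistics per frame

# Hypothetical features: "fake" frames get a slight statistical shift,
# standing in for the subtle artifacts generators tend to leave behind.
real_frames = rng.normal(0.0, 1.0, size=(n_samples // 2, n_features))
fake_frames = rng.normal(0.3, 1.0, size=(n_samples // 2, n_features))
X = np.vstack([real_frames, fake_frames])
y = np.array([0] * (n_samples // 2) + [1] * (n_samples // 2))  # 0 = real, 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

detector = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy on synthetic data: {detector.score(X_test, y_test):.2f}")

# As generators improve, the statistical gap between classes shrinks and the
# detector must be retrained on newer fakes: the arms race described above.
```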
There must be collaboration among brands, platforms, public advocates, regulators, educators, users, and technology vendors to win the war on deepfake-driven misinformation and retain public trust. The good news is this is happening. The bad news is that we likely have a long way to go.
“As AI technology continues to advance, the ethical implications become increasingly complex,” Hassan Taher pointed out. “The challenge lies not only in keeping pace with technological innovations, but also in fostering an ecosystem where truth is distinguishable from falsehood.”