The Minuteman

The Official Newark Academy Newspaper

Generative AI and the Proliferation of Deepfakes: Is it already too late?

Rishi Bala ’25, Editor-in-Chief

Deepfakes depicting the Mona Lisa in action. (Photo courtesy of Artnet News)

This past February, OpenAI, the world-famous artificial intelligence laboratory that created ChatGPT, shocked the world when it posted videos from its then-unreleased generative artificial intelligence (AI) program, Sora. These synthetic clips featured incredibly believable scenes produced by AI from simple text prompts; when such content depicts real people, it is known as a deepfake. Deepfakes are not new, having been around long before generative AI became popular and widely accessible, but with programs like Sora, it has never been easier for the average person to create hyperrealistic fake content.

Research from Eftsure has shown that the number of deepfakes detected globally across all industries increased by 1,000% in 2023, with serious implications for countries and corporations around the world. In the midst of an important election year, generative AI and deepfakes have never been more prevalent: one search on Instagram turns up videos of Hollywood actor Will Smith and former President Donald Trump eating spaghetti together, while one scroll on TikTok may surface clips of Trump and President Biden playing Minecraft together. Deceptive, digitally created content with real-world implications is everywhere: there have been clips claiming Biden plans to attack Texas, images of Trump and Biden holding guns, and posts asserting that prominent figures had endorsed candidates they had not.

The line between fact and fiction is blurring at a genuinely alarming rate, and the uncharted territory this election cycle is entering is only a sign of what's to come. By the 2026 midterm elections, AI will be even more advanced and its output even more hyperrealistic, with political and economic narratives tailored to the biggest fears and desires of each voter. At the same time, AI-detection technology is struggling to keep up with the rapid growth of generative AI. Reports by Vanity Fair indicate that in November 2022, when ChatGPT was introduced to the public, AI-detection tools could distinguish content made by AI from content made by humans with 95% accuracy. Now, that figure has fallen to 39.5%.

With this rapid proliferation of deepfake technology, countries and companies are beginning to crack down on harmful AI content. In early September, California lawmakers approved legislation designed to ban deepfakes, protect workers, and regulate AI. California could become the first state to impose large-scale safety measures on AI models, requiring developers to disclose what data they use to train their models in order to shed more light on how those models work and to prevent future catastrophes. YouTube, too, is looking to strengthen its generative AI countermeasures, introducing new detection processes and algorithms. Internationally, countries such as South Korea are investigating the illegal creation and distribution of deepfake content.

In the midst of all this, it's important to remember that not everything we see or read is real, especially given the massive amounts of synthetic, misleading content propagated all over social media. Newark Academy student Michael Wyche ’25 says, “I go on TikTok and I see videos of Tom Cruise dancing to K-pop and clips of former President Barack Obama playing ‘League of Legends’.” NA senior Brody Linenberg ’25 adds, “It’s everywhere, and even though it’s often funny, I find myself questioning if the stuff I see is real or if it’s produced by an algorithm.”

Still, generative AI and synthetic media aren't all bad; they hold enormous potential to revolutionize industries like education, research, medicine, and marketing. In the wrong hands, however, they carry an immense risk of spreading false or misleading content.