The Rise of Deepfakes: AI-Generated Fake Content Raises Concerns
A popular Indian actor entering an elevator in revealing clothes. Football fans in a stadium in Madrid holding an enormous Palestinian flag. A video of Ukrainian President Volodymyr Zelenskyy calling on his soldiers to lay down their weapons. The pope wearing a Balenciaga puffer jacket. These unrelated events have something in common: they never happened. And yet, they were some of the most viral pieces of content on various social media platforms.
Advances in artificial intelligence (AI) over the past year have produced platforms that let almost anyone create persuasive fakes simply by typing text prompts into popular AI generators that output images, videos, or audio. This has fueled the rise of deepfakes: AI-generated fake content that has become a menace in our polarized and divided online ecosystem.
Deepfakes have raised concerns among lawmakers worldwide, prompting big tech companies like Meta and Google to announce measures to tackle content produced using this technology. However, there are still loopholes in these systems that are being exploited by those who wish to spread such content. Entire pornographic sites featuring deepfakes of popular actors have emerged, and the technology has also raised concerns about election integrity.
One recent example is the deepfake of actor Rashmika Mandanna, which went viral on platforms like Instagram. In it, her face has been morphed onto a video of a woman entering an elevator in revealing clothes. Close inspection reveals moments where the video betrays itself as fake, but a casual viewer may notice nothing amiss. The episode highlights how deepfake technology poses a particular risk for women, who already face hostility on online platforms; it adds a new dimension to the ways women can be harassed on the internet.
Actor Amitabh Bachchan has called for legal action against the deepfake of Mandanna, and Union Minister of State for Information Technology Rajeev Chandrasekhar has emphasized the need for online platforms to address the dangers of deepfakes. Deepfakes have not yet reached the point of looking entirely genuine. Even so, the mere possibility of AI-generated misinformation has had a psychological impact, leading some commentators to dismiss genuine content as having been altered through artificial intelligence.
The issue of deepfakes has become particularly prominent in the context of the Israel-Gaza conflict. Platforms like X, Facebook, and YouTube have been flooded with AI-generated content from both sides purporting to show the destruction caused by the fighting. Despite the banning of Hamas-linked accounts, these platforms continue to be inundated with falsehoods about the conflict, eroding trust among users.
Recognizing the global concerns surrounding AI, 28 countries, including the United States, China, Japan, the United Kingdom, France, and India, along with the European Union, recently signed a declaration at the world’s first AI Safety Summit, held at Bletchley Park. The declaration acknowledges the significant risks posed by AI, including intentional misuse, loss of control, and dangers related to cybersecurity, biotechnology, and disinformation, and underscores the need for global action to address them.
As deepfakes proliferate and online trust continues to fray, it is crucial for regulators and tech companies to work together on effective solutions. The rise of deepfakes underscores the importance of addressing the ethical and security implications of AI technology, to ensure a safer and more trustworthy online environment for all users.