Manish Tewari | We need legal framework to regulate AI, deep fakes

Rise of Deep Fakes Spurs Global Legislative Action, India Faces Growing Threat

Update: 2024-02-10 20:01 GMT
A representational image of Anand Mahindra from an Artificial Intelligence (AI)-generated deep fake video, used to warn people about the dangers of such content. (DC File Image)

The rise of deep fake videos targeting people from all walks of life has raised alarm bells, prompting widespread concern and an urgent need to curb the misuse of deep fake technology. The United States Senate has introduced bipartisan legislation called the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act. The proposed law would make it illegal to share sexualised images and videos created by artificial intelligence without the individual’s consent. Similarly, the European Union’s proposed Artificial Intelligence Act aims to lay down strict regulations to address deep fakes. As this technology becomes more widely available to the public, an increasing number of countries are grappling with the dilemma of deep fakes and how to tackle it.

In a policy brief for the International Centre for Counter-Terrorism, entitled ‘The Weaponisation of Deepfakes: Digital Deception by the Far-Right’, Ella Busch and Jacob Ware state: “the words, ‘deep fake’, are derived from the concept of deep learning, a subset of artificial intelligence-machine learning (AI/ML). Deep learning algorithms are composed of deep neural networks, which simulate the human brain in a way that enables the AI to ‘learn’ from large amounts of data. Deep learning is unique from machine learning for its ability to process unstructured data, such as text and images, which makes it ideal for creating deep fake videos, audio, images, or text. Deep fakes are created using a specific deep learning algorithm called a generative adversarial network (GAN). GANs, first developed by researcher Ian Goodfellow in 2014, consist of two neural networks — a generator algorithm and a discriminator algorithm. The generator algorithm creates a fake image (or other form of media) and the discriminator judges the media’s authenticity. The action repeats for hours, or even days, until reaching a stable state in which neither the generator nor the discriminator can improve their performance.”

In other words, deep fakes are a type of synthetic media that involves the manipulation of someone’s facial appearance through deep generative methods. Although the technique of mimicking someone’s face or voice isn’t new, what is revolutionary about deep fakes is their reliance on Generative Adversarial Networks (GANs). GANs rely on two AI models that work in tandem. The first model, the forger, manipulates someone’s face based on the provided samples. The second model, the detective, scrutinises the forger’s creation and, drawing on face training data, identifies all the reasons why it is fake. The forger then improves its output based on the detective’s feedback. This back-and-forth continues until the detective can no longer tell the real image from the generated one.
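For readers curious about what that forger-and-detective loop looks like in practice, the sketch below is a minimal, illustrative GAN training loop written in PyTorch. It is not deep fake code: the “real” data here is an assumed toy distribution of two-dimensional points rather than face images, and the tiny networks, layer sizes and step count are arbitrary choices made only to show how the generator and discriminator push against each other.

```python
# Minimal sketch of the generator ("forger") vs discriminator ("detective") loop.
# Toy example only: the "real" data is an assumed cluster of 2-D points, not faces.

import torch
import torch.nn as nn

torch.manual_seed(0)

LATENT_DIM = 8   # size of the random noise fed to the generator
DATA_DIM = 2     # each "real" sample is a 2-D point
STEPS = 2000

# Generator ("forger"): turns random noise into a candidate sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, DATA_DIM),
)

# Discriminator ("detective"): scores how likely a sample is to be real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def real_batch(n=64):
    # Stand-in for real training data: points clustered around (4, 4).
    return torch.randn(n, DATA_DIM) + 4.0

for step in range(STEPS):
    # Train the detective: label real samples 1 and forged samples 0.
    real = real_batch()
    fake = generator(torch.randn(64, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the forger: try to make the detective score fakes as real.
    fake = generator(torch.randn(64, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated points should drift toward the real cluster at (4, 4).
print(generator(torch.randn(5, LATENT_DIM)))
```

A real deep fake system applies the same adversarial loop to face images, with far larger networks and vastly more data, which is what makes the results so convincing and the technology so easy to misuse.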

Like any technology, deep fakes have both positive and negative uses. Filmmakers are using the technology extensively to translate films, age or de-age actors, speed up production and reduce costs. In education, deep fakes can help create innovative lessons that are far more engaging than traditional forms of learning. The technology can be used to re-enact historical moments, help explain human anatomy and assist in architecture. Medical researchers can also use it to develop new ways of treating diseases without involving actual patients. Deep fakes have numerous benefits to offer.

On the other hand, deep fakes are being used extensively to commit financial scams, spread misinformation and circulate sexually explicit videos. According to Europol’s Innovation Lab, deep fakes could become a staple tool of organised crime, as they lend themselves to bullying, intellectual property violations, document fraud, manipulation of evidence, support for terrorism, and the fostering of social unrest and political polarisation. There have been reports that deep fakes were employed in the last Assembly elections to malign opponents, and if no steps are taken, they may be widespread during the 2024 general elections.

This writer was himself the victim of a deep fake video impersonating his voice, which was maliciously circulated the day before polling in the 2019 parliamentary elections. Fortunately, the impersonation was so crude and amateurish that it was caught and exposed before it could do substantial reputational and electoral damage. Nonetheless, the deep fake did end up creating apprehensions in the minds of a certain section of the electorate.

In response to an unstarred question in the Rajya Sabha on the regulation of AI and deep fakes, the ministry of information and broadcasting stated that it currently relies on Section 469 of the IPC (forgery for the purpose of harming reputation) and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021) to address the issue of deep fakes. These rules oblige social media intermediaries to ensure the expeditious removal of prohibited misinformation, patently false information and deep fakes. Unfortunately, the rules are ambiguous and generic, and lack substantive measures to regulate deep fakes effectively. Even the proposed Digital India Bill offers no provision or principle on which the ministry intends to rely while regulating deep fakes. There is thus a legal vacuum, and deep fakes go effectively unregulated.

This vacuum has contributed to a rise in AI and deep fake-related crime in India. According to McAfee’s survey on AI voice scams, about 47 per cent of Indian adults have experienced some form of AI voice scam, almost double the global average of 25 per cent. Moreover, according to the UK-based Sumsub Identity Fraud Report, India has witnessed a 1,700 per cent growth in the number of deep fakes. This clearly shows that India is severely unequipped and unprepared to keep pace with the rapid advances in this technology.

The first Industrial Revolution fundamentally transformed the world by enabling the transition from agrarian economies and manual labour to machine-driven manufacturing. The AI revolution holds the potential for a comparable paradigm shift. If India wants to reap the benefits of this revolution, it must ensure that it is prepared to tackle the challenges that come with it.

Europol estimates that by 2026, nearly 90 per cent of online content may be synthetically generated. This will significantly alter the way we interact with one another, consume information and spend time on the internet. Like the United States and the EU, India too needs a law to tackle the rise of artificial intelligence and deep fakes.

The next Parliament should suo motu constitute a joint parliamentary committee to address this issue and ensure that proper consultations are carried out with all concerned stakeholders. The objective should be to draft comprehensive legislation that takes into account the perspectives of all stakeholders and helps shape a law that can tackle the challenge of deep fakes before it becomes portentously pervasive.