Manish Tewari | A standalone law to manage AI is just what India needs

The inaugural Global Artificial Intelligence (AI) Safety Summit held at Bletchley Park, United Kingdom, brought together representatives from 28 countries, academia, and industry leaders to discuss the ethical development and responsible use of artificial intelligence (AI). It was a gathering long overdue.

The summit overlapped with President Joe Biden’s executive order on “safe, secure, and trustworthy artificial intelligence”. The order aims to establish new standards for AI safety in the civilian domain and to position the United States as a leader in a realm that could mark the advent of a Fifth Industrial Revolution.

The US presidential executive order “require[s] that developers of the most powerful AI systems share their safety test results and other critical information with the US government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public”.

The first Industrial Revolution fundamentally transformed the world by enabling the transition from agrarian and manual labour to machine-driven manufacturing. The AI revolution holds the potential for a comparable paradigm shift. AI’s ability to automate tasks, analyse vast amounts of data, and drive innovation in critical areas will mirror the revolutionary impact of mechanisation during the Industrial Revolution, which not only transformed the means of production but also changed the fundamental structure of society.

While President Biden’s executive order and the Bletchley summit were primarily focussed on regulating civilian uses of AI, the United States Department of Defense (DOD) concurrently unveiled an ambitious plan to embed AI into the lifecycle of its military eco-system. China is already working on moving from an informationised military to an ‘intelligentised’ force by deploying AI systems. At what point a cyber attack (which will now become even more sophisticated owing to enablement by Artificial Intelligence) would qualify as an ‘act of war’ meriting a conventional response remains a riddle wrapped in a mystery inside an enigma. Strategic thinkers are still grappling with this question.

It is in the realm of both offensive and defensive capability that AI would present the most complex challenges. The coupling of AI with offensive weapons capability not only exponentially raises the risks from conventional chemical, biological, and nuclear weapons but adds another lethal dimension to this framework of destructive capacity — Artificial Intelligence-enabled autonomous weapons. India, for some strange reason, voted against the UN resolution calling upon the international community to address the challenge presented by lethal autonomous weapons.

In 2018, Niti Aayog’s report entitled “National Strategy for Artificial Intelligence” highlighted six key challenges to the adoption of AI in India: (i) lack of a data ecosystem, (ii) low intensity of AI research, (iii) lack of expertise, (iv) high resource cost for adopting AI in business processes, (v) lack of a proper regulatory and privacy regime, and (vi) an unattractive intellectual property regime to incentivise research and the adoption of AI. Moreover, any AI strategy will also have to address the growing digital divide, both along the urban-rural axis and across class matrices.

This challenge is further exacerbated by generative AI, which has enabled the generation of deep fakes and the institutionalisation of both misinformation and disinformation that, more often than not, is perceived by people as authentic. It impairs the capacity to make informed choices based upon accurate information and makes people susceptible to blatant mental manipulation.

India’s tryst with regulating the cyber-civilisation commenced with the Information Technology (IT) Act in the year 2000. This led to a semblance of superintendence over the digital ecosystem. The IT Act was heavily influenced by the UNCITRAL Model Law on E-Commerce.

Subsequent amendments, such as the one in 2008, tried to tackle cyber-terrorism and online scams, necessitating the inclusion of strict penal provisions — including some that clearly impinged on civil liberties and had to be read down by the Supreme Court.

This more than two-decade-old law was created during an era when the internet was still in its nascent stage, and it has therefore become both antiquated and incapable of addressing contemporary challenges.

India, therefore, requires separate laws to govern the different arenas of the cyber-civilisation: hardware, software, E-Commerce, OTT platforms, news outlets, online education, online gaming, crypto-assets and Artificial Intelligence. The list is illustrative, not exhaustive.

The perceived preference of the current dispensation seems to be an omnibus legislation that would be generic in nature — an overarching digital bill. A one-size-fits-all approach fleshed out by rules and regulations framed by the executive, as is the case with the Information Technology Act. This approach would no longer work.

India needs to diversify its cyber-civilisation legislative architecture. The European Union provides an interesting template. It has a standalone Digital Services Act to regulate e-commerce platforms, the General Data Protection Regulation to superintend data privacy, and an AI Act to govern its AI domain. Such an approach helps create much more effective and focused legislation.

India’s emphasis should also be on prioritising ‘deep thinking’ to anticipate and regulate the potential harms of “Frontier AI”. The Bletchley Declaration defines Frontier AI as those AI models that could pose severe risks to public safety — an approach similar to President Biden’s executive order. India should likewise require developers of AI systems to regularly share their safety test results and critical information with Indian authorities, in order to protect the interest of the people at large from both ‘mental and physical harm’.

The imperative for India, therefore, is to formulate a comprehensive standalone Artificial Intelligence (AI) law. This is the only way to mitigate risks associated with AI, and pave the way for responsible and seamless integration of AI technologies into the existing but evolving digital civilisation.

As for the use of AI for offensive weapons capability or for other military purposes, that should be the subject matter of entirely separate legislation.