Following meetings with tech companies on 22-23 November, Union IT minister Ashwini Vaishnaw and minister of state for IT Rajeev Chandrasekhar issued the advisory. The move is in response to a series of deepfake incidents targeting prominent actors and politicians on social media platforms.

“Content not permitted under the IT Rules, in particular those listed under Rule 3(1)(b), must be clearly communicated to the users in clear and precise language, including through its terms of service and user agreements; the same must be expressly informed to the user at the time of first registration, and also as regular reminders, in particular, at every instance of login, and while uploading or sharing information onto the platform,” the ministry said.

Intermediaries will also be required to inform users of the penalties they could face if convicted of knowingly perpetrating deepfake content. “Users must be made aware of various penal provisions of the Indian Penal Code 1860, the IT Act, 2000 and such other laws that may be attracted in case of violation of Rule 3(1)(b). In addition, terms of service and user agreements must clearly highlight that intermediaries are under obligation to report legal violations to law enforcement agencies under the relevant Indian laws applicable to the context,” it added.

Rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, requires intermediaries, including the likes of Meta’s Instagram and WhatsApp, Google’s YouTube, and other global and domestic tech firms such as Amazon, Microsoft, and Telegram, to ensure that users do not “host, display, upload, modify, publish, transmit, store, update or share any information that deceives or misleads the addressee about the origin of message, or knowingly and intentionally communicates misinformation, which is patently false, and untrue or misleading in nature”.

On 13 December, Chandrasekhar said in an interview with Mint that the Centre would issue an advisory, not new legislation, urging firms to comply with existing laws on deepfakes. “There is no separate regulation for deepfakes. The existing regulations already cover it under Rule 3(1)(b)(v) of IT Rules, 2021. We are now seeking 100% enforcement by the platforms, and for platforms to be more proactive—including alignment of terms of use, and educating users of 12 no-go areas—which they should have done by now, but have not. As a result, we are issuing an advisory to them,” he added.

The ministry will monitor compliance with the advisory for a period. “If they still do not adhere, we will go back, and amend the rules to make them even tighter to remove ambiguity,” Chandrasekhar said.

Although tech firms have internal policies urging caution and discouraging the spread of malicious content, intermediary platforms enjoy immunity from prosecution for such content. Experts flagged this immunity as a major concern.

“Due to the core nature of technology it is nearly impossible to trace cyber attackers generating malicious content—with endless ways to obfuscate digital footprint. The regulations will be a deterrent for the masses, but the onus will lie upon tech firms to use their sophistication in AI to proactively monitor their platforms,” said a senior policy consultant who works with several tech companies.

The issue of deepfakes rose to prominence in public discourse after multiple morphed videos of actors emerged on social media. Last month, addressing a virtual G20 event, Prime Minister Narendra Modi highlighted the issue as well. “The world is worried about the negative effects of AI. India thinks that we have to work together on the global regulations for AI. Understanding how dangerous deepfake is for society and individuals, we need to work forward. We want AI should reach the people, it must be safe for society,” he said.

India has, in this regard, spoken about regulating AI to curb harm. After India became a signatory to the Bletchley Declaration at the UK AI Safety Summit on 1 November, its New Delhi Declaration saw consensus among 28 participating nations, including the US and the UK, as well as the European Union, on working toward a global regulatory framework that promotes the use of AI in public utilities while curbing the harms that can be inflicted using AI.


