The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 take effect on 20th February 2026. And apart from having a long name, they also carry obligations for social media platforms. See FAQs here.

Image: AI Generated
See how I labeled the image above?
The immediate risks are obvious – so obvious that the Government has said these new rules exist primarily to address them. And the risk is not new – it did not arrive with the latest AI models – but it has become far more dangerous.
A Brief History of Deepfakes
Deepfakes are almost as old as the internet itself. Manipulated video started in the 1990s with CGI. At least it was difficult to do back then, and it was largely limited to face swapping.
But from 2014, generating deepfakes became much easier. Ian Goodfellow introduced Generative Adversarial Networks (GANs), in which two AI models train against each other – a generator producing fakes and a discriminator trying to spot them – leading to highly realistic, deep-learning-based fakes.
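For the technically curious, that adversarial loop is simple enough to sketch in a few lines. Below is a toy PyTorch example on made-up one-dimensional data – every network, size, and number in it is an illustrative assumption, nothing close to an actual deepfake model:

```python
# Toy GAN sketch (PyTorch, hypothetical setup): a generator G learns to turn
# random noise into samples that resemble the "real" data, while a
# discriminator D learns to tell real from fake. The "real" data here is
# just a 1-D Gaussian, purely for illustration.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" samples: N(3.0, 0.5)
    fake = G(torch.randn(64, 8))             # generator maps noise -> samples

    # Discriminator step: score real samples as 1, generated ones as 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: update G so the discriminator scores its fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The arms race in that loop is the whole trick: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more convincing ones. Scale the same idea up to faces and voices and you get modern deepfakes.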
The term “deepfake” itself was coined by a Reddit user in 2017. In case it’s not obvious, a deepfake is content generated to depict someone doing or saying something they never actually did.
From 2018, the techniques moved from niche forums to public awareness through viral videos (e.g., the BuzzFeed/Jordan Peele video). The number of online deepfake videos nearly doubled in nine months across 2018–19. Since 2020, tools have evolved to include audio cloning (voice deepfakes) and text-to-video, leading to widespread use in scams, political misinformation, and entertainment.
The Crisis of Trust
Now, deepfakes have reached near-perfect realism.
No, I will not generate a video using Veo or Sora or anything else.
So much so that for any video – literally any video – you see online, your first response should be “Is this AI?”. Every single time.
That’s Why We Have Rules Now
In the US, the burden of proof is on the user. The content creator. Thanks to Section 230, which basically states that platforms are not liable for content posted by their users. It’s the biggest and most dangerous get-out-of-jail-free card for the likes of Mark Zuckerberg and Larry Ellison (who now kinda owns TikTok US).
To quote, Section 230 “provides immunity to online platforms from civil liability based on third-party content as well as immunity for removal of content in certain circumstances.” (Source)
It’s good that the new IT rules in India actually put the burden of proof on the platform. Not entirely, but the burden to verify whether content is a deepfake is laid on the platform. So is the burden to take it down if it is found to be a deepfake. And so is the burden to keep users informed about the dangers of misusing AI.
These rules are expected to be enforced by the Government, particularly MeitY (the Ministry of Electronics and Information Technology).
Why So Skeptical?
Who knew that in 2026 we’d be saying AI is more dangerous in the US than it is in India?
It’s always good to start with intent and enforcement directives. But based on recent history, two big hurdles need to be crossed.
First, lobbying efforts and court challenges by platforms need to be allowed, but only where platforms are acting to protect users’ interests. Recently, Apple challenged the CCI over providing its global financials – information that should have been made available to the CCI in the first place. And don’t even get me started on X – it has challenged every single takedown order.
Secondly, implementation measures need to be backed by robust policies and procedures. The Government’s Sahyog portal (which itself was challenged by X) is a great start, but it must be accepted by ALL platforms. And there must be strong penalties for falling foul of the three-hour takedown timeline.
Big tech platforms have the resources to enforce this; it’s only a matter of intent.
