Deepfakes are synthetic media—videos, images, or audio clips—that are generated using AI and machine learning to realistically depict someone saying or doing things they never actually did. Globally, their misuse has surged. In South Korea, for instance, a major 2024 scandal revealed that high school students were generating explicit deepfake images of classmates and teachers. According to police data, over 800 deepfake sex crime cases were reported in South Korea by the end of September 2024, a dramatic increase from just 156 cases in 2021. These incidents triggered widespread outrage and pushed legislators to pass stringent laws criminalizing the mere possession or circulation of such content.
In India too, the threat is real and growing. In October 2023, a chilling case emerged in Mumbai where a minor girl was harassed after morphed, AI-generated images of her were circulated on social media. The Maharashtra Cyber Cell revealed that such cases involving minors have increased by over 20% year-on-year, with deepfakes and AI-generated voice cloning being used to blackmail victims. Similar incidents have been reported in various parts of the country.
The Indian Legal Landscape and Judicial Standpoint
India does not yet have a dedicated law on deepfakes, but provisions of existing statutes offer partial protection. Several sections of the Information Technology Act, 2000, notably Section 66E (violation of privacy), Section 67 (publishing or transmitting obscene material), and Section 67A (sexually explicit material), have been invoked in cyberbullying and deepfake-related cases. In addition, the Bharatiya Nyaya Sanhita (BNS) contains provisions on voyeurism, defamation, and criminal intimidation by anonymous communication that can be applied depending on the context.
The Supreme Court of India, in the landmark case of Shreya Singhal v. Union of India (2015), struck down Section 66A of the IT Act for being vague and overbroad, while upholding the right to free speech under Article 19(1)(a). However, it also emphasized that speech causing incitement to commit an offence would not be protected. This balance is critical in the current discourse around AI misuse, where freedom of expression and digital innovation must be harmonized with privacy and dignity.
Moreover, in the Justice K.S. Puttaswamy v. Union of India (2017) judgment, the Supreme Court recognized the right to privacy as a fundamental right under Article 21. This ruling laid the foundation for stronger arguments against non-consensual use of personal data and imagery—especially pertinent in deepfake-related abuse cases.
Impact on Children and Young Users
The psychological and emotional consequences of AI-facilitated cyberbullying are severe, particularly for children. Victims often suffer from anxiety, depression, social withdrawal, and a lasting sense of helplessness. An equally alarming trend is the use of deepfakes for extortion. According to India's National Crime Records Bureau (NCRB), cases registered under cyber blackmailing and threatening rose by 32% between 2021 and 2023, many of them involving digitally altered images. A disturbing pattern has emerged in which minors are targeted, either by peers or by online predators, who manipulate content to demand money or sexual favors.
Global and Indian Responses
Globally, institutions are beginning to act. In the U.S., the "Take It Down" Act aims to protect minors from the non-consensual sharing of intimate images, including AI-generated pornography. The UK and South Korea have likewise updated their cybercrime laws to explicitly cover AI misuse.
In India, there has been growing advocacy for a dedicated Digital India Act, which is currently being drafted and is expected to replace the outdated IT Act. The proposed law is anticipated to address emerging technologies such as AI, deepfakes, and algorithmic content manipulation. Additionally, bodies like the National Commission for Protection of Child Rights (NCPCR) have issued guidelines directing schools to educate children on digital safety and to report any instance of cyberbullying or AI-enabled abuse.
Protective Measures and the Way Forward
To protect children and young users from AI-facilitated cybercrimes, a multi-pronged approach is essential. Education and digital literacy must be the first line of defense. Children should be taught not only how to identify AI-generated content but also how to respond if they or someone they know falls victim to it. Schools should integrate digital ethics and cybersecurity awareness into their curricula.
Moreover, robust reporting mechanisms are vital. Platforms like Instagram and YouTube need stricter AI detection tools and quicker content takedown procedures. India can look toward models like Australia’s eSafety Commissioner, which provides dedicated support and redressal for cyberbullying and image-based abuse.