Misuse of an AI Chatbot: A Global Wake-Up Call
Artificial intelligence (AI) has become the modern “legal executive” for many legal practitioners, assisting with tasks ranging from contract review to due diligence. Yet as these technologies become increasingly widespread and sophisticated, a parallel and troubling trend has emerged: the misuse of AI-driven deepfakes. AI’s ability to generate seemingly realistic but falsified images, audio and video creates serious risks. When exploited for malicious purposes such as cyberbullying, scams and the creation of non-consensual intimate images, deepfakes can inflict profound and lasting harm on individuals.
A recent global incident brought these issues to the forefront. An AI chatbot reportedly allowed users to generate non-consensual sexual images by uploading photos of real individuals, including women and children, and prompting the chatbot to digitally “undress” them. The resulting harm is compounded by the fact that such fabricated images, once disseminated online, can be exceptionally difficult to remove and can have lasting consequences for victims. It was reported that, within 11 days, an estimated 3 million sexualised images were generated.
The incident is an important wake‑up call, illustrating how deepfake technologies can be weaponised through the misuse of personal data. It triggered swift investigations by privacy and data protection authorities worldwide, as well as bans in multiple jurisdictions. My Office promptly wrote to the relevant organisation to express concerns and make enquiries. We also issued a media statement reminding the public to use AI safely and lawfully.
Beyond intimate-image abuse, the use of deepfakes to perpetrate fraud and scams is an alarming trend, and Hong Kong has already witnessed several high-profile cases. In a widely reported incident in 2024, fraudsters used deepfakes to impersonate a company’s chief financial officer during a video call, deceiving an employee into transferring approximately HK$200 million. In 2024 and 2025, the Police dismantled two criminal syndicates that used deepfakes to commit fraud involving nearly HK$400 million in total.
Regulatory Developments
Legislative efforts to address deepfake-related risks date back several years. In the Chinese Mainland, dedicated legislation on deep synthesis technologies, enacted in 2023, requires the labelling of AI-generated content and prohibits using these technologies to produce content that harms the lawful rights and interests of others. Jurisdictions such as Australia and the United Kingdom have deployed online safety regimes to mitigate risks associated with deepfakes, including cyberbullying and image-based abuse.
In Hong Kong, encouragingly, the Chief Executive’s 2025 Policy Address tasked the Department of Justice with co-ordinating the responsible bureaux to review the legislation needed to support the development and wider application of AI in Hong Kong.
Pending that review, Hong Kong maintains a flexible regulatory approach under which existing laws remain applicable, supplemented by relevant guidelines. Any collection and use of personal data to create deepfakes is subject to the requirements of the Personal Data (Privacy) Ordinance (PDPO). Specifically, using personal data to create and/or disclose deepfake materials may contravene Data Protection Principle (DPP) 3 of the PDPO if the use goes beyond the original purpose of data collection, while DPP 1 may be contravened if the personal data is collected by unlawful or unfair means. In more serious cases, the creation and/or disclosure of malicious deepfake materials may constitute doxxing offences under the PDPO.
Abuse of AI Deepfakes: Toolkit for Schools and Parents
Despite the rising trend of deepfake abuse, public awareness of the associated harms remains limited, particularly among children and parents. A 2024 survey in the United Kingdom revealed that almost two-thirds of children and nearly half of parents did not know or understand the term “deepfake”. Children are particularly vulnerable, not only as potential victims but also because their still-developing cognitive abilities mean they may create or share malicious deepfakes without fully understanding the risks or consequences.
Recognising these growing risks, and well before the recent global incident involving an AI chatbot generating non‑consensual sexualised images, my Office proactively published the “Abuse of AI Deepfakes: Toolkit for Schools and Parents” (Toolkit) in December 2025. The Toolkit provides schools and parents with practical recommendations for handling deepfake incidents involving children and young people and for safeguarding minors’ personal data privacy. Key recommendations include minimising the public sharing of minors’ photos and videos, strengthening access controls and data security, and educating children and young people on the privacy risks and legal consequences of malicious deepfakes. The Toolkit also sets out steps for schools and parents to take when responding to deepfake incidents.
Recent events remind us of the vulnerability of minors in the fast-evolving age of AI and of the tangible, far-reaching harms of malicious deepfakes; we must remain vigilant at all times. All stakeholders, including AI developers and service providers, share the responsibility to co-create a safe and trustworthy digital environment for future generations.