Newspaper Column
PCPD in Media
Privacy safeguards are vital for AI use - Privacy Commissioner's article in China Daily (March 2026)
Governments and businesses worldwide are seeking to harness artificial intelligence (AI) for innovation and economic growth. Yet as AI technologies become more accessible and sophisticated, a parallel and troubling trend is emerging: the misuse of AI-driven “deepfakes”. AI can generate seemingly realistic but falsified images, audio and video, which can inflict profound and lasting harm on individuals, especially children and young people, when exploited for malicious purposes.
Given the borderless nature of AI-related privacy risks, data protection authorities have stepped up coordinated efforts to advocate privacy-protective AI. In a landmark move, during the 47th Global Privacy Assembly (GPA) Conference in September 2025, 20 authorities from different jurisdictions, including the Office of the Privacy Commissioner for Personal Data (PCPD), signed the “Joint Statement on Building Trustworthy Data Governance Frameworks to Encourage Development of Innovative and Privacy-protecting AI”, advocating, among other things, the incorporation of data protection principles into AI system development and the establishment of robust data governance.
In February, the PCPD, together with 60 privacy/data protection authorities from around the world (including Canada, France, Germany, Italy, Korea, New Zealand, Singapore and the United Kingdom), issued the “Joint Statement on AI-Generated Imagery and the Protection of Privacy”. Initiated and coordinated through the GPA’s International Enforcement Cooperation Working Group, which my Office co-chairs, the statement sets out fundamental international principles to guide organisations in developing and using AI content generation systems lawfully and safely. It reminds all organisations that develop and use AI content generation systems to ensure compliance with applicable data protection and privacy laws. The Joint Statement also recommends a series of measures to safeguard the fundamental rights of individuals, especially children and vulnerable groups.
Authorities in both the Chinese mainland and Hong Kong recognise that the development and use of AI must be accompanied by appropriate guardrails. Since the promulgation of the 2023 Global AI Governance Initiative, the equal importance of the development and security of AI has been repeatedly stressed, a principle also reaffirmed in the Hong Kong Chief Executive’s 2025 Policy Address. This balanced vision is further reinforced in our country’s 15th Five-Year Plan, which calls for advancing the AI Plus Initiative across the board while strengthening AI governance. As the Plan specifies, it is essential to consolidate security in the course of development and pursue development in a secure environment, including strengthening data governance frameworks and rules, enhancing AI governance, and fostering an environment that is beneficial, secure and fair for development.
It is against this backdrop that the recent emergence of agentic AI warrants close attention, as it has already intensified concerns over data breaches and privacy and cybersecurity risks. Unlike conventional AI chatbots that primarily generate content in response to prompts, these agentic systems can connect with external tools and services, enabling them to take multi-step actions on behalf of users. The privacy risks posed by agentic AI thus extend far beyond the outputs of conventional AI chatbots. These systems can access, manipulate and expose personal data with unprecedented speed and reach. If such capabilities are misused to create and distribute abusive deepfakes with minimal human involvement, the resulting harms could spread more quickly and at greater scale.
Encouragingly, the 2025 Policy Address tasked the Department of Justice with coordinating different bureaux to review the relevant law needed to complement the development and wider application of AI in Hong Kong.
Pending this review, the development and use of AI are not unregulated. Hong Kong retains a flexible regulatory approach whereby existing laws remain applicable, supplemented by relevant guidelines. Any collection and use of personal data to create deepfakes is subject to the requirements of the Personal Data (Privacy) Ordinance. Specifically, the use of personal data to create and/or disclose deepfake materials may contravene the use limitation principle of the privacy law if it goes beyond the original purpose of data collection. The data protection principle governing the collection of personal data may also be contravened if personal data are collected unlawfully or unfairly. More seriously, the creation and/or disclosure of malicious deepfake materials may constitute doxxing.
Any data breach caused by unauthorised or accidental access to or processing, erasure, loss or use of data by an agentic AI may also contravene the data protection principle regarding data security, thereby breaching the privacy law, not to mention any unwarranted collection or use of the personal data of third parties without their consent.
It is crucial, therefore, for all stakeholders, including AI developers, service providers and users, to be aware of the threats these new technologies pose to individuals’ fundamental rights. When using AI content-generation systems, for instance, my Office’s guidelines recommend that users label or watermark the output as AI-generated to avoid confusion or misunderstanding.
In particular, to avoid data leakage or cyberattacks, users should download only the latest official version of any agentic AI, grant the tool only the minimum access rights it needs, adopt adequate measures to ensure system and data security, and continuously assess the risks involved. Users should be alert, for example, to any high-risk prompts or automatic processing that might wipe out all user data (including emails).
In the race to tap into AI’s huge potential, we should proactively align with the 15th Five-Year Plan, uphold the principle of ensuring both development and security, and effectively prevent and resolve all kinds of risks. In particular, the development and deployment of AI systems should, from the outset, be guided by the principles of protecting personal data privacy, privacy-by-design and privacy-by-default, among others, to prevent infringement of people’s data privacy and minimise the privacy risks involved.
Recent events have demonstrated the vulnerability of users, especially minors, in the rapidly evolving age of AI, as well as the tangible and far-reaching harms of AI’s abusive or malicious use. Organisations developing and deploying AI must not sacrifice privacy and security for speed-to-market or novel functionalities. All stakeholders in the ecosystem, including AI developers, service providers and users, have unavoidable responsibilities to co-create a beneficial, secure and fair digital environment for our future generations.