Safeguarding Personal Data Privacy in AI Era: Intersection between Global Developments and Local Efforts – Privacy Commissioner’s article contribution at Hong Kong Lawyer (December 2025)

Global trend to anchor AI innovation in security
Nations around the world are competing to harness artificial intelligence (AI) to drive economic growth, while balancing this with the need to use AI safely. AI brings not only transformative potential but also significant privacy and security risks. Recognising these risks, various jurisdictions have passed laws to regulate AI specifically. Apart from the European Union's AI Act, which is widely regarded as the world's first comprehensive AI regulation, Italy and South Korea have more recently joined the wave by introducing legislation targeting AI.

Echoing this trend, the Chinese Mainland has been progressively building a regulatory framework for AI. The country recognises AI security as one of the twenty major fields of national security and places a dual emphasis on development and security. Introduced in 2024, the "AI Plus" initiative promotes the integration of AI with various industries and sectors to unlock new quality productive forces and drive economic development and societal well-being. In parallel, various regulations and measures have been enacted to govern specific types of AI use, including recommendation algorithms, deep synthesis and generative AI. During the recent APEC meeting in Gyeongju, South Korea, President Xi Jinping said countries should promote the "sound and orderly development of AI while ensuring that it is beneficial, safe and fair".
 
Hong Kong’s regulatory landscape
In alignment with the national strategy to accelerate AI development, the Chief Executive's Policy Address this year aptly advocates the wide-scale application and development of AI across different sectors to boost overall efficiency. Another welcome move is tasking the Department of Justice with co-ordinating the responsible bureaux in reviewing the relevant laws, to complement the development and wider application of AI.

Notwithstanding the absence of specific AI legislation in Hong Kong, existing laws, including the Personal Data (Privacy) Ordinance (the "PDPO"), remain applicable to the development and use of AI. In this regard, my Office has issued multiple guidelines since 2021 to assist organisations in mitigating the privacy risks posed by AI while complying with PDPO requirements, including the "Guidance on the Ethical Development and Use of Artificial Intelligence" (2021), the "Artificial Intelligence: Model Personal Data Protection Framework" (2024) and the "Checklist on Guidelines for the Use of Generative AI by Employees" (2025).
 
International efforts to advocate privacy-protective AI
From the perspective of protecting personal data privacy, privacy-by-design and privacy-by-default are well-recognised fundamental principles in the adoption of AI. Accordingly, the development of AI systems should, from the outset, be guided by robust data and AI governance frameworks via a risk-based approach to uphold universal ethical principles such as accountability, transparency, fairness and security.

Privacy or data protection authorities worldwide are increasingly advocating privacy-protective AI. In a landmark move, during the 47th Global Privacy Assembly Conference (the "Conference") in September this year, 20 authorities from different jurisdictions, including my Office, signed the "Joint Statement on Building Trustworthy Data Governance Frameworks to Encourage Development of Innovative and Privacy-protecting AI", advocating, among other things, the incorporation of data protection principles into the approach to AI systems and the establishment of robust data governance.

At the same Conference, attended by over 130 privacy or data protection authorities from around the globe, my Office co-sponsored two resolutions on AI: the "Resolution on Meaningful Human Oversight of Decisions Involving AI Systems" and the "Resolution on the Collection, Use and Disclosure of Personal Data to Pre-Train, Train and Fine-Tune AI Models". The former sets out key regulatory expectations regarding human oversight of AI-assisted decision-making, whereas the latter recommends best practices for addressing the unique privacy challenges arising from AI training.

Even before the generative AI boom, my Office, as the co-chair of the GPA’s “International Enforcement Cooperative Working Group”, initiated joint actions with other privacy or data protection authorities worldwide to address the growing concerns over massive-scale data scraping from social media platforms or websites that host publicly accessible information. Well supported by other authorities, the initiative concluded with the issuance of two global joint statements in 2023 and 2024, reminding major social media platforms and other public websites of their responsibilities to protect users’ personal data from unlawful data scraping and advising them of organisational and technical measures to enhance the data security of their services.

In parallel, in our role as the co-chair of the GPA's "Ethics and Data Protection in Artificial Intelligence Working Group", we have sought to contribute to, and steer, international discussions among privacy or data protection authorities on AI security – specifically by proposing and adopting international resolutions, issuing international declarations or guidelines, and undertaking joint research into AI-related privacy issues.

As the race to adopt AI accelerates globally, so do expectations for responsible and safe innovation. Organisations, especially those operating across borders, should keep abreast of the evolving laws and regulatory changes in different jurisdictions so they can leverage the new technology in a privacy-protective, safe, responsible and ethical manner.