Newspaper Column

PCPD’s Guidelines for Devising an Internal Gen AI Policy Create a Win-Win Situation

Since artificial intelligence (AI) chatbots took the world by storm in 2022, human resources professionals have faced a conundrum. Although generative AI (Gen AI) can potentially enhance an organisation’s productivity, the excitement over this emerging technology is tempered by the challenges in governing its use and ensuring data security and compliance with laws such as the Personal Data (Privacy) Ordinance (the PDPO).
 
Compliance checks completed by my Office (the PCPD) in May 2025 found that 80% of the organisations examined used AI in their day-to-day operations. A recent study published by the Hong Kong Federation of Trade Unions (the HKFTU) further revealed that nearly 70% of employees did not regularly or proactively disclose their use of Gen AI to their employers, and that more than 40% expressed little concern about the risk of exposing or mishandling personal data or confidential information when using Gen AI. With these trends in mind, the question arises of how an organisation can ensure that its employees use AI safely in the ever-evolving digital landscape, leveraging the benefits of the new technology while safeguarding the interests of both the organisation and its employees to create a win-win situation.
 
AI Security
As data is the lifeblood of AI, threats to personal data privacy are among the most concerning risks posed by the technology. Although some encouragement can be taken from the PCPD’s 2024 survey, which showed that nearly 70% of enterprises recognised significant privacy risks in AI use, only 28% of those enterprises had established an AI security policy. Clearly, awareness does not equate to action, and this gap leaves some organisations vulnerable to AI security risks and their employees uncertain about what is permissible.
 
The PCPD’s New Guidelines
To help organisations and members of the public address the privacy risks brought by the AI tsunami, the PCPD has published a series of guidance materials and leaflets since 2021. In particular, the “Checklist on Guidelines for the Use of Generative AI by Employees” (the Guidelines) was published in March 2025 to help organisations develop internal policies or guidelines on the use of Gen AI by employees at work (AI policy) while complying with the relevant requirements of the PDPO. The HKFTU study published in May calls, amongst other things, for organisations to make reference to the Guidelines when formulating an internal AI policy.
 
The Guidelines recommend that organisations consider the following areas when developing an internal AI policy.

a) Scope of Permissible Use of Gen AI
Organisations should specify the permitted Gen AI tools and clearly define the permissible purposes for using these tools; for example, whether an employee can use these tools for drafting documents and summarising information. To delineate accountability, organisations should also specify whether the AI policy applies to the entire organisation or only to specific divisions.

b) Protection of Personal Data Privacy
The Guidelines recommend that organisations provide clear instructions on the “inputs” and “outputs” of Gen AI tools. Specifically, an AI policy should state the permissible types and amounts of information that may be inputted into Gen AI tools, as well as the permissible purposes for using AI-generated outputs, the permissible means of storing such information, and the applicable data retention policy and other relevant policies with which employees must comply.

c) Lawful and Ethical Use and Prevention of Bias
An organisation’s AI policy should specify that employees must not use Gen AI tools for unlawful or harmful activities, and that employees are responsible for verifying the accuracy of AI-generated outputs and for correcting and reporting biased or discriminatory outputs. Organisations should also provide instructions on when and how to watermark or label AI-generated outputs.

d) Data Security
To safeguard data security, the Guidelines recommend that an organisation’s AI policy should specify which categories of employees are permitted to use Gen AI tools and the types of devices on which their use is permitted. Employees should use robust user credentials and maintain stringent security settings in these tools. They should also be required to report AI incidents in accordance with the organisation’s AI incident response plan.

e) Violations of AI Policy
Lastly, organisations should specify the possible consequences of violations of the AI policy by employees. For recommendations on establishing a proper Gen AI governance structure and other relevant considerations, organisations can refer to the PCPD’s “Artificial Intelligence: Model Personal Data Protection Framework”.

Practical Tips
In addition, the Guidelines provide practical tips on supporting employees in using Gen AI tools, including (a) enhancing transparency by regularly informing employees of the AI policy and any updates, (b) providing training and resources, (c) assisting employees with a designated support team, and (d) establishing a feedback mechanism for identifying areas for improvement.
 
Act Now
As digital technology continues to develop at breakneck speed, imposing a blanket ban on AI tools is clearly not the optimal response to the challenges they bring. Organisations are instead encouraged to devise an internal AI policy or guideline that gives employees clear guidance on the use of Gen AI at work. A clear AI policy or guideline can help build trust and understanding between an organisation and its employees over the use of AI, thereby creating a win-win situation that is conducive to the success of the organisation.