

Hong Kong: How Hong Kong PCPD’s Latest Guidelines Strengthen AI Security in the Workplace

Growing Emphasis on AI Security Around the Globe

While the popularity of artificial intelligence (“AI”) technology and the versatility of its application have grown by leaps and bounds in recent years, the security and personal data privacy risks associated with the development and use of this technology continue to spark discussions worldwide. Apart from the fast-developing powers and capabilities of AI, AI security has also emerged as a focal point on the international stage.

 
In February this year, world leaders and representatives of international organisations, civil society, the private sector, and the academic and research communities from around the world gathered in Paris to attend the AI Action Summit (“Summit”), with a view to collectively establishing scientific foundations, solutions and standards for more sustainable AI that works for collective progress and in the public interest. Towards the end of the Summit, more than 60 countries and international organisations signed the “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet”, which, among other things, affirmed the priority of ensuring that AI is human-centric, ethical, safe, secure and trustworthy, and emphasised that harnessing the benefits of AI technologies to support economies and societies depends essentially on advancing trust and safety.
 
Subsequently, at the opening ceremony of the 2025 World AI Conference and High-Level Meeting on Global AI Governance, held in Shanghai in July, Chinese Premier Li Qiang called on the international community to place greater emphasis on the joint governance of AI and underscored the need to develop consensus on how to strike a balance between development and security in view of the risks and challenges posed by AI. To foster global collaboration on AI governance, the Global AI Governance Action Plan was also unveiled at the 2025 World AI Conference, with AI security as a recurring theme. It calls on all parties, including governments, international organisations and enterprises, to advance global AI development and governance based on the objectives of, among other things, promoting AI for good and in the service of humanity, ensuring safety and controllability, and upholding fairness and inclusiveness.

Multi-pronged Approach to Fostering AI Security in Hong Kong

Turning to Hong Kong, while there is at present no overarching legislation on AI, data protection laws still apply, as in many jurisdictions around the globe, insofar as the collection, processing and use of personal data in the context of AI are concerned. My Office (the Office of the Privacy Commissioner for Personal Data, Hong Kong, China) has issued a series of guidance materials on AI since 2021, including the “Guidance on the Ethical Development and Use of Artificial Intelligence”, published in August 2021, and the “Artificial Intelligence: Model Personal Data Protection Framework” (the “Model Framework”), published in June 2024. These guidelines were published with a view to providing internationally well-recognised and practicable recommendations and best practices to assist organisations in addressing the personal data privacy risks posed by AI and ensuring that the development, procurement and use of this emerging technology comply with the relevant requirements of the Personal Data (Privacy) Ordinance (the “PDPO”).
 
To underscore the importance of safeguarding AI security and personal data privacy, and to promote these guidance materials, my Office has organised various seminars and training sessions tailored for industry players and the general public. We have also set up a dedicated “AI Security” thematic webpage, which serves as a one-stop platform for AI-related resources, and an “AI Security” hotline for organisations to make enquiries.
 
I am pleased to see that our efforts have yielded tangible and positive results. In compliance checks conducted by my Office earlier this year, all 60 local organisations examined were found to be in compliance with the PDPO in their collection, use and processing of personal data when using AI. In particular, among the organisations that collected and/or used personal data through AI systems, more than 60% made reference to the aforementioned guidelines on AI published by my Office, and nearly one-third planned to do so.
 
It is especially encouraging to note that, among the organisations that collected and/or used personal data through AI systems, around 80% had already established AI governance structures, such as setting up AI governance committees and/or appointing designated personnel to oversee the use of AI systems, and over 60% had provided training to employees on AI-related privacy risks, both of which are key recommendations outlined in the Model Framework. This shows that many organisations are not just talking the talk, but also walking the walk.

Building AI Security from Within

In Hong Kong, generative AI (“Gen AI”) stands out as one of the most widely adopted AI technologies among organisations and their employees. However, a recent local study published by the Hong Kong Federation of Trade Unions (“HKFTU”) showed that nearly 70% of employees did not regularly or proactively disclose their use of Gen AI to their employers, and more than 40% expressed concern about potential liability for exposing or mishandling personal data or confidential information when using Gen AI.
 
Furthermore, although some encouragement can be drawn from a local survey on AI security conducted by the PCPD in 2024, which showed that nearly 70% of local enterprises recognised significant privacy risks associated with AI use, only 28% of the surveyed enterprises had an AI security policy in place, with another 27% planning to devise one in the future.
 
Given these findings, the question arises of how an organisation can ensure that its employees use AI safely and responsibly, striking a balance between leveraging the benefits of the new technology and safeguarding the interests of both the organisation and its employees, so as to create a win-win situation.
 
To help organisations enhance AI security, my Office issued the Checklist on Guidelines for the Use of Generative AI by Employees (the “Gen AI Guidelines”) in March this year, with the aim of assisting organisations in developing an internal policy or guideline on the use of Gen AI by employees at work while complying with the requirements of the PDPO.
 
Amongst the many components of building a secure AI environment within an organisation, having a clear and comprehensive policy which sets out the “dos” and “don’ts” is a crucial first step. Therefore, the Gen AI Guidelines, presented in a checklist format, outline the following key aspects that are recommended for inclusion in an organisation’s internal AI policy or guideline:
 
  1. Scope of Permissible Use of Gen AI
Firstly, the AI policy or guideline should specify the Gen AI tools that are permitted within the organisation, which may include publicly available tools and/or internally developed tools. In addition, organisations should clearly define the permissible purposes for using these tools to avoid ambiguity – for example, whether employees may use such tools for drafting documents, summarising information or creating textual, audio and/or visual content, such as photos or videos for promoting the organisation’s products or services.
 
To clearly delineate accountability, it is also important for organisations to specify whether such policies are applicable to the entire organisation or only to specific divisions or employees. 
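
By way of illustration only, the scope rules described above could be captured in a simple machine-readable form so that they can be checked consistently across the organisation. The tool names, purposes and divisions in the sketch below are hypothetical assumptions for this example, not part of the Gen AI Guidelines.

```python
# Illustrative sketch only: a hypothetical "scope of permissible use" policy
# encoded as data. All tool names, purposes and divisions are invented.
from dataclasses import dataclass, field

@dataclass
class GenAIScopePolicy:
    permitted_tools: set[str] = field(default_factory=set)     # approved Gen AI tools
    permitted_purposes: set[str] = field(default_factory=set)  # e.g. drafting, summarising
    covered_divisions: set[str] = field(default_factory=set)   # who the policy applies to

    def is_permitted(self, tool: str, purpose: str, division: str) -> bool:
        """Return True only if the tool, the purpose and the division are all in scope."""
        return (
            tool in self.permitted_tools
            and purpose in self.permitted_purposes
            and division in self.covered_divisions
        )

# Hypothetical example values:
policy = GenAIScopePolicy(
    permitted_tools={"internal-chatbot"},
    permitted_purposes={"drafting", "summarising"},
    covered_divisions={"marketing"},
)
print(policy.is_permitted("internal-chatbot", "drafting", "marketing"))  # True
print(policy.is_permitted("public-tool-x", "drafting", "marketing"))     # False
```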
 
  2. Protection of Personal Data Privacy
To mitigate the risks of misuse or unauthorised disclosure of personal data, it is recommended that organisations provide clear instructions in relation to both the “inputs” and “outputs” of the Gen AI tools. Regarding the permissible inputs to the tools, organisations should specify the types and amounts of information that can be entered. For instance, clear instructions should be provided on whether employees may share personal or copyrighted data with the tools.
 
Regarding the outputs generated by the Gen AI tools, the AI policy should outline the permissible purposes for using the outputted information (including personal data), and whether, when and how such personal data should be anonymised before further use. Additionally, guidance should be provided in respect of the permissible storage of the output information and the applicable data retention policy. Organisations should also ensure that their AI policies are aligned with other relevant internal policies, including those on personal data handling and information security.
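
Purely as an illustrative sketch of how the “input” side of such instructions might be supported in practice, the snippet below redacts two common personal data patterns from a prompt before it is submitted to a Gen AI tool. The patterns, and the idea of automated pre-submission redaction, are assumptions made for this example; the Gen AI Guidelines do not prescribe any particular tooling.

```python
# Illustrative sketch only: redact common personal data patterns (here, email
# addresses and Hong Kong ID card-like strings) from a prompt before it is
# sent to a Gen AI tool. The patterns are simplified and not exhaustive.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "HKID": re.compile(r"\b[A-Z]{1,2}\d{6}\(?[0-9A]\)?"),  # e.g. A123456(7)
}

def redact_prompt(prompt: str) -> str:
    """Replace each match of each pattern with a placeholder tag."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_prompt("Contact chan@example.com, HKID A123456(7), re: refund."))
# Contact [EMAIL REDACTED], HKID [HKID REDACTED], re: refund.
```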
 
  3. Lawful and Ethical Use and Prevention of Bias
To ensure lawful and ethical use of the Gen AI tools, it should be specified in the AI policy that employees shall not use such tools for unlawful or harmful activities.
 
The Gen AI Guidelines also recommend that the AI policy or guideline emphasise that employees acting as human reviewers are responsible for verifying that AI-generated outputs are accurate and up-to-date, through practices such as proofreading and fact-checking, and for correcting and reporting biased or discriminatory AI-generated outputs.
 
To enhance transparency and avoid misleading customers or other stakeholders, organisations should also provide instructions on when and how to watermark or label AI-generated outputs.
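
As a minimal sketch, assuming an organisation chooses to label AI-generated text with a visible disclosure line, such an instruction might be implemented along the following lines; the wording of the label and the function name are hypothetical.

```python
# Illustrative sketch only: prepend a provenance label to AI-generated text
# before it is shared with customers or other stakeholders. The label wording
# is a hypothetical example, not prescribed by the Gen AI Guidelines.
def label_ai_output(text: str, tool_name: str) -> str:
    """Attach a visible disclosure line identifying AI-generated content."""
    return f"[AI-generated with {tool_name}; reviewed by staff]\n{text}"

print(label_ai_output("Our new service launches next month.", "internal-chatbot"))
```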
 
  4. Data Security
With the aim of safeguarding data security, the AI policy should specify the types of devices on which employees are permitted to access Gen AI tools and the categories of employees who are permitted to use these tools, such as those with operational needs who have received relevant training. Employees should be required to use robust user credentials and to maintain stringent security settings in Gen AI tools. For instance, disabling saving functions and the sharing of prompts with Gen AI tool providers can help minimise the risk of data security incidents and of behavioural profiling of employees.
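
For illustration only, the access controls described above might be combined into a single pre-use check along the following lines. The device types, user names and setting names are hypothetical assumptions for this example.

```python
# Illustrative sketch only: allow access to Gen AI tools solely for trained
# employees on permitted devices with stringent security settings in force.
# All values below are invented for this example.
ALLOWED_DEVICES = {"corporate-laptop"}  # permitted device types
TRAINED_USERS = {"alice", "bob"}        # employees who have received relevant training
REQUIRED_SETTINGS = {
    "chat_history_saving": False,          # saving functions disallowed
    "share_prompts_with_provider": False,  # prompt sharing with the provider disabled
}

def may_use_gen_ai(user: str, device: str, settings: dict[str, bool]) -> bool:
    """Permit use only when the user, device and tool settings all comply."""
    return (
        user in TRAINED_USERS
        and device in ALLOWED_DEVICES
        and all(settings.get(key) == value for key, value in REQUIRED_SETTINGS.items())
    )

print(may_use_gen_ai("alice", "corporate-laptop",
                     {"chat_history_saving": False,
                      "share_prompts_with_provider": False}))  # True
print(may_use_gen_ai("carol", "personal-phone", {}))           # False
```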
 
In the event of any incident involving AI, such as a data breach, an unauthorised input of personal data into Gen AI tools, an abnormal output result and/or an output result that may breach the law, employees should be required to report such incidents according to the organisation’s own AI Incident Response Plan.
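
A minimal sketch of what a structured incident record might look like under a hypothetical internal AI Incident Response Plan follows; the field names and incident types are illustrative assumptions, as the Gen AI Guidelines leave the design of the plan to each organisation.

```python
# Illustrative sketch only: a minimal, timestamped record for reporting an AI
# incident under a hypothetical internal AI Incident Response Plan.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    reporter: str
    incident_type: str  # e.g. "data breach", "unauthorised input of personal data"
    description: str
    reported_at: str    # ISO 8601 timestamp

def report_incident(reporter: str, incident_type: str, description: str) -> AIIncidentReport:
    """Create a timestamped record for escalation to the incident response team."""
    return AIIncidentReport(reporter, incident_type, description,
                            datetime.now(timezone.utc).isoformat())

print(report_incident("alice", "unauthorised input of personal data",
                      "Customer personal data pasted into a public Gen AI tool."))
```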
 
  5. Violations of AI Policy
Lastly, the Gen AI Guidelines recommend organisations specify the possible consequences of employees’ violations of the policy or guideline on the use of AI. For recommendations on establishing a proper AI governance structure as well as related measures for the management and continuous monitoring of employees’ use of Gen AI tools, organisations are encouraged to refer to the Model Framework.
 
Practical Tips
In addition to the five key areas outlined above, the Gen AI Guidelines also provide practical tips to organisations on supporting employees in using Gen AI tools. 
 
For example, organisations are encouraged to regularly communicate the AI policy or guideline to employees and keep employees informed of any updates in a timely manner to enhance transparency. Organisations can also establish channels for employees to provide feedback that can help the organisations identify areas for improvement.
 
Moreover, it is recommended that organisations provide training and resources to employees so that they may use Gen AI tools effectively and responsibly. It is also desirable for designated support teams to be set up to assist employees in using Gen AI tools in their work. Beyond providing technical assistance, the support teams should be able to address any concerns that employees have in relation to the AI policy or guideline.

Be the True Master of AI
As we stand at the crossroads of innovation and security, it is becoming clear that no matter how technology transforms, it must remain a tool that can be harnessed and controlled by humans. Although AI presents abundant opportunities, it must be used with great care. Organisations, as data users, have a responsibility to ensure that the new technology is used not only for the benefit of the organisation itself, but also safely and responsibly for the benefit of mankind. Establishing clear internal policies or guidelines that guide the safe, secure and responsible use of AI is the first indispensable step in that direction.

Ada Chung Lai-Ling, Privacy Commissioner for Personal Data, Hong Kong, China