Artificial Intelligence under the Spotlight
Artificial intelligence (“AI”) has remained in the global spotlight in the first few months of 2025. In February, participants from over 100 countries, including government leaders and representatives from international organisations, gathered in Paris for the landmark AI Action Summit, following the 2023 AI Safety Summit in the United Kingdom and the 2024 AI Seoul Summit in South Korea. The AI Action Summit concluded with 64 jurisdictions, including China and the European Union, endorsing the “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet”, which affirms the priority of ensuring that AI is open, inclusive, transparent, ethical, safe, secure and trustworthy.
During the Country’s “two sessions” held in Beijing in March, the need to continuously advance the “AI Plus” Initiative to unleash the creativity of the digital economy was specifically highlighted. The “AI Plus” Initiative, first unveiled in the 2024 Central Government Work Report, aims to facilitate the integration of AI technology with professional knowledge and technical resources, propelling the expansion of the digital economy and spearheading the transformation and modernisation of manufacturing sectors.
While the Country is accelerating the development of AI, it also places equal emphasis on AI security and strives to foster global cooperation on this front. In April this year, when presiding over a group study session of the Political Bureau, President Xi Jinping, the General Secretary of the Communist Party of China Central Committee, emphasised the need for extensive development of international cooperation on AI, with China contributing to bridging the global AI divide.
These remarks reiterate the Country’s ongoing commitment to international collaboration on AI security. As early as October 2023, the Country introduced the “Global AI Governance Initiative” (“AI Governance Initiative”), which proposes principles such as “people-centred, AI for good” while emphasising the equal importance of the development and security of AI. Indeed, AI security is one of the important aspects of national security. To implement the AI Governance Initiative, the Country issued version 1.0 of the “AI Safety Governance Framework” in September 2024, setting out a series of comprehensive governance measures and technical measures to guard against various risks posed by AI, including privacy risks.
Turning to Hong Kong, AI remains a strategic focus of the Government. The “Hong Kong Innovation and Technology Development Blueprint”, promulgated by the Government in late 2022, identifies the development of the AI industry as a focus area. Building on this momentum, the Government reiterated its dedication to advancing AI development in the latest Budget announced in February, recognising AI as a core driver of new quality productive forces. To that end, the Government has rolled out various initiatives, including a notable allocation of HK$1 billion for the establishment of the Hong Kong AI Research and Development Institute, a move aimed at supporting our city’s innovative research and development as well as the industrial application of AI.
A Tool or a Threat?
Beyond policy initiatives, AI has already transformed workplaces in Hong Kong in unimaginable ways, with both employers and employees harnessing the benefits of AI to automate day-to-day tasks, streamline operational procedures and enhance productivity and services.
Behind the extraordinary capabilities of AI, however, lie risks that may not be easily noticeable. For instance, although generative AI (“Gen AI”) chatbots are capable of carrying out a wide array of tasks, such as consolidating information, generating graphics and videos, and even assisting in medical diagnoses, they may store some, if not all, of the prompts inputted and the files shared by users for the purpose of training the underlying models. With a vast amount of data stored in the systems, which may include personal data and confidential information, these chatbots may be seen as treasure troves of sensitive data and become attractive targets for hackers and cyber criminals. Hence, the very power that enables AI to process and analyse a large quantity of data is also a potential threat to personal data privacy.
In fact, the risks associated with the use of Gen AI are not mere speculation. Over the past few years, we have seen reports of various “AI incidents”, one of which involved employees of a leading Korean tech giant inputting sensitive internal source code into a Gen AI chatbot, leading to the leakage of that sensitive data. Incidents such as this send a clear message about the potential risks of AI tools. It is imperative, therefore, for organisations to pay heed to the risks of AI and implement robust and effective measures to ensure AI security.
Deep Dive into the Gen AI Checklist
Indeed, as revealed in a survey conducted by my Office (“PCPD”) in 2024 on AI security, nearly 70% of the surveyed enterprises recognised that using AI in their operations posed significant privacy risks, highlighting the pressing need to address these concerns. To mitigate such risks, organisations should, among other things, provide clear and comprehensive guidance to their employees to assist them in using AI tools in a safe manner, without jeopardising, in particular, personal data privacy.
To facilitate the safe and healthy development of AI in Hong Kong and to help organisations develop internal policies or guidelines on the use of Gen AI by employees at work while complying with the requirements of the Personal Data (Privacy) Ordinance (“PDPO”), the PCPD published the “Checklist on Guidelines for the Use of Generative AI by Employees” (“Guidelines”) in March this year.
The Guidelines recommend various aspects for organisations to consider when developing their internal AI policies or guidelines, including the key elements set out below.
Scope of Permissible Use of Gen AI
Firstly, the AI policy or guideline should specify the Gen AI tools that are permitted within the organisation, which may include publicly available tools and/or internally developed tools. In addition, organisations should clearly define the permissible purposes for using these tools to avoid ambiguity – for example, whether employees may use such tools for drafting documents, summarising information or creating textual, audio and/or visual content, such as photos or videos for promoting the organisation’s products or services.
To clearly delineate accountability, it is also important for organisations to specify whether such policies are applicable to the entire organisation or only to specific divisions or employees.
Protection of Personal Data Privacy
It is recommended that organisations provide clear instructions in relation to both the “inputs” and “outputs” of the Gen AI tools. Regarding the permissible inputs to the tools, organisations should specify the types and amounts of information that can be entered. For instance, clear instructions should be provided on whether employees may share personal or copyrighted data with the tools.
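As a purely illustrative sketch (not part of the Guidelines), an organisation that restricts the personal data employees may enter into Gen AI tools might supplement its policy with a simple technical safeguard that screens prompts before they leave the organisation. The patterns and the `redact_prompt` helper below are hypothetical examples for illustration only; a real deployment would require far more robust personal-data detection.

```python
import re

# Hypothetical patterns for common personal data formats (email addresses,
# Hong Kong Identity Card numbers, local eight-digit phone numbers).
# A real system would need a dedicated, more reliable PII-detection approach.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "HKID": re.compile(r"\b[A-Z]{1,2}\d{6}\([0-9A]\)"),
    "PHONE": re.compile(r"\b\d{4}[ -]?\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace matches of known personal-data patterns with placeholders
    before the prompt is sent to an external Gen AI tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_prompt("Contact Ms Chan at chan@example.com or 9123 4567."))
```

A wrapper like this would sit between employees and any permitted external tool, so that policy rules on permissible inputs are enforced technically as well as contractually.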
Regarding the outputs generated by the Gen AI tools, the AI policy should outline the permissible purposes for using the outputted information (including personal data), and whether, when and how such personal data should be anonymised before further use. Additionally, guidance should be provided in respect of the permissible storage of the output information and the applicable data retention policy. Organisations should also ensure that their AI policies are aligned with other relevant internal policies, including those on personal data handling and information security.
Lawful and Ethical Use and Prevention of Bias
To ensure lawful and ethical use of the Gen AI tools, it should be specified in the policy that employees shall not use such tools for unlawful or harmful activities.
The Guidelines also recommend that the AI policy or guideline should emphasise that employees acting as human reviewers are responsible for verifying the accuracy of AI-generated outputs through practices such as proofreading and fact-checking, and for correcting and reporting biased or discriminatory AI-generated outputs. To enhance transparency and avoid misleading customers or other stakeholders, organisations should also provide instructions on when and how to watermark or label AI-generated outputs.
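To make the labelling recommendation concrete, the hypothetical helper below appends a disclosure note to AI-generated text, recording the tool used and the employee who reviewed the output. The Guidelines do not prescribe any specific wording or format; the `label_ai_output` function and its notice text are illustrative assumptions only.

```python
from datetime import date

def label_ai_output(text: str, tool_name: str, reviewed_by: str) -> str:
    """Append a disclosure note to AI-generated content, recording the Gen AI
    tool used and the human reviewer, to support transparency with customers
    and other stakeholders. The wording here is a hypothetical example."""
    notice = (
        f"\n---\nThis content was generated with {tool_name} on "
        f"{date.today().isoformat()} and reviewed by {reviewed_by}."
    )
    return text + notice

print(label_ai_output("Draft product description...", "ExampleBot", "A. Lee"))
```

In practice, an organisation would decide where such a label appears (footer, metadata, or watermark for images and videos) and standardise it across all AI-assisted outputs.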
Data Security
With the aim of safeguarding data security, the AI policy should specify the types of devices on which employees are permitted to access Gen AI tools and the categories of employees who are permitted to use these tools. Employees should be required to use robust user credentials and maintain stringent security settings in Gen AI tools.
In the event of any incident involving AI, such as a data breach, an unauthorised input of personal data into Gen AI tools, an abnormal output result and/or an output result that may breach the law, employees should be required to report such incidents according to the organisation’s own AI Incident Response Plan.
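As one way of operationalising such reporting, an organisation might give employees a structured incident record to file under its AI Incident Response Plan. The sketch below is hypothetical: the category names mirror the examples above, but an actual plan would define its own categories, fields and escalation workflow.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical incident categories mirroring the examples in the Guidelines.
INCIDENT_TYPES = {
    "data_breach",
    "unauthorised_personal_data_input",
    "abnormal_output",
    "potentially_unlawful_output",
}

@dataclass
class AIIncidentReport:
    """A minimal structured record an employee might file when reporting an
    AI incident under the organisation's AI Incident Response Plan."""
    incident_type: str
    tool_name: str
    description: str
    reported_by: str
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self):
        # Reject categories not defined in the (hypothetical) response plan.
        if self.incident_type not in INCIDENT_TYPES:
            raise ValueError(f"Unknown incident type: {self.incident_type}")
```

Capturing incidents in a consistent structure makes it easier for the organisation to triage reports, notify affected parties where required, and review recurring risks.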
Violations of Policies or Guidelines
Lastly, we recommend that organisations specify the possible consequences of employees’ violations of the policies or guidelines on the use of AI.
Practical Tips
The Guidelines also provide practical tips to organisations on supporting employees in using Gen AI tools. For example, organisations are encouraged to regularly communicate the AI policies or guidelines to employees and keep employees informed of any updates in a timely manner to enhance transparency. Organisations can also establish channels for employees to provide feedback that can help the organisations identify areas for improvement. Moreover, it is recommended that organisations provide training and resources to employees so that they may use Gen AI tools effectively and responsibly. Additionally, it is desirable for designated support teams to be set up to assist employees in using Gen AI tools in their work. Beyond providing technical assistance, the support teams should be able to address any concerns that employees may have in relation to the AI policies or guidelines.
For further details, you may refer to the Guidelines accessible via the link or QR Code below:
https://www.pcpd.org.hk/english/resources_centre/publications/files/guidelines_ai_employees.pdf.
The AI Model Personal Data Protection Framework
In adopting the Guidelines to formulate their AI policies, organisations may also wish to refer to the “Artificial Intelligence: Model Personal Data Protection Framework” (“Model Framework”) published by my Office in June 2024, which provides internationally well-recognised and practical recommendations and best practices to assist organisations in procuring, implementing and using AI in compliance with the relevant requirements of the PDPO.
Specifically, the Model Framework, which is based on general business processes, covers recommended measures in the areas of (1) AI strategy and governance; (2) risk assessment and human oversight; (3) customisation of AI models and implementation and management of AI systems; and (4) communication and engagement with stakeholders.
For further details, you may refer to the Model Framework accessible via the link or QR Code below:
https://www.pcpd.org.hk/english/resources_centre/publications/files/ai_protection_framework.pdf.
Conclusion
Looking ahead, as AI is transforming every aspect of our lives and the workplace at a pace close to exponential, the importance of ensuring AI security cannot be overstated. It is high time for organisations to devise their own AI policies or guidelines so that their employees can use the new technology effectively, responsibly and safely, and become the true master of the technology, not the other way round.