

Companies must take the initiative on safe AI practices

Privacy Commissioner's article in South China Morning Post (May 2025)

As artificial intelligence (AI) develops rapidly, an increasing number of organisations are leveraging this technology to streamline operations, improve quality and enhance competitiveness.

However, AI poses security risks that cannot be ignored, including risks to personal data privacy. For instance, organisations developing or using AI systems often collect, use and process personal data, creating privacy risks such as excessive collection, unauthorised use and breaches of personal data.

The importance of AI security has become a common theme in international declarations and resolutions adopted in recent years. In 2023, 28 countries, including China and the United States, signed the Bletchley Declaration at the AI Safety Summit in the UK. The declaration stated that misuse of advanced AI models could lead to catastrophic harm and emphasised the urgent need to address these risks.

In 2024, the United Nations General Assembly adopted an international resolution on AI, promoting “safe, secure and trustworthy” AI systems.

In pursuing technological and industrial innovation, China has emphasised both development and security. In 2023, Beijing launched the Global AI Governance Initiative, proposing principles such as taking a people-centred approach and developing AI for good. More recently, this April, when presiding over a group study session of the Political Bureau, President Xi Jinping remarked that while AI presents unprecedented development opportunities, it also brings risks and challenges not seen before. These risks and challenges are as unprecedented as they are real.

Around two years ago, Samsung banned its employees from using ChatGPT amid concerns about the leakage of sensitive internal information on such platforms. The crackdown was reportedly prompted by an engineer's accidental leak of internal source code.

At around the same time, OpenAI, the developer of ChatGPT, reported a major data leak. Sensitive data, including the titles of users' conversations with the chatbot, users' names, email addresses and even parts of their credit card numbers, were exposed.

As AI-powered chatbots mature and grow in popularity, they are finding their way into workplaces, with employees using them to prepare minutes, summarise presentations, produce promotional materials or even create or modify internal source code.

However, organisations must realise that such use of AI, while helping to automate workflows and boost productivity, also poses risks such as the leakage of confidential information or customers' personal data.

Organisations should note that, depending on the algorithm of the AI tool and the accessibility of the server, uploaded data may enter a large open database and be used to train the underlying model. The data may also be regurgitated in responses to prompts from a business rival or an innocent customer.

Given the privacy and security risks posed by AI, my office conducted compliance checks on 60 organisations across various sectors in February and published a report in early May this year.

The compliance checks were carried out to understand whether these organisations complied with the relevant requirements of the Personal Data (Privacy) Ordinance in the collection, use and processing of personal data during the use of AI, and whether proper governance was in place.

Based on the findings, 80 per cent of the organisations examined used AI in their day-to-day operations. Among these organisations, half collected and/or used personal data through AI systems. However, not all of them had AI-related policies: only about 63 per cent of the organisations that collected and/or used personal data through AI systems had such policies.

The importance of having an AI-related policy cannot be overstated. Organisations are recommended to formulate AI guidelines that strike a balance between business efficacy and data protection. To this end, my office published the “Checklist on Guidelines for the Use of Generative AI by Employees” in March to help organisations develop the right policies while complying with the requirements of the Personal Data (Privacy) Ordinance.

The guidelines, presented in a checklist format, recommend that an organisation’s internal AI policy include information on the permissible use of generative AI, the protection of personal data privacy, lawful and ethical use, bias prevention, data security and the consequences of violations. Organisations are responsible for ensuring the development or use of AI is not only beneficial for business but also lawful and safe. While leveraging AI to sharpen their competitive edge, they should not allow the technology to run wild or become their new master.