Tech Firms Need to Develop AI Ethically and Responsibly -- Privacy Commissioner's article in South China Morning Post (April 2023)

Artificial intelligence, and in particular the rise of generative AI-powered chatbots such as OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Bing Chat and Baidu’s Ernie Bot, has been making waves.
 
While many embrace these technological breakthroughs as a blessing, the privacy and ethical implications demand closer scrutiny. An open letter calling for a six-month pause on the training of AI systems more powerful than GPT-4, to allow for the development and implementation of shared safety protocols, has garnered more than 26,000 signatories.  
 
According to McKinsey & Co, generative AI describes algorithms “that can be used to create new content, including audio, code, images, text, simulations, and videos”. Unlike earlier forms of AI that focus on automation or primarily conduct decision-making by analysing big data, generative AI has a seemingly magical ability to respond to almost any request in a split second, often giving a creative, new and convincingly human-like response.
 
The potential of generative AI to transform many industries by increasing efficiency and uncovering novel insights is immense. Several tech giants have started to explore how to implement generative AI models in their productivity software.  
 
In particular, general knowledge AI chatbots based on large language models, such as ChatGPT, can help draft documents, create personalised content, respond to employee inquiries, and more. One of the world’s biggest investment banks has reportedly “fine-tune trained” a generative AI model to answer questions on its investment recommendations.
 
But looking at generative AI through rose-coloured spectacles reveals just one side of the story. The use of AI chatbots also generates privacy concerns.
 
The operation of generative AI depends heavily on deep learning technology which involves analysis of a massive amount of unstructured data, often carried out with little supervision. Training data is reportedly collected and copied from the Internet and may include anything from sensitive personal information to trivia.  
 
Many AI developers are keen to keep their data sets proprietary and tend to disclose as little detail as possible on their data collection processes. So there is also a risk that personal data may not be collected fairly or on an informed basis.
 
And, since user conversations could be used as new training data for AI models, users may be inadvertently feeding sensitive information to different AI systems. This makes the personal data they input susceptible to misuse and raises questions about whether the “limitation of use” principle is adhered to.
 
To further complicate the picture, the ethical risks of inaccurate information, discriminatory content or biased output in the use of generative AI cannot be overlooked.  To quote the well-recognised Asilomar AI Principles: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” 
 
Is this the juncture for such planning and management and, more importantly, control? Indeed, it is noteworthy that the open letter does not call for a pause on AI development in general, but “merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities”. 
 
While the open letter advocates a pause on advanced AI training starting this summer, tech companies have a duty to review and critically assess the implications of their AI systems on data privacy and ethics, and to ensure that laws and guidelines are adhered to.
 
In this context, my office issued the “Guidance on the Ethical Development and Use of Artificial Intelligence” in August 2021 to help organisations develop and use AI systems in a privacy-friendly, ethical and responsible manner. The guidance recommends internationally recognised ethical AI principles covering accountability, human oversight, transparency and interpretability, fairness, data privacy, and beneficial AI, as well as reliability, robustness and security. These are all principles that organisations should observe during the development and use of generative AI to mitigate risks.
 
AI developers are often in the best position to adopt a privacy-by-design approach, which can mitigate the privacy risks for users. For instance, the adoption of techniques such as anonymisation can ensure that all identifiers of data subjects have been removed from the training data. A fair and transparent data collection policy would foster trust between the AI systems and their users.
 
The government’s proposal to develop an AI supercomputing centre in Hong Kong as a major strategic technological infrastructure will augment local research and development capabilities, facilitate the development of a healthy industry ecosystem and accelerate the development of the local AI industry.  An ecosystem that values and respects fundamental human values, including the privacy of personal data, would benefit society.
 
Generative AI presents an exciting yet complicated landscape with opportunities waiting to unfold. Its rapid ubiquity should be a signal for all stakeholders, in particular tech companies and AI developers, to join hands in creating a safe and healthy ecosystem to ensure this technology is used for our good.