We are living in the midst of the Fourth Industrial Revolution, which is characterised by breakthroughs and innovation in information and communications technology, including robotics, artificial intelligence, machine learning, the Internet of Things and autonomous vehicles. The advancement of technology has transformed our lives – activities that were hitherto conducted in the physical world have now moved to the digital world. Browsing websites or leaving comments on social media platforms leaves behind our data as a digital footprint. Data is collected and generated on a massive scale and at an unprecedented speed. This massive amount of digital data is conducive to the development of artificial intelligence (AI).
In contemporary discourse, AI mainly refers to a new class of algorithms configured on the basis of machine learning techniques. The instructions to be carried out are no longer programmed by a human developer alone, but are also generated by the machine itself, which “learns” from the input data. These machine learning algorithms can perform tasks that conventional computer programmes were incapable of (for example, detecting a particular object in a vast image dataset). More challenging still is the prospect of “artificial general intelligence”: “notional future AI systems that exhibit apparently intelligent behaviour at least as advanced as a person across the full range of cognitive tasks”, according to the report titled “Preparing for the Future of Artificial Intelligence” published by the Executive Office of the President of the United States and the National Science and Technology Council Committee on Technology in October 2016.
AI and machine learning have facilitated innovative applications such as language processing, translation, image recognition, virtual assistants, profiling, credit scoring and automated decisions. AI technology can boost productivity, transform businesses and enhance the standard of living of the community. However, the commercial development and adoption of AI raise a variety of ethical and privacy issues. The extensive and ubiquitous collection of personal data, together with unanticipated use and transfer of the data, has challenged data privacy frameworks around the world.
Most data protection laws reflect long-established principles. The 1980 OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data articulate eight basic principles of data protection: collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation and accountability. Some consider that AI is in tension with most of these data protection principles.
For instance, a significant part of our current data privacy regimes is based on notification and consent. Yet, in the era of AI, individuals may not be aware that their personal data is being collected or shared, not to mention giving consent to the collection.
Sophisticated data analytics and profiling techniques are capable of exposing a person’s most private and intimate space. Depending on the source and quality of the data, the results of data analytics may be highly biased or discriminatory. If decisions are then made based on these results, they will likely lead to significant harm or discrimination against the individuals concerned. AI can therefore create bias, discrimination and even exclusion.
AI requires a vast amount of data to train and advance its algorithms, yet the amassing of data poses significant risks to our personal and collective freedoms. Data protection laws, by contrast, generally advocate minimising the collection and retention of personal data. The advent of AI therefore dictates a review of the traditional approach to personal data protection.
The Personal Data (Privacy) Ordinance (Cap. 486) (Ordinance) was modelled on, inter alia, the 1980 OECD Guidelines, and intended to be principle-based and technology-neutral. It has the ability to adapt and respond to the changing privacy landscape to some extent. However, the advancement of technology and the proliferation of advanced data-processing activities are stretching the limits of fundamental data protection principles such as “notice and consent”, “use limitation” and “transparency”.
Under the Ordinance, data users are not permitted to use personal data for a new purpose without the consent of the individuals concerned (the “use limitation” principle). Prescribed consent must be express, voluntary and informed. Data subjects must be adequately informed of the new purpose, the potential data transferees, and the associated risks.
Given the inherent unpredictability of AI applications, it is difficult to provide specific information to data subjects before their personal data is collected, processed or used. Furthermore, it is important to bear in mind that AI systems and AI-assisted decision-making should be human-centric, putting consumers (i.e. the data subjects) front and centre of any AI project.
In the wake of these challenges to privacy protection, the Privacy Commissioner for Personal Data has recently advocated data ethics and accountability as a means of complementing the implementation and enforcement of the Ordinance, balancing the interests of both data users and data subjects in the era of the digital economy. More can be found in the report titled “Ethical Accountability Framework for Hong Kong, China”.