Using AI to manage personal data

POINT OF VIEW

AI has not only replaced the manual work of data processing but has surpassed the abilities of humans. They can also operate 24/7 and perform tasks that are integral to functions that require no down time.

Warren B. Chik

Associate Professor of Law


In brief

  • Artificial Intelligence (AI) is being used to strengthen personal data rights and interests, improve the accuracy of personal information, and make the collection and flow of personal data more secure. With AI, businesses can also comply with data protection laws more cost-effectively, while the authorities can better monitor compliance and trends. But there are also risks, as AI can be used by criminals with no regard for data protection rights.
  • In a recent paper, Associate Professor Warren Chik argues that organisations should consider the future role of AI in the management and protection of personal data, and how the technology can feature more prominently in the next revision of the Personal Data Protection Act.
  • Regulating the use of AI in managing and protecting data has become an increasingly urgent issue as data has become the most important driver of modern economic change and development. Assoc Prof Chik argues that the public sector should use AI to regulate and ensure compliance with personal data legislation.

Consumers and businesses have benefited from the development of artificial intelligence (AI) technology in recent years.

In the area of data protection, for instance, AI is being used to strengthen personal data rights and interests, improve the accuracy of personal information, and make the collection and flow of personal data more secure.

Using AI, businesses can also comply with data protection laws more cost-effectively, while the authorities can better monitor compliance and trends.

However, there are also risks – AI can be used by criminals regardless of data protection rights; an AI system can go “rogue” if there is insufficient human oversight. As such, experts have been calling for a strong regulatory regime and protocols for adequate human supervision to minimise these adverse outcomes.

This debate is timely for Singapore as its data protection regime, the Personal Data Protection Act (PDPA), is likely to undergo major changes in its next revision after an extensive period of review. The Act has been in force for the past five years.

In a recent paper, Associate Professor Warren Chik from the Singapore Management University (SMU) School of Law argues that organisations should consider the future role of AI in the management and protection of personal data, and how the technology can feature more prominently in ‘PDPA 2.0’.

“The PDPA does not have a specific provision on AI, but judging from the public consultation papers that have emerged from the PDPC (Personal Data Protection Commission) in recent years, we can expect AI to be a part of the conversation, and to be included, in the proposed provisions relating to data portability, the accountability measures and privacy by design,” he said.

“Certainly, AI will increasingly be used to comply with, and to regulate personal data protection principles as a matter of efficiency since data protection obligations, especially in bigger organisations, can be intricate and complex.”
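As a purely illustrative sketch of what AI-assisted compliance tooling might automate, and not something described in the paper or the PDPA, the snippet below screens free-text records for personal identifiers before they are stored or shared. The identifier patterns, record format, and function name are hypothetical.

```python
import re

# Hypothetical, simplified screening rules: Singapore NRIC/FIN-style identifiers
# and email addresses. A production tool would cover far more data types and
# would typically combine rules like these with trained models.
PATTERNS = {
    "nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}


def flag_personal_data(record: str) -> dict:
    """Return any personal identifiers found in a single free-text record."""
    findings = {}
    for label, pattern in PATTERNS.items():
        matches = pattern.findall(record)
        if matches:
            findings[label] = matches
    return findings


if __name__ == "__main__":
    sample_records = [
        "Customer S1234567D asked to update her mailing address.",
        "Send the invoice to jane.tan@example.com by Friday.",
        "No identifiers in this note.",
    ]
    for record in sample_records:
        findings = flag_personal_data(record)
        if findings:
            print(f"REVIEW before release: {findings} in {record!r}")
```

In practice, automated checks of this kind would sit alongside trained models and, crucially, human review of whatever is flagged.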

Growing urgency

Regulating the use of AI in managing and protecting data has become an increasingly urgent issue as data has become the most important driver for modern economic change and development.

At the same time, AI has become an integral tool for the management and processing of data, including personal data, as it provides greater accuracy and capability.

“AI has not only replaced the manual work of data processing but has surpassed the abilities of humans. They can also operate 24/7 and perform tasks that are integral to functions that require no down time,” said Assoc Prof Chik.

One recent example of AI’s increasing importance in managing data is its use in collecting and collating location and movement data for contact tracing purposes related to the current Covid-19 outbreak.

“Scholars are currently studying how AI, which can function optimally and with a high level of accuracy, can work quickly in such circumstances where time is of the essence. AI may also be used in the future for the delivery of medical and other services remotely, which obviates human contact in times like these where it must be minimised.”

Striking a balance

In light of the changing environment, AI should be harnessed by the public sector to regulate and ensure compliance with personal data regulations, said Assoc Prof Chik.

For example, AI can be used to detect breaches of privacy by organisations that fall under the purview of the PDPA.
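Purely by way of illustration (the article does not describe any specific tool, model or dataset used by the PDPC or by organisations), the sketch below shows one form such detection could take: an off-the-shelf anomaly detector flagging unusual personal-data access patterns for human review. The features, figures and library choice are assumptions made for this example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-user features drawn from access logs:
# [records accessed per day, share of access outside office hours,
#  bulk exports in the past month]. All figures are invented.
access_profiles = np.array([
    [40, 0.05, 0],
    [55, 0.10, 1],
    [35, 0.02, 0],
    [48, 0.08, 0],
    [900, 0.85, 12],  # unusually heavy, after-hours, export-prone activity
])

# An off-the-shelf anomaly detector; a label of -1 marks a profile as an outlier.
model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(access_profiles)

for profile, label in zip(access_profiles, labels):
    if label == -1:
        # Escalate to a human reviewer rather than acting automatically,
        # consistent with the article's point about human oversight.
        print(f"Escalate for review: access profile {profile.tolist()}")
```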

However, the authorities will have to decide to what extent AI can be left to run independently, and when there should be human oversight or intervention to reverse the ‘decision-making’ of an AI.

In a recent case in Singapore, a programmer’s knowledge of, and responsibility for, the “decisions” and actions made by an AI on an automated cryptocurrency trading platform emerged as an issue in determining whether a “mistake” had been made in a trade that resulted from a human lapse in updating the platform’s critical operating system.

Reflecting the complex legal issues that can arise in the use of AI and how it can affect existing legal principles, the Court of Appeal determined in a split decision that the contract was valid and enforceable.

Said Assoc Prof Chik: “Regulators must strike a balance between the policy interest of technological innovation and data protection.”

Originally published at https://news.smu.edu.sg/news/2020/04/15/using-ai-manage-personal-data