
Healthcare & AI for a Rapidly Ageing Singapore


3 May 2020


Earlier this year, the SMU Centre for AI & Data Governance and SGInnovate hosted a panel discussion on the ‘Challenges of employing AI in the healthcare sector’. The panel was chaired by Ms Sunita Kannan, an expert in data, AI advisory and responsible AI. The other panellists were Professor Dov Greenbaum (director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies, Interdisciplinary Center, Herzliya, Israel), Dr Tan Jit Seng (founder and director of Lotus Eldercare and vice president of the Asia Pacific Assistive Robotics Association) and Mr Julien Willeme (legal director for Medtronic, Asia-Pacific).

Introduction

The panel started by observing that, like many developed Western countries, Singapore was facing a demographic time bomb in the form of a rapidly ageing population that was also expected to live longer. In this regard, AI was gaining prominence as a way to augment the traditional physician-hospital caregiver role.

Data Collection

The panel then discussed the foundation of AI: data collection. Through the collection of patient data, companies can not only develop useful medical devices but also glean clinical insights, which are in turn sold to patients and clinicians.

Singapore has the necessary quantity of data required to support AI development. However, the data may not be legally accessible, and current data and privacy regulations in Singapore might pose a roadblock for companies wishing to develop AI technology.

Furthermore, the quality of the data may not always be adequate. For AI to be applied effectively in a healthcare setting, clinical data has to be combined with wellness and lifestyle data. Yet, in practice, it has been extremely difficult to obtain both to a high degree of quality; indeed, few research centres in the world currently possess the two.

The next question was: who owned the data? As individuals, what ownership did we retain over our data once we relayed it to a healthcare provider? Could a physician sell patients’ data to a third party for profit? Legally, various stakeholders had good arguments for asserting ownership over medical data.

In this regard, it was arguable that traditional labels such as “ownership” were obsolete. Rather, what was needed was a new set of rules and regulations tailored to data collection. While the European Union’s General Data Protection Regulation was right to propose a new generation of rights relating to data in general, such regulations might not be commercially viable in this particular area. That said, the regulation did show promise in developing a framework of rights and obligations that companies must adhere to with regard to “data sharing”. Overall, then, many legal issues in the area of data management remain unresolved.

Application of AI in the Healthcare Setting

The panel then discussed AI developments in healthcare. AI is already used in clinical decision support tools, which capture data to provide physicians with enhanced, detailed information. An example is the PillCam, which is being used to replace traditional colonoscopy: instead of undergoing the uncomfortable procedure, a patient swallows a pill containing a camera, which takes images of the colon for analysis. An AI algorithm improves the analysis further, detecting abnormalities more accurately and five times more quickly than a physician.
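To make that workflow concrete, below is a minimal, hypothetical sketch of how such a triage pipeline might be structured. The `score_frame` function is a stand-in for a trained image classifier (the panel did not describe any particular model), and the threshold is an arbitrary assumption; the point is only that the AI shortlists suspicious frames while the physician makes the final call.

```python
# Hypothetical sketch of AI-assisted triage for capsule-endoscopy
# frames. score_frame stands in for a trained image classifier;
# nothing here reflects the PillCam's actual implementation.

from typing import List, Tuple

def score_frame(frame: bytes) -> float:
    """Placeholder for a trained model: returns a probability (0-1)
    that the frame shows an abnormality. Dummy value for illustration."""
    return (sum(frame) % 100) / 100.0

def triage(frames: List[bytes], threshold: float = 0.9) -> List[Tuple[int, float]]:
    """Return (index, score) pairs for frames flagged for human review.
    The physician, not the model, decides what to do with each one."""
    return [
        (i, score)
        for i, frame in enumerate(frames)
        if (score := score_frame(frame)) >= threshold
    ]

if __name__ == "__main__":
    # A capsule transit can produce tens of thousands of frames;
    # triage narrows them down to a shortlist for the physician.
    frames = [bytes([i % 256]) for i in range(10_000)]
    flagged = triage(frames)
    print(f"{len(flagged)} of {len(frames)} frames flagged for review")
```

The speed advantage the panel mentioned comes from exactly this filtering: the model screens every frame, and the doctor reviews only the shortlist.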

AI also has applications in geriatric medicine. AI imaging can now be used for diabetic wound management: the AI analyses a photo taken with one’s phone and detects whether a wound is infected or healing, saving manpower and healthcare costs. There are also early diabetic screening services such as retinal imaging, where, through the analysis of a photo, the AI can accurately flag potential diabetics.

AI cannot (at present) replace doctors; rather, it serves as an effective tool in helping doctors make efficient and informed decisions.

Reliability of AI and Improving the Quality of Care

The panel noted that AI was only as good as the data collected, and that further measures were needed to incentivise people to share their data. This, of course, had to be balanced against factors such as an individual’s right to privacy and the right to delete their data. Furthermore, in Singapore, the government is one of the biggest holders of medical records. Thus, before other agents are allowed to retrieve medical records, legislation is required to regulate access and protect against abuse of the information.

Another issue is that the medical industry is still adapting to the digitisation of medicine. Many practitioners still use pen and paper, and medical schools might need to teach doctors how to input data. There has also been difficulty translating written records into electronic ones, owing to doctors’ use of short forms, not to mention illegible handwriting. Hence, to raise the quality of their prescriptions and treatment, it is essential that doctors first have a clear understanding of AI devices and how they can help.

Furthermore, AI is not immune to bias. Because the outputs of AI in healthcare are based on a specific subset of data, it is unclear whether conclusions drawn from that data can be transferred to a different group of people. In addition, there have been healthcare studies based on data collected from devices such as Fitbits and Apple Watches. However, as the data collection relies on the devices’ sensors, it is almost impossible to cross-check the accuracy of the data, and the devices themselves are susceptible to damage and faults. Given these challenges, it is unclear whether we can standardise and trust such data.
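The cross-checking problem can be illustrated with a simple plausibility filter. The bounds below are assumptions invented for illustration, not clinical thresholds; without an independent sensor, a filter like this can only flag obviously implausible readings, never verify the rest.

```python
# Illustrative sketch: sanity-checking heart-rate readings from a
# wearable. With no independent sensor there is no ground truth, so
# the best one can do is flag physiologically implausible values.

from typing import List

# Assumed bounds for illustration only; not clinical thresholds.
MIN_BPM, MAX_BPM = 25, 220
MAX_JUMP_BPM = 40  # implausibly large change between adjacent samples

def suspect_indices(readings: List[int]) -> List[int]:
    """Return indices of readings that fail basic plausibility checks."""
    suspects = []
    for i, bpm in enumerate(readings):
        out_of_range = not (MIN_BPM <= bpm <= MAX_BPM)
        sudden_jump = i > 0 and abs(bpm - readings[i - 1]) > MAX_JUMP_BPM
        if out_of_range or sudden_jump:
            suspects.append(i)
    return suspects

if __name__ == "__main__":
    samples = [72, 75, 74, 180, 76, 0, 78]
    print(suspect_indices(samples))  # -> [3, 4, 5, 6]
```

Note that a single faulty reading also casts doubt on its neighbours, which is precisely why such data is hard to standardise and trust.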

Legal Liability for AI

The panel then debated who should be legally liable when AI went wrong. It considered AI in the following scenarios: “human in the loop” and “human out of the loop”.

Human in the Loop

How far do we keep the human within the AI equation? For instance, with regard to self-driving cars, when would people be comfortable saying that a car need not have a steering wheel, or a human ready to take over when something goes wrong? The irony was that people were often preoccupied with the cases where AI had cost human lives, while neglecting the fact that humans kill orders of magnitude more people through poor driving. On this point of keeping the human in the loop, is it always the case that the human (here, the doctor) needs to appreciate how the AI came to a decision? At what point do we have to use AI to maintain a standard of care? When do we allow AI to make the decision, and when do we fault the human for allowing AI to make it? The panel concluded that, as with many areas of technology law, these were issues with no ready answers, which would have to be litigated in court.

The panel also observed that given the limitations on data collection and standards to date, most of the AI products today would not replace the doctor’s ultimate decision-making (i.e. the human would remain in the loop). While the AI diagnosis of the patient is an important factor in the overall patient management plan, it is not the main factor affecting the outcome of that plan. For example, AI is currently only able to provide a single-point answer, such as telling a physician that the patient has a tumour; what to do with the tumour is still left up to the doctor. AI is also still unable to collect intangible data, such as a person’s will to live, which is particularly pertinent in healthcare for the aged.

Human out of the Loop

However, some AI technology consciously keeps the human out of the loop. For instance, every pacemaker today has a built-in algorithm that reads the wearer’s heart rhythm: if it detects an undesirable pattern, an automatic shock is delivered through the device, with an impact akin to being kicked in the chest by a horse. In the past, when the algorithm malfunctioned, some patients were shocked 60 times in the space of 10 minutes. Luckily, the newer generation of pacemakers will be AI-enabled with machine-learning capabilities, meaning the algorithm becomes smarter as the pacemaker works, something beyond human capabilities.
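As a rough illustration of this closed-loop logic, here is a toy sketch of the kind of rhythm check such a device might run. All intervals and thresholds are invented for illustration, and real implantable devices use far more sophisticated, regulated detection algorithms; the sketch simply shows why requiring a sustained abnormal rhythm, rather than a single bad reading, guards against the kind of misfiring described above.

```python
# Toy sketch of closed-loop rhythm detection, loosely modelled on the
# panel's description. Thresholds are invented for illustration; real
# device algorithms are far more sophisticated and tightly regulated.

from typing import List

DANGER_BPM = 190     # assumed tachycardia threshold
SUSTAIN_BEATS = 8    # beats above threshold before intervening

def bpm_from_rr(rr_interval_s: float) -> float:
    """Convert an R-R interval (seconds between beats) to beats/min."""
    return 60.0 / rr_interval_s

def should_shock(rr_intervals: List[float]) -> bool:
    """Intervene only if the rate stays dangerous for SUSTAIN_BEATS
    consecutive beats, so a single noisy reading cannot trigger a
    shock; this is the safeguard a misfiring device would lack."""
    consecutive = 0
    for rr in rr_intervals:
        if bpm_from_rr(rr) >= DANGER_BPM:
            consecutive += 1
            if consecutive >= SUSTAIN_BEATS:
                return True
        else:
            consecutive = 0
    return False

if __name__ == "__main__":
    normal = [0.8] * 20              # steady 75 bpm
    tachy = [0.8] * 5 + [0.3] * 10   # sustained 200 bpm episode
    print(should_shock(normal))      # False
    print(should_shock(tachy))       # True
```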

But keeping the human out of the loop has raised other questions regarding legal liability. The US Food & Drug Administration had suggested imposing legal liability on such AI as if it were an actual person. Yet, this too raised questions: would the AI then be fully accountable for everything and hence absolve humans from all related liability? And even if the AI was treated as a legal entity, it could not be put in jail or sentenced to death.

Another question raised was: if the goal was to improve patient outcomes, would it matter that the AI was no longer explainable? The lawyer’s response would be that one needed to understand how something worked before one could pinpoint how it went wrong. The doctors also preferred a conservative approach to experimentation, naturally so, as human lives were at stake. And while being unable to appreciate how something worked did not necessarily mean we could not use it, if we did not know how the AI worked, we would not be able to learn from it and pass that knowledge on.

Final Thoughts

The seminar provided interesting insights into how AI could help improve the quality of care for Singapore’s ageing population. The lack of a clear path on AI’s trajectory of development reflected the many variables present in our society, such as government regulations and the receptiveness of the elderly towards AI devices.

While AI would play a bigger role in the medical field in time to come, it was uncertain whether Singapore had the appetite to allow it to develop to the point where the human was out of the loop. But it was clear that lawyers would have to deal head-on with the difficult questions of who owned data and who should be liable when AI went wrong.

At SGInnovate, we regularly hold discussions across our ecosystem to share and inspire ideas on Deep Tech innovation. Keep up with our events here.

This article was first published in SMU’s Lexicon, an SMU law student publication.
