This is the second part of the interview (the first part is available here) with Sahil Bansal, an experienced AI practitioner in healthcare, a dedicated writer on the topic, and a co-founder of several successful startups in the area. Below is our conversation with Sahil about his vision for AI in healthcare, and about the ethical and legal challenges on the way to achieving that vision:
I would say, not quite yet, but AI is well on its way to becoming an extremely viable contender for being declared the OS of Healthcare. When we say Operating System, we mean that AI will power every aspect of healthcare, right from appointment scheduling to diagnostics and surgery to tracking outpatient care. This is a distant but extremely plausible future.
The pros, of course, include significant breakthroughs in medicine. AI can combine the knowledge of mental health specialists, dieticians, surgeons, and research specialists, built up over thousands of years, and by drawing on data from each of these fields it can help us make, within days, deductions that would otherwise take lifetimes of effort.
The cons include the resources required to ensure that the data being fed to the system is accurate. This is of utmost criticality. Hence, the collection of patient data would have to be monitored and regulated stringently, which would introduce more complexity than monitoring data manually.
However, in my opinion, at the stage we are at today, the pros strongly outweigh the cons.
I would say that ethical concerns are being raised about AI in general, and not only in healthcare.
The issues related to informed consent can be addressed by educating people about what it entails to have their data collected and how it can be used. For instance, if I let my dentist save my root canal surgery details, and if I let my general practitioner save the dates of my migraines, I might be okay with a system someday correlating the two. Of course, this is a simplified example, but as long as people understand how their data is used, they can decide whether it is safe to share or whether they feel the need to shield it. That awareness can only come with an understanding of how data is collected, used, and shared, and it requires transparency from the companies that collect the data.
Biases in the algorithms are more complex. Regulatory bodies are being set up across governments to ensure that recommended practices are followed when using AI, and due diligence is actively carried out to ensure data is used appropriately.
This is a loaded question! One of the technical ways in which we can address concerns regarding privacy and data protection is the use of synthetic data produced by generative models. Generative models create realistic but simulated data, which removes any connection to real individuals.
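To make the idea concrete, here is a minimal sketch (not a description of Sahil's own work or of any specific product) of generating a synthetic patient table by sampling each column from distributions fitted to a hypothetical real dataset. The column names and parameters are invented for illustration; real generative models, such as GANs or variational autoencoders, would also capture the correlations between columns.

```python
# Minimal illustration: synthesize a patient table whose columns follow the
# marginal distributions of a (hypothetical) real dataset, so the generated
# rows resemble real records but correspond to no actual individual.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Stand-in for a protected real dataset (values invented for the example).
real = pd.DataFrame({
    "age": rng.integers(18, 90, size=500),
    "systolic_bp": rng.normal(125, 15, size=500).round(),
    "diagnosis": rng.choice(["migraine", "hypertension", "none"],
                            size=500, p=[0.2, 0.3, 0.5]),
})

def synthesize(df: pd.DataFrame, n_rows: int, rng) -> pd.DataFrame:
    """Sample a new table whose columns follow the fitted per-column distributions."""
    columns = {}
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            # Numeric columns: sample from a normal fitted to the mean and spread.
            columns[col] = rng.normal(df[col].mean(), df[col].std(), n_rows)
        else:
            # Categorical columns: resample according to observed frequencies.
            freqs = df[col].value_counts(normalize=True)
            columns[col] = rng.choice(freqs.index, size=n_rows, p=freqs.values)
    return pd.DataFrame(columns)

synthetic = synthesize(real, n_rows=500, rng=rng)
print(synthetic.head())
```

This independent-column sampling is only a toy: it discards the relationships between fields that make medical data useful, which is exactly what more sophisticated generative approaches try to preserve while still protecting individuals.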
Because the potential for legal risk around AI in healthcare is so complex, it is important to have contracts that set out the rights and duties of the involved parties, as well as how liabilities will be handled. As AI matures, these regulatory bodies will need to mature as well.
I believe the adoption of AI in healthcare is inevitable. In some capacity, tasks of every type, from administrative to surgical, will be conducted by AI. Today, medical research is a slow and arduous task, with a lot of time invested in cross-checking facts. This is an area that AI will revolutionise.
We will hopefully be able to understand the causes of many more illnesses, including mental health issues, genetic conditions, and pandemics! The exact future cannot be predicted, but at the current rate of development we can say with considerable confidence that we will make leaps and bounds of progress in understanding the human body.