AI has huge potential to improve people’s health and healthcare, but the road to adoption is marked by a number of challenges. Here, we look at how these challenges can be addressed, and who is responsible for addressing them.
What can governments do?
Collaboration between public and private stakeholders is vital to achieve widespread adoption of AI in healthcare, and a health-focused national AI strategy is needed to ensure this cooperation. Such a strategy must set out medium- and long-term visions and goals, specific initiatives, the resources required and performance indicators.
To support the implementation of this strategy, governments must build a health ecosystem ripe for AI adoption, which means supporting the funding, regulation and procurement of AI solutions. Funding for AI differs from traditional funding practices in healthcare, so governments must consolidate funding around strategic AI priorities. In terms of regulation, governments must create a level playing field, with clear standards for data, access, privacy and interoperability, as well as shared requirements for data exchange. In particular, privacy regulations must strike the right balance between privacy protections and the use of health-related data in AI tools by health researchers and providers. Finally, procurement practices at regional and local health providers must be modernised to advance technological adoption.
There are also wider governmental efforts needed to support an AI ecosystem, ranging from easing immigration policies to attract AI professionals, to information campaigns that explain to patients how AI works in order to build trust in the technology. Governments can also encourage collaboration between medical and engineering universities.
What can regulators do?
Regulation is clearly crucial to ensuring widespread adoption of AI that is trusted by healthcare professionals, the public and patients alike. A consistent regulatory approach is important, as is introducing clear definitions of responsibility for AI solutions into existing health regulations. Moreover, national regulations must clearly define whether an AI solution is a product or a tool that supports decision making, and must include regulation on the rights of patients to access AI tools.
The issue of liability and risk management is a particular challenge for AI adoption in healthcare. Patient safety is paramount, but healthcare providers also have to think about the professional accountability of their clinicians, as well as the protection of their organisations from reputational, legal or financial risk. Under current laws, accountability ultimately rests with the clinician.
What can healthcare providers do?
First, it is critical for healthcare providers to get the basic digitalisation of systems and data in place before embarking on AI deployments. This might be achieved through the development of industry standards for digitalisation, data quality and completeness, data access, governance, risk management, security and sharing, and system interoperability, with adherence to these standards encouraged through a combination of performance and financial incentives.
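To make interoperability concrete, here is a minimal sketch of how a provider system might retrieve patient records over HL7 FHIR, one widely adopted standard for health data exchange. The server URL and patient identifier are hypothetical, and the absence of authentication is a simplification; a production integration would add OAuth2-based access control and error handling.

```python
import requests

# Hypothetical FHIR R4 endpoint; real deployments would add authentication
# (e.g. SMART on FHIR / OAuth2) and TLS-verified connections.
FHIR_BASE = "https://fhir.example-hospital.org/r4"

def fetch_patient(patient_id: str) -> dict:
    """Retrieve a single Patient resource as JSON."""
    response = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

def fetch_observations(patient_id: str, loinc_code: str) -> list[dict]:
    """Search for Observations (e.g. lab results) with a given LOINC code."""
    response = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": loinc_code},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    bundle = response.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    patient = fetch_patient("example-patient-123")
    hba1c = fetch_observations("example-patient-123", "4548-4")  # HbA1c
    print(patient.get("name"), len(hba1c), "HbA1c observations")
```

The same search-based pattern extends to medications, conditions and other FHIR resource types, which is what makes a shared standard so valuable for AI systems that must draw data from many sources.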
In terms of workforce, healthcare providers must redesign workforce planning and clinical education to address the needs of both future healthcare professionals and AI-focused professionals. Ongoing learning about AI must be developed and provided: clinical training will need upgrading, and healthcare professionals will need the time and incentives to keep learning. In addition, healthcare providers should bring AI experts into their teams to ensure the delivery of quality AI in healthcare. For this to succeed, transparency and collaboration between innovators and practitioners are crucial.
Smaller health organisations can benefit from working in innovation clusters that bring together AI, digital health, biomedical research, translational research or other relevant fields. Larger organisations can develop into centres of excellence that pave the way for regional and public-private collaborations to scale AI.
As more healthcare is delivered using new digital technologies, public concern about how healthcare data are used has grown and will continue to grow. Healthcare organisations need robust and compliant data-sharing policies that support the improvements in care that AI offers while providing the right safeguards.
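As one illustration of such a safeguard, the sketch below pseudonymises records before sharing by replacing the patient identifier with a keyed hash and dropping direct identifiers. The key handling and field list are assumptions for the example; a real de-identification policy would also address quasi-identifiers, re-identification risk and governance of the key itself.

```python
import hashlib
import hmac

# Illustrative secret key; in practice this would live in a secure key
# store and be rotated under the organisation's data-governance policy.
PSEUDONYM_KEY = b"replace-with-securely-managed-key"

# Fields assumed to be direct identifiers for this sketch.
DIRECT_IDENTIFIERS = {"name", "nhs_number", "address", "phone"}

def pseudonymise(record: dict) -> dict:
    """Replace the patient identifier with a keyed hash and drop direct
    identifiers, keeping clinical fields available for analysis."""
    token = hmac.new(
        PSEUDONYM_KEY,
        record["patient_id"].encode(),
        hashlib.sha256,
    ).hexdigest()
    cleaned = {k: v for k, v in record.items()
               if k not in DIRECT_IDENTIFIERS and k != "patient_id"}
    cleaned["pseudonym"] = token
    return cleaned

record = {"patient_id": "P123", "name": "Jane Doe",
          "hba1c_mmol_mol": 48, "diagnosis": "type 2 diabetes"}
print(pseudonymise(record))
```

Using a keyed hash (rather than a plain hash) means pseudonyms cannot be recomputed by anyone who does not hold the key, while the same patient still maps to the same token across datasets.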
What can AI professionals do?
AI professionals have a responsibility to strengthen data quality, governance, security and interoperability, which can otherwise be major barriers to AI adoption. As ethical issues – including concerns around and beyond privacy and data – increasingly come to public attention, it is important that AI professionals engage in communication about these issues. Indeed, funding should go not only to the development of AI, but also to the dissemination of information and to community building.
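As a minimal illustration of what strengthening data quality can mean in practice, the sketch below profiles a hypothetical clinical extract for duplicates, missing values and implausible measurements before any model training. The column names and plausibility bounds are assumptions for the example.

```python
import pandas as pd

# Hypothetical extract of clinical records; the column names and
# plausibility bounds below are illustrative assumptions.
df = pd.DataFrame({
    "patient_id": ["P1", "P2", "P3", "P3"],
    "systolic_bp": [120, None, 400, 118],          # mmHg
    "recorded_at": ["2024-01-05", "2024-01-06", None, "2024-01-07"],
})

def quality_report(frame: pd.DataFrame) -> dict:
    """Summarise basic data-quality issues before any model training."""
    bp = frame["systolic_bp"].dropna()
    return {
        "duplicate_patient_rows": int(frame["patient_id"].duplicated().sum()),
        "missing_values": {k: int(v) for k, v in frame.isna().sum().items()},
        # Values outside a plausible range for systolic blood pressure.
        "implausible_bp": int((~bp.between(60, 250)).sum()),
    }

print(quality_report(df))
```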
In addition, AI professionals have the opportunity to open the black box by making AI research and tools explainable.
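One model-agnostic way to do this is permutation importance, sketched below on a synthetic stand-in for a clinical dataset: each feature is shuffled in turn, and the drop in held-out accuracy indicates how much the model relies on that input. This is only one of several explanation techniques (SHAP and LIME are common alternatives), and the data here are synthetic rather than clinical.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical dataset; in practice the features
# would be measurements such as blood pressure or HbA1c.
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's held-out score degrades, giving a model-agnostic view
# of which inputs drive the predictions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Feature-level explanations like these give clinicians a way to sanity-check a model against their own expertise, which is a prerequisite for trust.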
User-centric design is also essential to the implementation of AI. AI solutions should fit seamlessly into the workflow of decision makers, and studies have shown that people are more likely to follow AI-generated health recommendations when those recommendations are presented in an intuitive and engaging way.
What can educational institutions do?
AI in healthcare will require leaders well versed in both biomedical and data science. New skills such as these require a rethinking of education. For example, medical schools and residency training programmes could create opportunities in their curricula for exposure to AI, for instance through dual-degree programmes that combine expertise in AI and in medicine.