Based on research by Leo Petersen-Khmelnitski
While evidence of the potential benefits of AI applications in health care mounts, a number of challenges to widespread adoption and implementation of AI tools remain.
First, digitalisation of health care remains low. Health care is currently among the least digitised sectors, lagging behind other industries in digital business processes, digital spend per worker, and the digitalisation of work itself.
Second, there are issues around funding. The funding models currently adopted in national and regional health care systems cover traditional hardware and drugs but are less clear when it comes to AI solutions. AI solutions are also costly: developing robust, cost-efficient algorithms often requires significant resources. Not every hospital can afford to attract new AI talent, or has access to enough data to make algorithms meaningful.
Third, there are challenges around the quality of AI services. Many AI services in the health care sector suffer from poorly chosen use cases, weak design and usability, underperforming algorithms, and incomplete or unreliable underlying data. Building the clinical evidence of quality and effectiveness is a further problem that AI faces in health care.
AI in health care usually delivers black-box solutions. In the medical arena, however, answers to “why” questions are of critical importance: health care professionals want to know the reasoning behind a recommendation or diagnosis, since under most national legislation the responsibility rests with them.
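To make the contrast concrete, the minimal sketch below (illustrative only, with made-up data and hypothetical feature names, using scikit-learn) shows how an inherently interpretable model can print the decision rules behind each prediction, giving a clinician an answer to “why” that a black-box model cannot offer on its own.

```python
# Illustrative sketch: an interpretable model (a shallow decision tree)
# exposes the reasoning path behind each prediction.
# Data, labels and feature names below are entirely hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical patient records: [age, systolic_bp, glucose]
X = np.array([[45, 130, 90],
              [62, 160, 140],
              [38, 118, 85],
              [70, 150, 160]])
y = np.array([0, 1, 0, 1])  # 0 = low risk, 1 = high risk (made-up labels)

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# export_text prints the learned decision rules in plain language,
# so the basis for a given recommendation can be inspected and challenged.
print(export_text(model, feature_names=["age", "systolic_bp", "glucose"]))
```

Inherently interpretable models are only one approach; the broader point is that whichever technique is used, the reasoning must be surfaced in a form clinicians can scrutinise.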
Most AI companies that develop solutions for the health care industry are start-ups, which means that AI implementation is often affected by adoption and other start-up-related problems. Large health care providers tend to view these issues as obstacles and, as case studies have shown, may lose trust in AI. Moreover, start-ups and health care providers often have conflicting interests: while start-ups want to scale solutions fast, health care practitioners must have proof that any new idea will “do no harm” before it goes anywhere near a patient.
Indeed, for health care professionals, trust is crucial. They need to trust AI algorithms and often want to see clinical validation of algorithm-generated results before using a solution. As a result, health care professionals can be sceptical about adopting AI tools until a large body of evidence verifies their outcomes.
Compliance is also a challenge to adoption. Handling patient or health data means that AI companies must comply with several different laws and regulations, such as HIPAA, HITRUST and ISO standards. Currently, few AI solutions have cleared the necessary regulatory hurdles. Moreover, many countries have not yet introduced AI-specific regulations, although there is significant progress in this area, demonstrated by the EU’s proposal for AI legislation. There are also potential challenges around privacy regulations: under the privacy rules adopted in many countries and in the EU, much of the existing data needed to fully tap the potential of AI cannot be shared with medical researchers or public agencies.
The lack of multidisciplinary cooperation in the development of AI solutions, the lack of early involvement of health care staff, and limited iteration by joint AI and health care teams are major barriers to addressing quality issues early on and adopting solutions at scale. Only a few AI start-up executives feel that input from health care professionals is critical in the early design phase; health care professionals, for their part, often view the role of AI experts as limited to aggregating and storing data, with analysis of the data left to doctors alone.
Reluctance among the public and patients may also be a challenge to adoption. Various surveys have found that around a quarter of consumers would not use AI-powered health services. The reasons given range from not understanding enough about how AI works to concerns that the technology may not understand them.
Scalability is also likely to be an issue for AI adoption in the health care sector. Pilot projects are tested with limited scope and may not be readily adaptable to large institutions, and advanced AI solutions may prove too costly for smaller regional and rural health care providers.
Contact our experts to learn how we can help your organisation