It is not a matter of if but when: AI is already making a meaningful contribution to healthcare, and that contribution will only grow. We need to progress…but with caution. In the debate, those critical of the use of AI often overlook some crucial factors.
- We don’t have enough people (while healthcare demands are increasing): We are in the midst of a healthcare crisis triggered by labour shortages. At almost every level and in almost every state, healthcare systems simply do not have enough people. We need to find a way to do more with less staff – this is not about ‘automating away jobs’ but about creating more capacity to deliver more, better, and less costly healthcare.
- Most people are not Superman: Humans are not great decision-makers, especially when stressed or overworked. Both ‘noise’ and ‘bias’ are involved, whether the decision-maker is a professor of cardiology or a lone administrator. Some studies suggest that the better paid and more experienced a professional is, the less data-driven their decision-making becomes. It is worth keeping the two terms distinct: ‘noise’ is unwanted variability or inconsistency in judgments or decisions, while ‘bias’ is systematic, consistent error or deviation from true values or objective standards (the first sketch after this list puts illustrative numbers on both). I recommend the book Noise: A Flaw in Human Judgment by Kahneman, Sibony, and Sunstein.
- Most people are also not great with routine: To enhance both care and job satisfaction, we should assign people to the roles where the human touch excels, leave mundane tasks to machines, and gradually shift complex routine work to them as well.
- Internet chatbots…NO! People are right to worry about AI trained on open internet data, which contains a great deal of noise and bias. The opportunity, however, is to combine tightly controlled, evidence-based data with the power of Large Language Models (LLMs) such as OpenAI’s ChatGPT. At Generated Health we have over 10 years of systematically collected clinical evidence based on data from over 200,000 patient successes – this puts us in a unique position and allows us to use AI/LLMs to improve our clinical protocols/algorithms at a pathway level (e.g. optimising hypertension management for patients between 45 and 65 years of age).
- We must experiment, but safely and ethically: To understand how AI can work (and how it can fail), we must encourage deliberate, controlled experimentation, and we need tools and frameworks to guide us. It may be helpful to think about a 2 x 2 matrix for any proposed experiment/implementation: i) Will patient-identifiable data be used to train the algorithm? [the answer here should ‘always’ be No]; and ii) Will the output from the AI be used in real time? [the answer here will depend on the use case and risk assessment]. The second sketch after this list turns the matrix into a simple screening function.
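To make ‘noise’ and ‘bias’ concrete, here is a toy Python sketch; the blood-pressure figures are invented purely for illustration, not drawn from any clinical data:

```python
# Toy illustration of Kahneman, Sibony, and Sunstein's two error types.
# Bias  = systematic deviation of judgments from the true value.
# Noise = unwanted variability among the judgments themselves.
from statistics import mean, stdev

true_value = 120                        # hypothetical "correct" answer
judgments = [132, 128, 135, 126, 131]   # five hypothetical clinician estimates

bias = mean(judgments) - true_value     # everyone runs systematically high
noise = stdev(judgments)                # spread across equally qualified judges

print(f"bias:  {bias:+.1f}  (systematic deviation from the true value)")
print(f"noise: {noise:.1f}  (inconsistency between the judges)")
```

Reducing bias means correcting a shared tilt; reducing noise means getting equally qualified judges to agree with each other – two different interventions.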
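And to make the 2 x 2 matrix concrete, here is a minimal Python sketch of it as a screening function; the function name and risk labels are my own illustrative assumptions, not a formal governance framework:

```python
# A minimal sketch of the 2x2 screening matrix described above.
# The two questions follow the text; the labels are illustrative only.

def screen_ai_experiment(trains_on_patient_identifiable_data: bool,
                         output_used_in_real_time: bool) -> str:
    """Place a proposed AI experiment/implementation in the 2x2 matrix."""
    if trains_on_patient_identifiable_data:
        # Axis i: the answer here should 'always' be No.
        return "STOP: do not train on patient-identifiable data"
    if output_used_in_real_time:
        # Axis ii: depends on the use case and risk assessment.
        return "PROCEED WITH CAUTION: real-time output requires a full risk assessment"
    return "LOWER RISK: offline output on non-identifiable data (still review ethically)"

# Example: an offline protocol-improvement experiment on de-identified data.
print(screen_ai_experiment(trains_on_patient_identifiable_data=False,
                           output_used_in_real_time=False))
```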
About Florence
Florence drives and enables patient self-management via ‘simple’ text messages powered by evidence-based algorithms. Florence is proven to deliver better clinical outcomes and patient experience while significantly reducing clinical time and utilization – here are some examples:
- Diabetes: 64% less time spent with diabetes patients while improving A1c management
- Hypertension: 50% fewer appointments for hypertensive patients while improving BP management and delivering sustained improvements
- COPD: 81% of patients need fewer appointments; 70% reduction in emergency admissions