“Don’t bamboozle me, tell me how your AI actually works. How do you know it is safe?” These are great questions which I want to answer in plain English. I have also added some FAQs at the end.
Florence, our Digital Nurse, engages patients in automated clinical conversations and care coordination via simple SMS text. Florence sends patients messages, provides immediate feedback and answers questions; ‘she’ also sends the healthcare team notifications based on agreed triggers. Patients respond to Florence by sending readings (e.g. blood pressure or mood scores) or confirming actions (e.g. that they have taken their medications or booked an appointment); patients also ask Florence questions or make requests (e.g. ‘What is my target blood pressure?’ or ‘I need a new prescription’).
All messages from Florence to patients are set out word-for-word in a Protocol or in a Knowledge Base. There is therefore no risk of ‘AI making up answers’ or hallucination.
A Protocol contains messages from Florence, including timing and sequencing; it also includes anticipated responses from patients, including reading ranges and thresholds. Think of the Protocol as a predictable, rules-based pathway (some people call the Protocol a ‘deterministic algorithm’). A Protocol can last a few days (e.g. preparation for a procedure), a month (e.g. hospital discharge) or a year (e.g. long-term diabetes management). We have a large library of standard Protocols that have safely and successfully managed over 200,000 patients. When we implement with a client, we start with a standard Protocol for the target cohort and condition and then configure it together to fit their pathway or workflow.
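For readers who want to see this concretely, here is a deliberately simplified sketch of what a single Protocol step could look like. The field names, thresholds and structure below are invented for illustration and are not our production format:

```python
# Illustrative only: a toy, rules-based Protocol step for blood pressure monitoring.
# Field names and thresholds are invented for this example; this is not our production format.
bp_protocol_step = {
    "id": "bp_week_1_check",
    "schedule": "Monday 09:00",                  # when Florence sends the scheduled message
    "message": "Good morning! Please send me your blood pressure reading today, e.g. 128/82.",
    "expected_response": "blood_pressure_reading",
    "rules": [
        {"if": "systolic >= 180 or diastolic >= 110",
         "then": ["reply:urgent_advice", "notify:care_team"]},
        {"if": "systolic >= 140",
         "then": ["reply:above_target_advice"]},
        {"if": "systolic < 140",
         "then": ["reply:on_target_encouragement"]},
    ],
    "if_no_response_by": "Monday 18:00",
    "reminder": "Just checking in: please remember to send me your blood pressure reading.",
}
```

Every reply key (such as 'reply:urgent_advice') points to a message that has been set out word-for-word in advance; nothing is generated on the fly.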
A Knowledge Base contains frequently asked questions (FAQs) or requests, either ones we anticipate prior to launch or real questions and requests from patients that come up during the life of one or more Protocols. These ad hoc messages can be clinical (e.g. ‘Can I drink orange juice before the procedure?’ or ‘What should my target blood pressure be?’), but most relate to workflow (e.g. ‘My blood pressure cuff is not working’ or ‘How do I book an appointment?’).
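In the same spirit, a Knowledge Base can be pictured as a list of anticipated questions or requests, each paired with a pre-written, word-for-word answer. Again, the structure below is purely illustrative:

```python
# Illustrative only: Knowledge Base entries pair anticipated patient questions or requests
# with pre-written, word-for-word answers. The structure is invented for this example.
knowledge_base = [
    {
        "intent": "cuff_not_working",
        "examples": ["My blood pressure cuff is not working", "cuff wont turn on"],
        "answer": "Sorry to hear that. Please check the batteries first; if it still does "
                  "not work, call the clinic on the number in your welcome letter.",
    },
    {
        "intent": "book_appointment",
        "examples": ["How do I book an appointment?"],
        "answer": "You can book an appointment by calling the clinic or using the booking "
                  "link in your welcome letter.",
    },
]
```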
Flo AI is ‘only’ used to understand patient intent.
The beautiful but challenging aspect of having highly personalized, automated clinical conversations is that you never quite know ‘what’ the patient will message you. Florence needs to be human-like in her interpretation of spelling mistakes, slang, poor grammar, or out-of-the-blue questions.
Here are some sample challenges: unusual formatting (‘my blood p was 135.86’), free-text responses to questions from Florence (‘did not take my meds, coz cant afford!’), ad hoc questions (‘what’s my target A1c?’) and unprompted information (‘sorry, will miss my appointment, I am in hospital’).
When Florence comes across a patient response that was not anticipated in the Protocol, she packages up the ‘Context’ and sends it to Flo AI. The Context is a combination of the recent conversation history with that patient, data from the Protocol (condition, expected readings, etc.) and certain patient attributes (such as age or gender, but no individually identifiable health information).
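As a simplified illustration of the kind of Context that gets packaged up (field names invented; note that there are no names, contact details or other individually identifiable health information):

```python
# Illustrative only: the kind of Context packaged up and sent to Flo AI.
# Field names are invented; no individually identifiable health information is included.
context = {
    "recent_conversation": [
        {"from": "florence", "text": "Please send me your blood pressure reading today."},
        {"from": "patient",  "text": "my blood p was 135.86"},
    ],
    "protocol": {
        "condition": "hypertension",
        "expected_response": "blood_pressure_reading",
        "valid_ranges": {"systolic": [70, 250], "diastolic": [40, 150]},
    },
    "patient_attributes": {"age_band": "60-69", "gender": "female"},
}
```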
Flo AI queries a public Large Language Model (LLM) in iterative steps and returns its interpretation of the patient’s intent, which is then passed to either the Protocol or the Knowledge Base for next-step processing.
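Conceptually, and leaving the iterative LLM prompting to one side, the hand-off looks something like the sketch below. The function names and data are invented for illustration; the point is that Flo AI only classifies intent, and the reply always comes from the Protocol or the Knowledge Base:

```python
# Illustrative only: Flo AI interprets intent; the reply always comes from the
# Protocol or the Knowledge Base. Names and data are invented for this example.

def interpret_intent(context: dict) -> dict:
    """Stand-in for Flo AI: in reality, iterative calls to a public LLM return a structured intent."""
    return {"type": "reading", "systolic": 135, "diastolic": 86}

def route(intent: dict, protocol_replies: dict, knowledge_base: dict) -> str:
    if intent["type"] in protocol_replies:        # e.g. a reading typed in an unusual format
        return protocol_replies[intent["type"]]
    if intent["type"] in knowledge_base:          # e.g. a frequently asked question
        return knowledge_base[intent["type"]]
    return ("I am sorry, I don't have the answer to that, "
            "but I will check with my colleagues.")   # a DUM (see 'Trained on the Job' below)

reply = route(
    interpret_intent({"recent_conversation": ["my blood p was 135.86"]}),
    protocol_replies={"reading": "Thank you. 135/86 is within your target range."},
    knowledge_base={"book_appointment": "You can book an appointment by calling the clinic."},
)
print(reply)  # -> "Thank you. 135/86 is within your target range."
```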
Our success with Flo AI comes from our ability to fine-tune the iterative ‘conversations’ with the LLM (so-called ‘prompt engineering’) using large data sets of real patient responses. In total our training data exceeds 25 million conversations with over 200,000 patients.
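To give a flavour of what ‘prompt engineering’ means in practice, here is a toy prompt of the kind one might send to an LLM to classify a patient reply. Our real prompts are iterative and far more detailed; this is not our production prompt:

```python
# Illustrative only: a toy prompt asking an LLM to classify patient intent.
# Our real prompts are iterative and far more detailed; this is not our production prompt.
PROMPT_TEMPLATE = """You are classifying a patient's SMS reply for a remote monitoring service.
Condition: {condition}
Last message from the service: {last_outbound}
Patient reply: {patient_reply}
Answer with exactly one label: reading, medication_update, appointment, question, other."""

prompt = PROMPT_TEMPLATE.format(
    condition="hypertension",
    last_outbound="Please send me your blood pressure reading today.",
    patient_reply="my blood p was 135.86",
)
print(prompt)
```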
Diagram 1: Architecture of Automated Clinical Conversations
Flo AI is Trained on the Job
If the patient asks a question that is not addressed in the Protocol (which we check first) and is also not in the Knowledge Base, Florence will respond along the lines of ‘I am sorry, I don’t have the answer to that, but I will check with my colleagues’. Internally we call these responses DUMs (Didn’t Understand Messages).
We have a clinical team that monitors Flo AI’s performance daily and pays particular attention to the DUMs – those messages that Florence was not able to respond to. This data is invaluable in training Flo AI (improving ‘prompt engineering’) and core Florence (refining the Protocol or adding to the Knowledge Base).
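A minimal sketch of how a DUM might be captured for that daily review (names and structure invented for illustration):

```python
# Illustrative only: recording a DUM (Didn't Understand Message) for daily clinical review.
# Names and structure are invented for this example.
DUM_REPLY = ("I am sorry, I don't have the answer to that, "
             "but I will check with my colleagues.")

review_queue: list[dict] = []

def handle_dum(patient_ref: str, message: str) -> str:
    review_queue.append({
        "patient_ref": patient_ref,        # an internal reference, not identifiable data
        "message": message,
        "status": "awaiting_clinical_review",
    })
    return DUM_REPLY

print(handle_dum("patient-0042", "can i fly with my new pacemaker?"))
```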
For training purposes, we get added power from Flo AI’s categorization of messages and its ability to assess patient engagement with, and sentiment towards, specific parts of the Protocol or the Knowledge Base.
The data generated by Florence is also invaluable to our clients in better understanding their patients and providing better, more cost-effective care.
How do you know the AI is safe?
We have developed Florence with safety and privacy in mind from the start, following HIPAA and ISO 27001, the HHS Trustworthy AI Playbook (TAI), and several clinical risk management standards in the UK and Europe, including DTAC and MHRA/European Medicines Agency Medical Device Class 1.
We have eliminated the risk of hallucination by ensuring that messages to patients are coded in a Protocol or Knowledge Base. This also removes the risk of AI-bias in messages to patients.
We have rigorously tested Flo AI based on our unique training data set totalling over 25 million patient conversations. This testing includes clinical risk, patient experience, and the risk of bias.
We are fully transparent: all decisions made by Flo AI are visible to our clients.
We monitor Flo AI daily with a team of clinicians who flag and analyze potential clinical risk and provide recommendations for improvements in client-specific Protocols or Knowledge Bases.
Want to discuss how real, practical AI can help your patients and your team?
Let us set up a meeting to discuss!
Best, Ingolv
Ingolv Urnes, CEO Generated Health
See www.generatedhealth.com or contact:
US: EBender@generatedhealth.com
UK: Kylie.Dentith@generatedhealth.com
Australia: John.Griffiths@generatedhealth.com
Frequently Asked Questions
- Is Florence a chat-bot? No, Florence relies on clinical rules in Protocols and information in Knowledge Bases; ‘she’ is good at understanding patient intent, but for safety reasons Florence does not engage in conversations outside the Protocol or the Knowledge Base.
- Is there a risk of Flo AI hallucinating? No, all messages to patients are coded in either the Protocol or the Knowledge Base. Our proprietary architecture is designed to minimize clinical risk.
- Is any private, protected information from patients shared with any AI models? No, Florence does not share private, protected information with any AI model or service.
- Are you HIPAA compliant? Yes. We also comply with ISO 27001, the HHS Trustworthy AI Playbook (TAI), and several clinical risk management standards in the UK and Europe, including DTAC and MHRA/European Medicines Agency Medical Device Class 1.
- Can Florence ‘speak’ Spanish? Yes; she is fluent in English and Spanish and has been successfully tested in other languages, including Arabic.
- What if the patient is experiencing a medical emergency? Florence is not a substitute for emergency services; however, she can be configured to provide instructions to patients or, if desired, to send notifications to the healthcare team.
- Is Florence an app? No, Florence only uses familiar SMS text that works on any cellular phone (even old bricks and flip-phones). There is no need for an internet connection, and we avoid all the logistical challenges of downloading an app. Text messaging is the most accessible, equitable solution.
- Why so focused on highly personalized conversations? Treating each patient as an individual is not only the right thing to do; the evidence is clear that it also delivers better patient engagement and better clinical outcomes.
- Can Florence ask open-ended questions? Yes…to an extent; we work closely with our clients to make sure Florence asks questions linked to answers or actions that either Florence or the care team can act on or advise on. A good example is ‘Can you tell me why you are not taking your medication?’
- Can Florence collect SDOH data? Yes, she is excellent at this. Florence gives patients a chance to think about their answers, and experience shows that sensitive or embarrassing conversations are easier to have with ‘her’ than with a human.
- Can Florence close care gaps? Yes, Florence can collect the necessary information from patients at scale and support the workflow of closing care gaps.
- How does your ‘model’ learn? When Florence encounters a question or a request that is not part of the Protocol or covered in the Knowledge Base, she can notify the healthcare team and ask them if they want to respond (a nurse can send a direct message from Florence) and if they want the answer added to the Knowledge Base. Our team works closely with the healthcare team to support this fine-tuning.