- Executive Summary
Florence uses AI in a safe, controlled way. Florence does not use AI to generate messages to patients, which removes the risk of hallucination and addresses other typical AI safety concerns.
Our architecture combines:
- deterministic, validated Protocols that decide the next best action (the message sent to a patient, or a notification sent to the care team); with
- an LLM-based Patient Intent Interpreter that determines the intent of each patient message and passes it to the Protocols.
The configuration of all of Florence’s actions is 100% controlled by our customers’ clinical teams. Florence is not a typical conversational AI chatbot.
The LLM is used only to interpret incoming messages. No PHI is shared with the LLM. This protects patient privacy.
You can view a brief video explaining our safe-AI architecture HERE.
- Typical AI Safety Concerns
These are the main safety concerns raised by Chief Medical Officers and compliance teams:
- Regulatory compliance, including HIPAA and state privacy laws, and protection of PHI and PII.
- Hallucination: LLMs can generate incorrect or fabricated responses.
- Lack of control and transparency: Clinicians and risk officers are wary of black-box systems. They want to understand how the system responds when patients report high blood pressure, medication side effects, or an inability to afford their medication.
- Unreliable longitudinal execution and drift: LLMs are not designed to manage structured, multi-step care pathways over months. They struggle with state tracking and protocol adherence, which leads to drift.
- Proven use at scale: Has the system managed large patient cohorts across conditions without safety incidents?
- How Florence Automates Care Management
Florence engages patients in structured, protocol-driven clinical conversations via simple SMS over weeks and months.
All messages are determined by validated clinical Protocols. These Protocols define the next best action, including the message sent to the patient and any notification sent to the care team.
Patients submit structured data, such as blood pressure readings or mood scores. They confirm key actions, including medication adherence or appointment booking.
Florence provides immediate feedback based on approved clinical logic. She does not generate free-text advice outside the Protocols.
Patients can ask questions, such as “What is my target blood pressure?” or “Can you help me get a new prescription?” The LLM interprets the intent of the message and passes it to the Protocols. The response is then determined by validated rules or escalated to the care team based on predefined triggers.
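The routing described above can be sketched as follows. This is a minimal illustration only: the intent labels, scripted responses, and escalation triggers here are hypothetical, not Florence's actual taxonomy or Protocol content.

```python
# Hypothetical intent labels and scripted responses; Florence's real
# taxonomy and message text are clinician-authored and proprietary.
SCRIPTED_RESPONSES = {
    "clinical_question": "Your target blood pressure is below 130/80.",
    "admin_request": "I have asked your care team about a new prescription.",
}

ESCALATION_INTENTS = {"symptom_report", "cost_barrier"}  # notify the care team

def route(intent: str) -> tuple[str, bool]:
    """Return (patient_message, notify_care_team).

    The LLM only supplies the intent label; every outbound message
    is pre-scripted and clinician-approved.
    """
    if intent in ESCALATION_INTENTS:
        return ("Thanks for letting me know. I am alerting your care team.", True)
    if intent in SCRIPTED_RESPONSES:
        return (SCRIPTED_RESPONSES[intent], False)
    # Fallback: Didn't Understand Message (DUM), flagged for clinical review
    return ("I don't have the answer, but I will check with my colleagues.", False)
```

Note that every branch returns pre-written text: even an unrecognized intent produces a fixed fallback, never generated content.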
- Safe Architecture: How it works behind the scenes
Florence consists of two core components: Validated Protocols and a Patient Intent Interpreter.
4.1 Validated Protocols
The Protocols are 100% controlled by, and fully auditable to, our customers’ clinical teams.
To ensure best practice and minimize implementation work, the starting point is always tried and tested Validated Protocols aligned to a goal, target condition, or cohort. See Section 5 for further details.
Protocols consist of two logical components:
- Workflow & Rules which drive the proactive management of patients over weeks and months with scheduled messages (including frequency and timing), requests for readings (e.g., blood sugar level), and other patient-reported outcomes/PROMs; it also includes rules for notifying the care team.
- Knowledge Base, which contains answers to ad hoc (out of the blue) clinical and admin questions and problems raised by patients; it also includes rules for notifying the care team based on these ad hoc events.
Any message sent to a patient or other next best action is delivered by the Protocol; Florence does not use AI to generate messages to patients.
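The two logical components can be sketched as a simple data structure. All field names, messages, and thresholds below are hypothetical illustrations, not actual Protocol content.

```python
# Illustrative Protocol structure; real Protocols are clinician-authored.
protocol = {
    "workflow": [  # scheduled outreach over weeks and months
        {"day": 0, "message": "Welcome! Please reply with today's blood pressure."},
        {"day": 7, "message": "Weekly check-in: please send this week's readings."},
    ],
    "rules": [  # trigger-based next best actions (first match wins)
        {"when": lambda r: r["systolic"] >= 180, "action": "notify_care_team"},
        {"when": lambda r: r["systolic"] < 180, "action": "send_scripted_feedback"},
    ],
    "knowledge_base": {  # pre-approved answers to ad hoc questions
        "target_bp": "Your target blood pressure is below 130/80.",
    },
}

def next_best_action(reading: dict) -> str:
    """Evaluate the deterministic rules; every outcome is pre-defined."""
    for rule in protocol["rules"]:
        if rule["when"](reading):
            return rule["action"]
    return "escalate_for_review"  # no silent failure path
```

Because the rules are deterministic, the same reading always produces the same action, which is what makes the system auditable.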
4.2 Patient Intent Interpreter
The Patient Intent Interpreter uses an LLM only to interpret the intent of incoming patient messages and passes that intent to the Protocol to determine the next best action, enabling the Safe AI design. The Patient Intent Interpreter is effective at ascertaining intent, which makes Florence feel human despite her use of pre-scripted messages. Florence understands free-text SMS responses, including typos, slang, informal language, and unsolicited questions.
Messages may include unusual formatting in response to a request from Florence (“my blood pressure was 135.86”), conversational responses (“did not take my meds, can’t afford”), ad hoc questions (“what’s my target A1c?”), or updates (“I am in the hospital”).
The Patient Intent Interpreter’s effectiveness in classifying patient intent comes from our proprietary process of iteratively querying an LLM with the appropriate context at each step. This context includes recent conversation history, relevant protocol data (e.g., condition and expected ranges), and selected non-identifiable patient attributes. No PHI is shared.
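The no-PHI context assembly described above might look like the following sketch, where the allow-listed attribute names are hypothetical examples:

```python
# Allow-list of non-identifiable attributes; the names here are hypothetical.
SAFE_ATTRS = {"condition", "expected_systolic_range", "preferred_language"}

def build_llm_context(recent_messages, protocol_meta, patient_attrs):
    """Assemble the intent-classification context for the LLM.

    Identifiers (name, date of birth, phone number, etc.) are never in
    the allow-list, so no PHI is included in the LLM prompt.
    """
    return {
        "recent_messages": recent_messages[-5:],  # short conversation window
        "protocol": protocol_meta,                # e.g. condition, expected ranges
        "attributes": {k: v for k, v in patient_attrs.items() if k in SAFE_ATTRS},
    }
```

An allow-list (rather than a block-list) is the conservative choice: any new attribute is excluded by default until explicitly reviewed.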
Our performance advantage stems from extensive training and refinement. The Patient Intent Interpreter has been optimized using more than 35 million real-world patient conversations across over 350,000 patients.

- Validated Protocols Ensure Quick Implementation
The use of Validated Protocols enables rapid implementation with minimal operational burden on your team.
Our Protocols incorporate structured learning from more than 35 million real-world patient conversations and input from over 1,000 clinicians. Each Protocol is aligned to defined clinical goals, target conditions, or patient cohorts.
The Protocol Library covers a broad range of use cases, including high-acuity chronic disease management, post-discharge support, Annual Wellness Visits, and preventive screening programs.
- How Florence Improves Over Time
If a patient asks a question that is not covered by the Protocol, Florence responds: “I don’t have the answer, but I will check with my colleagues.” We classify these as Didn’t Understand Messages (DUMs).
Our clinical team reviews system performance daily, with a specific focus on DUMs. These cases identify gaps and inform updates to the Workflow and Rules or the Knowledge Base.
Florence also classifies messages and evaluates patient engagement and sentiment. This analysis supports ongoing refinement and highlights which parts of the Protocol perform well and which require adjustment.
The data generated by Florence provides structured insight into patient behavior, adherence, and risk signals. This enables clients to improve care pathways and overall service delivery.
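The daily DUM review described above could prioritize protocol gaps with a simple tally like this sketch (message fields are hypothetical):

```python
from collections import Counter

def dum_report(messages):
    """Tally Didn't Understand Messages (DUMs) by topic so the clinical
    team can prioritize Knowledge Base and Workflow updates."""
    dums = [m for m in messages if m.get("intent") == "unknown"]
    return Counter(m.get("topic", "uncategorized") for m in dums).most_common()
```

The most frequent unhandled topics surface first, so each review cycle closes the largest gaps before the rare ones.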
- Robust Clinical Governance
Clinical governance is led by Dr Martin Hornos, with day-to-day oversight from Dr Josh Lowentritt, Dr Jonathan Serjeant, and Dr Amanda Escribano. The clinical team works in partnership with a dedicated group of data scientists, with a clear separation between clinical decision-making and technical implementation.
All Protocols are clinically authored, reviewed, and formally approved prior to deployment. Changes to Protocols follow a structured change control process, including documented clinical review, version control, and audit trail.
System performance is monitored daily. This includes review of Didn’t Understand Messages (DUMs), escalation events, and trigger-based notifications. Identified risks are logged, reviewed, and addressed through defined corrective actions.
A formal review cycle is in place. Protocols undergo scheduled clinical review at defined intervals, with additional review triggered by safety signals, performance trends, or regulatory updates.
Incidents and near misses are documented and reviewed under a defined governance framework. Learning points are incorporated into Protocol updates and operational processes.
Customer implementation is supported by experienced nurses who oversee safe deployment, training, and adherence to agreed clinical pathways.
- Typical AI Safety Risks and How Florence Mitigates Them
The table below sets out typical safety concerns and how we have designed Florence to be safe.
| Typical Safety Concern | How Florence Mitigates the Concern |
| --- | --- |
| Regulatory compliance (HIPAA, state laws) and data privacy (PHI/PII) | HIPAA compliant • No sharing of PHI/PII with external AI models • Role-based access and audit trails • ISO 27001 certified |
| Hallucination risk | No AI message generation to patients • All outbound messages pre-scripted in validated Protocols • AI used only for patient intent classification, not content creation • Didn’t Understand Message (DUM) fallback responses when intent confidence is low |
| Lack of control and visibility | Deterministic, rules-based Protocols • Defined escalation thresholds and workflows • Full transparency and control over messages and next-best-action logic |
| Unreliable longitudinal execution and drift | Structured Protocols designed for weeks-, months-, or year-long pathways • Scheduled and trigger-based sequencing |
| Proven use at scale | Over 35 million real patient interactions • Robust clinical governance, including review cycles and audits |
- FAQs
Is Florence a chatbot?
No. Florence does not use AI to generate patient messages. All messages to patients and next-best actions are governed by deterministic protocols.
Could Florence provide fabricated responses/hallucinate?
No. Because Florence does not use AI to generate patient messages, there is no mechanism for hallucinated content to reach patients; every outbound message is pre-scripted in a validated Protocol.
Is patient information shared with AI models?
No. Florence does not share private or protected patient information with external AI services.
Is Florence compliant with regulatory standards?
Yes. Florence is HIPAA compliant and is ISO 27001 certified; we follow the HHS Trustworthy AI Playbook, and relevant UK and European clinical risk management standards, including DTAC and MHRA / European Medicines Agency Medical Device Class I requirements.
Can Florence operate in Spanish?
Yes. Florence operates in English and Spanish. She has also been tested in additional languages, including Arabic.
How are emergencies handled?
Florence is not a substitute for emergency services. Protocols can be configured to instruct patients to contact emergency services. Optional configurations allow alerts to be sent to the care team.
Is Florence an app?
No. Florence uses standard SMS. It works on any mobile phone, including non-smartphones.
Can Florence ask open-ended questions?
Yes, within defined boundaries. Questions should be linked to an action, workflow, or pathway that Florence or the care team can execute.
Can Florence collect social determinants of health (SDOH) data?
Yes. Florence can collect structured SDOH data at scale. Text-based engagement often improves disclosure of sensitive information.
Can Florence close care gaps?
Yes. Florence can collect required data, prompt required actions, and support workflows that close care gaps.
How does Florence improve over time?
When Florence encounters a message not covered by an existing protocol, the case is flagged for review. Joint teams refine or expand the protocols and incorporate learnings so Florence is prepared for similar cases in the future.