From GDPR and the EU AI Act to MDR regulatory clarity and on-premise hosting — Medicus AI meets the highest standards for data protection, regulatory compliance, and AI safety.
Our platform is designed to meet the regulatory requirements of healthcare providers across Europe and the Middle East.
Full compliance with the General Data Protection Regulation. Data processed and stored within the EU with appropriate safeguards for any international transfers.
Medicus AI products are non-medical informational software. Under MDR Article 2(1), they do not qualify as medical devices and are outside the scope of Annex VIII classification.
Our AI components qualify as limited-risk AI systems subject to the transparency obligations of Article 50 of the EU AI Act, fulfilled through clear user disclosures.
Our platform supports laboratory partners operating under ISO 15189 accreditation, ensuring outputs meet clinical quality and competence standards.
Medicus AI products are non-medical informational software outside the scope of the EU Medical Device Regulation — fully aligned with MDR Article 2(1) and MDCG guidance.
Medicus AI products are informational software components designed to improve user comprehension and accessibility of health data. They summarise and explain insights without performing independent medical reasoning or influencing clinical decisions.
Under MDR Article 2(1), software qualifies as a medical device only if it has a medical purpose. Our products do not fulfil any medical purpose as defined by the MDR and therefore do not qualify as medical devices.
Per MDCG 2019-11, software that summarises medical information without medical interpretation is not a medical device. MDCG 2021-24 further clarifies that AI-based software that does not alter medical meaning or generate new medical conclusions is not Medical Device Software.
Our products are intended to generate readable summaries, explain biomarkers and trends, support health literacy, and guide users — never to diagnose, treat, or replace healthcare professional consultation.
Our AI components are classified as limited-risk AI systems under Article 50 of the EU AI Act, with transparency obligations met in full through clear user disclosures.
The LLM-based summary and chatbot qualify as AI systems under the EU AI Act. Because they explain and summarise existing medical outputs rather than make or influence clinical decisions, they fall outside the prohibited and high-risk categories.
As limited-risk AI systems, they are subject to transparency obligations — not the extensive requirements of high-risk systems. Users are clearly informed when interacting with AI-generated content.
Clear user disclosures are provided throughout the product, ensuring users understand that summaries and chat responses are AI-generated explanations of existing medical outputs.
Our AI components do not engage in any prohibited practices defined by the Act — no social scoring, no subliminal manipulation, and no exploitation of vulnerabilities.
Multiple layers of protection ensure patient data is secure at every stage — from ingestion and processing to storage and delivery.
All data encrypted with AES-256 at rest and TLS 1.3 in transit. No unencrypted patient data is ever stored or transmitted.
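To illustrate the in-transit requirement, a client can refuse any handshake below TLS 1.3 at connection setup. This is a minimal sketch using Python's standard `ssl` module, not our production configuration:

```python
import ssl

def strict_tls13_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses anything below TLS 1.3."""
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # Pin the minimum protocol version: peers offering TLS 1.2 or lower
    # are rejected during the handshake, before any data is exchanged.
    context.minimum_version = ssl.TLSVersion.TLSv1_3
    # The default context already enforces certificate and hostname checks.
    return context
```

Any connection opened with this context fails fast against a server that cannot negotiate TLS 1.3, so unencrypted or weakly encrypted transport is ruled out by construction.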
All production systems and patient data hosted within EU data centres, ensuring compliance with data residency requirements.
Role-based access control (RBAC), multi-factor authentication, and comprehensive audit logging for every data access event.
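The principle behind RBAC with audit logging can be sketched in a few lines: every access attempt is checked against a role's permission set, and the attempt is logged whether or not it is granted. The role names and permission strings below are hypothetical, for illustration only:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical role-to-permission map; real deployments manage this centrally.
ROLE_PERMISSIONS = {
    "clinician": {"report:read", "report:annotate"},
    "lab_admin": {"report:read", "report:upload"},
    "support": set(),  # no access to patient data
}

@dataclass
class AuditLog:
    events: list = field(default_factory=list)

    def record(self, user: str, role: str, permission: str, granted: bool) -> None:
        # Every access attempt is recorded, granted or denied.
        self.events.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "role": role,
            "permission": permission,
            "granted": granted,
        })

def check_access(user: str, role: str, permission: str, log: AuditLog) -> bool:
    granted = permission in ROLE_PERMISSIONS.get(role, set())
    log.record(user, role, permission, granted)
    return granted
```

Logging denials as well as grants is deliberate: failed attempts are often the most valuable audit signal.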
Personal identifiers are separated from health data at ingestion. Analytics and model training use only anonymised, aggregated datasets.
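Conceptually, separation at ingestion means an incoming record is split into two stores linked only by a random pseudonym, so analytics pipelines never see direct identifiers. A minimal sketch, with illustrative field names that are not our actual schema:

```python
import secrets

def pseudonymise(record: dict) -> tuple[dict, dict]:
    """Split an incoming record into an identity entry and a
    pseudonymous health-data entry (illustrative fields only)."""
    pseudonym = secrets.token_hex(16)  # random link key, not derived from identity
    identity = {
        "pseudonym": pseudonym,
        "name": record["name"],
        "date_of_birth": record["date_of_birth"],
    }
    health_data = {
        "pseudonym": pseudonym,
        "biomarkers": record["biomarkers"],
    }
    return identity, health_data
```

Downstream analytics and model training would consume only the health-data side, aggregated across pseudonyms.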
Regular third-party penetration testing and vulnerability assessments. Findings are remediated on a prioritised timeline.
Documented incident response plan with defined SLAs. GDPR-compliant breach notification within 72 hours of detection.
Our LLM deployment follows strict governance principles — curated sources, hallucination prevention, and continuous medical oversight.
Our LLM layer is restricted to validated, curated medical sources — peer-reviewed literature, clinical guidelines, and our proprietary medical ontology. No open-ended internet retrieval.
Multi-layered validation pipeline ensures AI outputs are grounded in source data. Responses are cross-referenced against the deterministic Smart Reports engine.
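One layer of such cross-referencing can be pictured as a grounding check: every numeric value quoted in an AI summary must also appear in the deterministic engine's output. This is a simplified sketch of the idea, not the actual pipeline:

```python
import re

def numbers_grounded(summary: str, source_values: set[float]) -> bool:
    """Return True only if every numeric value quoted in the AI summary
    also appears in the deterministic engine's output."""
    quoted = {float(m) for m in re.findall(r"\d+(?:\.\d+)?", summary)}
    # Subset check: no number may appear in the summary unless it was
    # produced by the deterministic source.
    return quoted <= source_values
```

A summary that fails the check would be regenerated or escalated rather than shown to the user.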
Medical content is continuously reviewed by our in-house medical and science team. AI outputs supplement, never replace, clinical review processes.
Input and output guardrails prevent misuse, off-topic responses, and generation of harmful or misleading health content.
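The shape of such guardrails is a pre-check on the user's message and a post-check on the model's response. The blocked phrases and disclosure text below are hypothetical placeholders; production guardrails combine classifiers, policies, and human-curated rules:

```python
import re

# Hypothetical patterns, for illustration only.
BLOCKED_INPUT = re.compile(r"\b(dosage for|prescribe|diagnose me)\b", re.IGNORECASE)
REQUIRED_DISCLOSURE = "This is an AI-generated explanation, not medical advice."

def guard_input(user_message: str) -> bool:
    """Reject prompts asking the assistant to diagnose or prescribe."""
    return not BLOCKED_INPUT.search(user_message)

def guard_output(response: str) -> str:
    """Ensure every response carries the AI disclosure."""
    if REQUIRED_DISCLOSURE not in response:
        response = f"{response}\n\n{REQUIRED_DISCLOSURE}"
    return response
```

Input guarding keeps the assistant within its explanatory scope; output guarding enforces the transparency disclosure on every reply.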
Choose the deployment model that best fits your organisation's data sovereignty and operational requirements.
Fully managed deployment on EU-hosted cloud infrastructure. Fastest time to value with automatic updates and scaling.
Full deployment within your own infrastructure. Patient data never leaves your environment. Ideal for maximum data sovereignty.
Core processing on your infrastructure with select cloud services for updates and analytics. Balance control with convenience.
Get a detailed walkthrough of our security architecture, compliance certifications, and deployment options.