Lorenz Tabertshofer
March 3, 2026
Clinical care runs on attention. Physicians are meant to listen, diagnose, and treat. But in practice, doctors spend roughly one-third of their working hours on documentation and administrative tasks. This is not a minor inconvenience – it is a structural efficiency problem that compromises care quality, worsens physician shortages, and makes medicine a less attractive profession.
Artificial intelligence – particularly large language models and speech recognition – offers a real solution. Automated documentation measurably reduces administrative burden, gives physicians time back for patients, and is viewed positively by patients themselves. At the same time, AI-assisted documentation introduces new data security requirements, especially when cloud-based solutions are involved.
A 2025 survey of 400 German hospitals illustrates a problem that mirrors findings across the US:
Physicians spend nearly three hours per day on documentation and administrative tasks – roughly one-third of their working time.
The scale of the problem is significant: this is not an isolated issue at individual institutions, but a systemic drain on highly skilled professional time.
The consequences are wide-ranging. Administrative overload contributes to dissatisfaction and burnout, compounding an already urgent physician shortage. Less time per patient means reduced attention and higher risk of error. Wait times grow because available capacity is tied up in paperwork. The economic cost is substantial: every hour spent on documentation is an hour lost to patient care.
Germany's Federal Ministry of Health confirms the growing burden of bureaucracy and reports that 90 percent of nursing staff feel severely overwhelmed by administrative requirements.
The central question, therefore, is not whether documentation is necessary – it is how it can be organized more efficiently without compromising safety or accuracy.
Modern language models can automatically transcribe physician-patient consultations and generate structured clinical documentation. The model listens to or reads the conversation, extracts relevant clinical information, and formats it automatically into the required documentation structure, such as clinical history, clinical findings, assessment, and diagnosis. AI systems can also structure free-text notes, summarize documents, and support administrative workflows across the care continuum. The software handles formatting and can link outputs to existing patient data. For physicians, a substantial portion of manual data entry and structuring work is eliminated.
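To make the pipeline concrete, here is a minimal sketch of the prompt-and-parse step around the language model. The section names and prompt wording are illustrative assumptions, not the format of any particular product, and the model call itself is omitted:

```python
import re

# Target structure described above: history, findings, assessment, diagnosis.
SECTIONS = ["History", "Findings", "Assessment", "Diagnosis"]

def build_note_prompt(transcript: str) -> str:
    """Build an instruction prompt asking a language model to turn a raw
    consultation transcript into a structured clinical note."""
    headers = "\n".join(f"## {s}:" for s in SECTIONS)
    return (
        "Extract the clinically relevant information from the consultation "
        "transcript below and fill in exactly these sections:\n"
        f"{headers}\n\nTranscript:\n{transcript}"
    )

def parse_note(model_output: str) -> dict:
    """Split the model's answer back into a {section: text} mapping."""
    note = {}
    pattern = r"## (\w+):\s*(.*?)(?=\n## |\Z)"
    for header, body in re.findall(pattern, model_output, flags=re.S):
        if header in SECTIONS:
            note[header] = body.strip()
    return note
```

A real system would add linkage to the patient record and validation of the extracted sections; the sketch only shows the structuring step that removes manual data entry.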
That this approach works in practice is demonstrated by a real-world pilot at Charité Berlin, one of Europe's largest university hospitals. Approximately 70 physicians tested an AI system for real-time clinical documentation across multiple specialties, with several thousand patient encounters analyzed.
Prof. Dr. Alexander Meyer from Charité's Institute for Artificial Intelligence in Medicine summarized the results:
"Our evaluation shows that the documentation burden was significantly reduced with this technology. Our physicians noticeably had more time for focused conversations with their patients. Patient feedback was consistently positive." (translated from German)
The system has been available in German hospitals since October 2024.
The number of vendors offering AI solutions for clinical documentation is already growing. But the critical question remains: what happens to the sensitive patient data being processed?
AI language models are computationally intensive – they require expensive GPUs and significant processing power. For hospitals and clinics, the path to public cloud infrastructure is often unavoidable. But this is precisely where new problems emerge.
Standard cloud AI APIs – such as ChatGPT, Claude, or comparable services – process prompts in plaintext on the provider's infrastructure: the data is decrypted on the provider's servers, and the operator retains technical access to it.
GDPR and physician confidentiality rules require the highest level of care when handling patient data. Sending patient consultations to external AI services is legally complex and widely viewed critically by privacy experts. In the worst case, hospitals prohibit AI tools entirely due to security concerns — leaving physicians without precisely the relief they urgently need.
Some organizations turn to on-premises deployments as an alternative. This seems safer, as the data never leaves the building. But in practice, a different set of problems emerges: the expensive GPU hardware must be procured, operated, and maintained in-house, and capacity cannot scale elastically with demand.
Security and scalability do not have to be mutually exclusive. A technology called Confidential AI makes it possible to run AI workloads in the cloud while technically protecting patient data — even during active processing.
Confidential AI is built on Confidential Computing, a hardware and software technology that encrypts data not only in transit and at rest, but also while it is being processed in memory. This is made possible by specialized processor technologies (AMD SEV-SNP, Intel TDX) and modern GPUs (NVIDIA H100, H200). The core principle: data remains encrypted even during processing, and neither the service provider nor the cloud operator can access it.
This is fundamentally different from a "secure cloud service." With Confidential Computing, the service provider and cloud operator are technically excluded from the data – not by contractual agreement but enforced by hardware.
Privatemode AI in Practice
Privatemode is an AI service that implements this security model. In a real-world healthcare workflow, it might look like this:
A physician records a patient consultation using a clinical application. The application transmits the audio file to Privatemode over an encrypted channel. Inside a Trusted Execution Environment, the audio is transcribed and processed. The result is returned to the application in encrypted form, where it is converted into a structured clinical note or report.
The physician gains full AI functionality, without any third party having technical access to the content of the patient conversation. There is no need to worry about whether sensitive data is being exposed.
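From the application's side, such a workflow can be sketched as an ordinary API call. The endpoint URL, model name, and payload shape below are assumptions for illustration (a locally running proxy exposing an OpenAI-compatible chat endpoint) – consult the Privatemode documentation for the actual integration:

```python
import json
import urllib.request

# Assumed address of a local proxy that handles encryption and attestation.
PROXY_URL = "http://localhost:8080/v1/chat/completions"

def clinical_note_payload(transcript: str, model: str = "latest") -> dict:
    """Build an OpenAI-compatible chat request asking for a structured note.
    The transcript only ever reaches the local proxy, which (in this model)
    encrypts it end-to-end into the attested environment."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Turn the consultation transcript into a structured "
                        "clinical note with history, findings, assessment, "
                        "and diagnosis."},
            {"role": "user", "content": transcript},
        ],
    }

def send(payload: dict) -> dict:
    """POST the request to the proxy and return the parsed JSON response."""
    req = urllib.request.Request(
        PROXY_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# send(clinical_note_payload("Patient reports a persistent cough."))
# would POST to the proxy; not executed here.
```

The point of the design: the application code looks like any standard AI integration, while the encryption and operator exclusion happen transparently underneath.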
Operational advantages at a glance: full cloud scalability without on-premises GPU investment, and patient data that remains protected during processing – enforced by hardware rather than by contract.
Already Proven at Scale in Healthcare
Confidential Computing from Edgeless Systems is already used in Germany’s electronic health record system (the ePA). For more than 50 million insured individuals, a technical operator exclusion is implemented. The backend infrastructure therefore has no access to patient data. The security principle is thus productively established in a highly regulated healthcare environment.
Privatemode AI transfers this same security model to AI applications in clinical care, enabling verifiably protected processing of model requests. In a solution brief, NVIDIA describes Privatemode AI as the first generative AI framework that keeps prompts encrypted at all times.
The situation is clear: documentation consumes roughly a third of physicians' working time, AI demonstrably reduces that burden, yet standard cloud APIs expose patient data and on-premises deployments do not scale.
Confidential AI closes this gap. It combines the scalability of modern cloud infrastructure with technically enforced operator exclusion. Sensitive patient data is protected not just organizationally, but architecturally – at the hardware level.
Confidential Computing is already in production for more than 50 million patients through Germany's national health record infrastructure. Privatemode AI brings this same security model to everyday clinical AI applications. For healthcare software vendors, this means: generative AI can be integrated into clinical workflows without losing control over sensitive patient data.
Want to learn more about how verifiably secure AI can be used in healthcare delivery? Learn how Privatemode AI can be integrated into your existing system.
© 2026 Edgeless Systems GmbH