
Confidential AI in Clinical Care

How AI Safely Automates Documentation and Relieves Physician Burden

Lorenz Tabertshofer

March 3, 2026

The Documentation Paradox 

Clinical care runs on attention. Physicians are meant to listen, diagnose, and treat. But in practice, doctors spend roughly one-third of their working hours on documentation and administrative tasks. This is not a minor inconvenience – it is a structural efficiency problem that compromises care quality, worsens physician shortages, and makes medicine a less attractive profession. 

Artificial intelligence – particularly large language models and speech recognition – offers a real solution. Automated documentation measurably reduces administrative burden, gives physicians time back for patients, and is viewed positively by patients themselves. At the same time, AI-assisted documentation introduces new data security requirements, especially when cloud-based solutions are involved. 

The Problem: Documentation Consumes Time and Resources 

A 2025 survey of 400 German hospitals illustrates a problem that mirrors findings across the US:  

Physicians spend nearly three hours per day on documentation and administrative tasks – roughly one-third of their working time. 

The scale of the problem is significant: 

  • 88 percent of hospitals report that documentation burden has increased over the last year 
  • If documentation time per physician were reduced by just one hour per day, Germany would gain the equivalent of approximately 22,000 additional physician-hours available for direct patient care 
  • A physician working a 40-hour week loses roughly 12 hours per week to administrative tasks 
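
A quick back-of-the-envelope check shows how these figures fit together: roughly 12 hours across a five-day week works out to about 2.4 hours per workday ("nearly three hours") and 30 percent of a 40-hour week (roughly one-third). A minimal sketch, using the approximate survey values:

```python
# Rough consistency check of the survey figures (all values approximate).
admin_hours_per_week = 12   # "roughly 12 hours per week"
weekly_hours = 40           # a 40-hour week
workdays = 5

per_day = admin_hours_per_week / workdays      # hours lost per workday
share = admin_hours_per_week / weekly_hours    # share of the working week

print(f"≈{per_day:.1f} h per workday, {share:.0%} of the week")
# ≈2.4 h per workday, 30% of the week
```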

This is not an isolated problem at individual institutions. It represents a systemic drain on highly skilled professional time.

The consequences are wide-ranging. Administrative overload contributes to dissatisfaction and burnout, compounding an already urgent physician shortage. Less time per patient means reduced attention and higher risk of error. Wait times grow because available capacity is tied up in paperwork. The economic cost is substantial: every hour spent on documentation is an hour lost to patient care. 

Germany's Federal Ministry of Health confirms the growing burden of bureaucracy and reports that 90 percent of nursing staff feel severely overwhelmed by administrative requirements. 

The central question, therefore, is not whether documentation is necessary – it is how it can be organized more efficiently without compromising safety or accuracy. 

The Solution: AI-Assisted Documentation in Practice 

Modern language models can automatically transcribe physician-patient consultations and generate structured clinical documentation. The model listens to or reads the conversation, extracts relevant clinical information, and formats it automatically into the required documentation structure, such as clinical history, clinical findings, assessment, and diagnosis. AI systems can also structure free-text notes, summarize documents, and support administrative workflows across the care continuum. The software handles formatting and can link outputs to existing patient data. For physicians, a substantial portion of manual data entry and structuring work is eliminated. 
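
The transcribe-extract-format flow described above can be sketched in a few lines. This is an illustration only: `transcribe_audio`, `chat_complete`, and the section names are hypothetical placeholders standing in for any speech-to-text and LLM backend, not the API of a specific product.

```python
# Illustrative sketch of an AI documentation pipeline. The callables
# transcribe_audio(path) -> str and chat_complete(prompt) -> str are
# hypothetical placeholders; any backend can be plugged in.

SECTIONS = ["History", "Findings", "Assessment", "Diagnosis"]

PROMPT_TEMPLATE = (
    "Extract the clinically relevant information from the consultation "
    "transcript below and return a note with exactly these sections: "
    "{sections}.\n\nTranscript:\n{transcript}"
)

def draft_clinical_note(audio_path, transcribe_audio, chat_complete):
    """Transcribe a consultation, then ask the LLM for a structured note."""
    transcript = transcribe_audio(audio_path)
    prompt = PROMPT_TEMPLATE.format(
        sections=", ".join(SECTIONS), transcript=transcript
    )
    return chat_complete(prompt)
```

Because the backends are injected as functions, the same pipeline can run against a local model, a cloud API, or a confidential-computing service without changing the workflow logic.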

A real-world pilot at Charité Berlin, one of Europe's largest university hospitals, demonstrates that this approach works in practice. Approximately 70 physicians tested an AI system for real-time clinical documentation across multiple specialties, with several thousand patient encounters analyzed. 

The results: 

  • Significantly reduced documentation burden 
  • More time for focused, attentive patient conversations 
  • Consistently positive patient feedback 

Prof. Dr. Alexander Meyer from Charité's Institute for Artificial Intelligence in Medicine summarized the findings: 

"Our evaluation shows that the documentation burden was significantly reduced with this technology. Our physicians noticeably had more time for focused conversations with their patients. Patient feedback was consistently positive." (translated from German)

The system has been available in German hospitals since October 2024.  

The number of vendors offering AI solutions for clinical documentation is already growing. But the critical question remains: what happens to the sensitive patient data being processed? 

The Security Problem: Privacy, Compliance, and Trust 

Why Standard Cloud Solutions Fall Short 

AI language models are computationally intensive – they require expensive GPUs and significant processing power. For hospitals and clinics, the path to public cloud infrastructure is often unavoidable. But this is precisely where new problems emerge. 

Standard cloud AI APIs – such as ChatGPT, Claude, or comparable services – operate on the following model: 

  • The physician or hospital sends patient data to the AI in the cloud 
  • The data is processed in plaintext – the cloud provider and its staff can access it 
  • In some cases, inputs are used to improve the underlying models 

GDPR and physician confidentiality rules require the highest level of care when handling patient data. Sending patient consultations to external AI services is legally complex and viewed critically by many privacy experts. In the worst case, hospitals prohibit AI tools entirely due to security concerns – leaving physicians without precisely the relief they urgently need.

The On-Premises Dilemma 

Some organizations turn to on-premises deployments as an alternative. This seems safer, since the data never leaves the building. But in practice, a different set of problems emerges: 

  • GPU hardware is expensive and scarce: Modern AI models require NVIDIA H100-class GPUs or equivalent. Purchase prices run into hundreds of thousands of dollars, and availability remains constrained. 
  • No real scalability: On-premises infrastructure cannot be scaled flexibly to match fluctuating demand. 
  • High operating costs: Running your own data center requires specialized IT staff, cooling, maintenance, and physical security – a continuous drain on IT budgets. 
  • Slower model updates: Cloud providers update their models regularly. On-premises systems often run on outdated versions. 

The Solution: Confidential AI 

Security and scalability do not have to be mutually exclusive. A technology called Confidential AI makes it possible to run AI workloads in the cloud while technically protecting patient data – even during active processing. 

Confidential AI is built on Confidential Computing, a hardware and software technology that encrypts data not only in transit and at rest, but also while it is being processed in memory. This is made possible by specialized processor technologies (AMD SEV-SNP, Intel TDX) and modern GPUs (NVIDIA H100, H200). The core principle: 

  • Processing occurs inside an isolated Trusted Execution Environment (TEE) 
  • The cloud provider, the host operating system, and system administrators cannot access data in plaintext – not even during active AI processing 
  • Remote attestation provides cryptographic proof of exactly what software is running inside the isolated environment. Data security becomes verifiable, rather than dependent on contractual promises from a cloud vendor 
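
The remote attestation step can be illustrated with a simplified sketch: before sending any data, the client compares measurements from a (hypothetical) attestation report against known-good reference values. Real attestation additionally verifies a hardware-rooted signature chain; the report format and component names below are illustrative assumptions, not a specific vendor's API.

```python
# Simplified illustration of remote attestation: accept the TEE only if
# every measured software component matches a trusted reference hash.
# Real schemes also check a signature chain rooted in the CPU/GPU vendor.
import hashlib

EXPECTED_MEASUREMENTS = {
    # Reference hashes the client trusts (placeholder values).
    "firmware": hashlib.sha256(b"trusted-firmware-v1").hexdigest(),
    "ai_service": hashlib.sha256(b"ai-service-v1").hexdigest(),
}

def verify_attestation(report):
    """Return True only if all measured components match the references."""
    return all(
        report.get(name) == digest
        for name, digest in EXPECTED_MEASUREMENTS.items()
    )
```

Only after this check succeeds does the client establish an encrypted channel into the enclave – this is what turns "trust us" into cryptographic proof.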

This is fundamentally different from a "secure cloud service." With Confidential Computing, the service provider and cloud operator are technically excluded from the data – not by contractual agreement, but by hardware enforcement. 

Privatemode AI in Practice 

Privatemode is an AI service that implements this security model. In a real-world healthcare workflow, it might look like this: 

A physician records a patient consultation using a clinical application. The application transmits the audio file to Privatemode over an encrypted channel. Inside a Trusted Execution Environment, the audio is transcribed and processed. The result is returned to the application in encrypted form, where it is converted into a structured clinical note or report. 

The physician gains full AI functionality, without any third party having technical access to the content of the patient conversation. There is no need to worry about whether sensitive data is being exposed. 
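
From the client side, the workflow above might be sketched as follows. This is only an illustration: the local proxy address, the endpoint paths, and the request shapes are assumptions modeled on common OpenAI-style APIs, not Privatemode's actual interface – consult the service documentation for the real API.

```python
# Illustrative client-side request construction for the workflow above.
# The proxy address and endpoint paths are hypothetical assumptions.
import base64
import json

PROXY = "http://localhost:8080"  # hypothetical local encryption proxy

def build_transcription_request(audio_bytes):
    """Package consultation audio for a speech-to-text endpoint."""
    body = json.dumps({"audio": base64.b64encode(audio_bytes).decode()})
    return f"{PROXY}/v1/audio/transcriptions", body

def build_note_request(transcript):
    """Ask the model to turn the transcript into a structured note."""
    body = json.dumps({
        "messages": [
            {"role": "system",
             "content": "Turn the transcript into a structured clinical note."},
            {"role": "user", "content": transcript},
        ],
    })
    return f"{PROXY}/v1/chat/completions", body
```

The clinical application only ever talks to the local proxy; encryption into the Trusted Execution Environment happens before any data leaves the hospital's control.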

Operational advantages at a glance

  • No persistent storage of prompts for model training 
  • Access to high-performance cloud GPU infrastructure without capital investment 
  • Flexible scaling based on actual usage 
  • Integration into existing systems without changing workflow logic 

Already Proven at Scale in Healthcare 

Confidential Computing from Edgeless Systems is already used in Germany’s electronic health record system (the ePA). For more than 50 million insured individuals, operator exclusion is enforced technically: the backend infrastructure has no access to patient data. The security principle is thus established in production in a highly regulated healthcare environment.

Privatemode AI transfers this same security model to AI applications in clinical care, enabling verifiably protected processing of model requests. In a solution brief, NVIDIA describes Privatemode AI as the first generative AI framework that keeps prompts encrypted at all times.

AI in Clinical Care Requires Both Scale and Security 

The situation is clear: 

  • Documentation burden in clinical care is structurally too high. 
  • AI can measurably reduce that burden. 
  • Standard cloud solutions do not fully resolve the data security problem. On-premises approaches, meanwhile, frequently fail on scalability and cost. 

Confidential AI closes this gap. It combines the scalability of modern cloud infrastructure with technically enforced operator exclusion. Sensitive patient data is protected not just organizationally, but architecturally – at the hardware level. 

Confidential Computing is already in production for more than 50 million patients through Germany's national health record infrastructure. Privatemode AI brings this same security model to everyday clinical AI applications. For healthcare software vendors, this means: generative AI can be integrated into clinical workflows without losing control over sensitive patient data. 

Want to learn more about how verifiably secure AI can be used in healthcare delivery? Learn how Privatemode AI can be integrated into your existing system.

Privatemode – use AI without the security and privacy worries

