Security and encryption in Privatemode

Privatemode is the first AI service with true end-to-end confidential computing. This page describes why you need this and how it works.

[Illustration: an AI service exposed to threats]
The problem

Existing AI services process your data in plaintext

Current AI services—like ChatGPT for end users and AWS Bedrock for businesses—don't have technical mechanisms in place to enforce data security and privacy end-to-end.

Thus, your data—such as prompts and responses—remains vulnerable to inside-out leaks and outside-in attacks. This is why many businesses and individuals are reluctant to share sensitive data with AI services.

Potential threats include malicious insiders and hackers.

The solution

Privatemode protects your data end-to-end

In Privatemode, your data is processed in a shielded environment. This environment is created with the help of a hardware-based technology called confidential computing, which keeps your data encrypted even during processing in main memory. The technology also makes it possible to verify the integrity of the environment from afar.

Finally, you can process even your sensitive data with AI.

[Illustration: an AI service protected against threats]

The three pillars of Privatemode

Pillar #1
Encryption at rest, in transit, and in use

End-to-end confidential computing

In Privatemode, prompts and responses are fully protected from external access. Prompts are encrypted client-side using AES-256 and decrypted only within Privatemode’s confidential-computing environment (“the box”), enforced by AMD CPUs and Nvidia H100 GPUs. Within the box, the data remains encrypted in use, ensuring it never appears as plaintext in main memory.
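The client side of this pillar can be sketched in a few lines. The following Go snippet shows what AES-256-GCM encryption of a prompt looks like before it leaves the client; the function name, key handling, and wire format are illustrative assumptions, not Privatemode's actual protocol:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encryptPrompt encrypts a prompt client-side with AES-256-GCM.
// The key never leaves the client; only nonce and ciphertext are sent.
func encryptPrompt(key, plaintext []byte) (nonce, ciphertext []byte, err error) {
	block, err := aes.NewCipher(key) // a 32-byte key selects AES-256
	if err != nil {
		return nil, nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	// Seal appends a 16-byte authentication tag to the ciphertext.
	return nonce, aead.Seal(nil, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32) // in practice, a key negotiated per session
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	nonce, ct, err := encryptPrompt(key, []byte("sensitive prompt"))
	if err != nil {
		panic(err)
	}
	fmt.Println(len(nonce), len(ct)) // 12 32
}
```

Because GCM is an authenticated mode, the service cannot tamper with the ciphertext unnoticed; decryption inside the confidential-computing environment fails if even one bit is modified in transit.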

Pillar #2
Remote attestation

End-to-end attestation and verification

The CPUs and GPUs enforcing Privatemode's confidential-computing environment issue cryptographic certificates for all software running inside the environment. With these certificates, the integrity of the entire Privatemode service can be verified.

This is where your Privatemode app or proxy comes into play. It validates the certificates before exchanging any (encrypted) data with the Privatemode service. Thus, you can be sure that your data is only shared with the authentic runtime-encrypted Privatemode service.

Pillar #3
Black-box architecture


Privatemode is architected such that user data cannot be accessed by the infrastructure provider (for example, Azure), by the service provider (us, Edgeless Systems), or by other parties such as the provider of the AI model (for example, Meta). While confidential-computing mechanisms prevent outside-in access, sandboxing mechanisms and end-to-end remote attestation prevent inside-out leaks.

Want to learn more about the security architecture of Privatemode?

Read the security section in the docs