Privatemode is the first AI service with true end-to-end confidential computing. This page describes why you need this and how it works.
Current AI services—like ChatGPT for end users and AWS Bedrock for businesses—don't have technical mechanisms in place to enforce data security and privacy end-to-end.
Thus, your data—such as prompts and responses—remains vulnerable to inside-out leaks and outside-in attacks. This is why many businesses and individuals are reluctant to share sensitive data with AI services.
Potential threats include malicious insiders and hackers.
In Privatemode, your data is processed in a shielded environment. This environment is created with the help of a hardware-based technology called confidential computing, which keeps your data encrypted even during processing in main memory. The technology also makes it possible to remotely verify the integrity of the environment (remote attestation).
As a result, you can process even your sensitive data with AI.
In Privatemode, prompts and responses are fully protected from external access. Prompts are encrypted client-side using AES-256 and decrypted only within Privatemode’s confidential-computing environment (“the box”), enforced by AMD CPUs and Nvidia H100 GPUs. Within the box, the data remains encrypted in use, ensuring it never appears as plaintext in main memory.
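To make the client-side step concrete, here is a minimal sketch of encrypting a prompt with AES-256-GCM before it leaves your machine. It assumes a 256-bit symmetric key has already been established with the attested environment; key negotiation and all function names are illustrative, not the actual Privatemode implementation.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_prompt(prompt: str, key: bytes) -> tuple[bytes, bytes]:
    """Encrypt a prompt with AES-256-GCM before it leaves the client.

    `key` is a 32-byte symmetric key; in Privatemode, key handling is
    performed by the app or proxy (simplified here for illustration).
    """
    nonce = os.urandom(12)  # fresh 96-bit nonce per message
    ciphertext = AESGCM(key).encrypt(nonce, prompt.encode(), None)
    return nonce, ciphertext

def decrypt_response(nonce: bytes, ciphertext: bytes, key: bytes) -> str:
    """Decrypt a response received from the confidential environment."""
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

# Example: the plaintext prompt never leaves the client unencrypted.
key = AESGCM.generate_key(bit_length=256)  # stands in for the negotiated key
nonce, ct = encrypt_prompt("Summarize this contract ...", key)
print(decrypt_response(nonce, ct, key))
```

In practice, the Privatemode app or proxy performs this encryption transparently; the sketch only illustrates that your data is ciphertext from the moment it leaves your device.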
The CPUs and GPUs enforcing Privatemode's confidential-computing environment issue cryptographic certificates for all software running inside the environment. With these certificates, the integrity of the entire Privatemode service can be verified.
This is where your Privatemode app or proxy comes into play. It validates the certificates before exchanging any (encrypted) data with the Privatemode service. Thus, you can be sure that your data is only shared with the authentic runtime-encrypted Privatemode service.
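The following sketch illustrates the order of operations: the client verifies an attestation statement (a vendor-signed measurement of the software inside the box) against an expected reference value, and only then releases any encrypted data. The real AMD and Nvidia attestation formats and the Privatemode verification logic are more involved; all structures and names below are simplified placeholders.

```python
from dataclasses import dataclass
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

@dataclass
class AttestationReport:
    """Simplified stand-in for a hardware attestation report."""
    measurement: bytes  # hash of the software stack inside the box
    signature: bytes    # signature rooted in the CPU/GPU vendor

def verify_attestation(report: AttestationReport,
                       vendor_key: ec.EllipticCurvePublicKey,
                       expected_measurement: bytes) -> bool:
    """Accept the environment only if the report is genuinely signed and
    the measured software matches the expected reference value."""
    try:
        vendor_key.verify(report.signature, report.measurement,
                          ec.ECDSA(hashes.SHA384()))
    except InvalidSignature:
        return False
    return report.measurement == expected_measurement

def send_prompt(report, vendor_key, expected_measurement, encrypted_prompt):
    # Refuse to exchange even encrypted data with an environment
    # whose integrity could not be verified.
    if not verify_attestation(report, vendor_key, expected_measurement):
        raise RuntimeError("attestation failed: refusing to send data")
    # ... transmit encrypted_prompt to the verified Privatemode endpoint ...

# Demo with a locally generated key standing in for the vendor root key:
vendor_priv = ec.generate_private_key(ec.SECP384R1())
measurement = b"\x11" * 48  # pretend hash of the expected software stack
report = AttestationReport(
    measurement=measurement,
    signature=vendor_priv.sign(measurement, ec.ECDSA(hashes.SHA384())),
)
assert verify_attestation(report, vendor_priv.public_key(), measurement)
```

The key property is the ordering: verification happens before any data exchange, so an environment running unexpected software never receives your (encrypted) prompts in the first place.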
Privatemode is architected such that user data can neither be accessed by the infrastructure provider (for example, Azure), nor the service provider (we, Edgeless Systems), nor other parties such as the provider of the AI model (for example, Meta). While confidential-computing mechanisms prevent outside-in access, sandboxing mechanisms and end-to-end remote attestation prevent inside-out leaks.