Privatemode is the first AI service with true end-to-end
confidential computing. This page explains why you
need it and how it works.
Security and encryption
The problem

Most AI platforms, including popular services like ChatGPT or AWS Bedrock, decrypt your prompts on their own servers and run models on that plaintext. They lack true end‑to‑end mechanisms, so at least one internal system inevitably sees and can store your data.
When prompts and responses are handled in plaintext, they are visible to privileged operators, internal services, logging systems, and integrated tools. This makes them vulnerable to both inside‑out leaks from malicious insiders and outside‑in attacks from hackers who target those systems.
Too risky for sensitive and regulated data
Because of this exposure, security and compliance teams treat generic AI services as unsafe for personal, financial, or other regulated data. Many organizations—and even individual professionals—either block these tools or avoid sharing sensitive information with them altogether to prevent breaches.
Security and encryption
The solution

Your prompts and responses are processed inside hardware‑isolated confidential‑computing environments, not in generic cloud VMs. Data stays encrypted in transit, at rest, and even in main memory, so it never appears in plaintext on the surrounding infrastructure.
Before any request is decrypted, remote attestation verifies that only audited code and approved models are running in the enclave. This gives your security and compliance teams a technical proof of integrity instead of relying solely on provider policies and contracts.
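The attestation check described above can be sketched as follows. This is an illustrative sketch only, not the actual Privatemode verification code: the measurement names, reference values, and the `verify_attestation` helper are assumptions made for the example.

```python
import hashlib

# Illustrative reference values: hashes of the audited enclave build and
# approved model that the client expects to find in the attestation report.
EXPECTED_MEASUREMENTS = {
    "enclave_image": hashlib.sha384(b"audited-enclave-build-v1").hexdigest(),
    "model": hashlib.sha384(b"approved-model-weights-v1").hexdigest(),
}

def verify_attestation(report: dict) -> bool:
    """Accept the enclave only if every reported measurement matches the
    expected reference value. In a real deployment the report is also
    signed by the CPU (e.g. AMD SEV-SNP or Intel TDX) and that signature
    is checked first; the signature step is omitted in this sketch."""
    return all(
        report.get(name) == expected
        for name, expected in EXPECTED_MEASUREMENTS.items()
    )

# A genuine environment reproduces the expected measurements ...
genuine = dict(EXPECTED_MEASUREMENTS)
print(verify_attestation(genuine))   # True

# ... while a tampered or downgraded environment does not, and is rejected
# before any key or prompt is exchanged.
tampered = dict(genuine, enclave_image=hashlib.sha384(b"backdoored").hexdigest())
print(verify_attestation(tampered))  # False
```

The point of the check is that trust is anchored in measurements of the running code, not in the operator's promises.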
Because encryption extends all the way through model execution, you can safely use generative AI on personal, financial, or other regulated data. Workloads that were previously blocked—like handling customer records, contracts, or health information—become viable without changing your privacy posture.
Foundations
Contrast is the most advanced platform for confidential computing at scale. Contrast shields entire container deployments on Kubernetes with confidential computing. Privatemode is made possible by Contrast.
Security and encryption
Three pillars for E2E data encryption
Prompts and responses are encrypted by the client proxy and decrypted only inside runtime‑encrypted workers running in confidential‑computing enclaves. Data is re‑encrypted before it leaves the worker, so the surrounding infrastructure never sees plaintext.
Hardware‑backed attestation reports prove exactly which software and models are running in the enclave. The client verifies these reports before exchanging keys or prompts, so any tampered or downgraded environment is rejected by default.
The enclave shields the AI worker and its memory from the rest of the stack, including the cloud provider and Privatemode operators. Keys and plaintext never leave this boundary, which means infrastructure, service operators, and model vendors cannot read your data.
The docs and open‑source components describe the full architecture from client proxy to AI workers, including key flows and threat model. With reproducible builds and integration guides, security and engineering teams can verify the design and plug it into their own environment with confidence.
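The key flow behind the first pillar can be sketched in a few lines. For illustration, the sketch uses a toy XOR stream cipher derived from SHA-256; this is not the cipher Privatemode uses (production systems use authenticated encryption such as AES-GCM), but it shows the essential property: everything between the client proxy and the enclave worker sees only ciphertext.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR the data with a SHA-256-derived keystream.
    Purely illustrative; real deployments use AES-GCM or similar AEAD."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. The client proxy holds the key. It is released only to a worker
#    that has passed attestation (see the second pillar).
key = secrets.token_bytes(32)

# 2. The prompt is encrypted before it leaves the client.
prompt = b"Summarize this confidential contract."
ciphertext = keystream_xor(key, prompt)
assert ciphertext != prompt  # the surrounding infrastructure sees only this

# 3. Only the enclave worker, which holds the key, can recover the plaintext.
assert keystream_xor(key, ciphertext) == prompt
```

The same mechanism runs in reverse for responses: the worker re-encrypts its output before it leaves the enclave boundary.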

FAQ
Technical Details
Privatemode encrypts your data before it leaves your device and keeps it protected even during AI processing. On the client side, the Privatemode proxy manages remote attestation and end-to-end encryption. It encrypts all inference requests and decrypts AI responses, handling all communication with the service. Encryption keys are never shared with anyone outside of your local proxy and the isolated AI worker.
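Because the proxy handles attestation and encryption transparently, applications talk to it like a regular local API endpoint. The snippet below is a hypothetical sketch: the port, path, and model name are assumptions to be replaced with your actual proxy configuration.

```python
import json
import urllib.request

# Assumed local endpoint and model name; adjust to your proxy setup.
PROXY_URL = "http://localhost:8080/v1/chat/completions"
payload = {
    "model": "latest",
    "messages": [{"role": "user", "content": "Summarize this contract ..."}],
}

# The request goes to the local proxy, which attests the remote enclave,
# encrypts the prompt, and decrypts the response before returning it.
req = urllib.request.Request(
    PROXY_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # run this with the proxy running
```

From the application's point of view nothing changes; the end-to-end encryption happens entirely inside the proxy and the enclave.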
We're happy to show you around and give an overview of what's possible.