May 11, 2026

Why a third-party confidential-computing layer is required

Even when your cloud provider offers confidential computing, gaps remain. Here's what they deliver, where they stop, and why Contrast and Privatemode AI complete the picture.

Moritz Eckert

VP Product & Technology

Executive summary 

Confidential computing against the cloud service provider (CSP) is a binary property. Either the CSP and its administrators are structurally unable to read your data, or they are not. There is no “mostly confidential”, no “partial attestation”. The whole purpose of the technology is to remove the cloud provider from the runtime trust boundary, and the chain of evidence that proves this either runs through parties independent of the provider or it does not.

Whether it does comes down to verification. You have to be able to prove, independently and with evidence rooted in hardware, that the workload running on your data is the exact code you intended, on a trusted runtime. Cloud providers, both the hyperscalers and the growing crop of regional and second-tier providers now marketing confidential computing, ship the silicon-level primitives such as confidential VMs and, increasingly, confidential containers. But they ship them inside a stack where the verifier and the party being verified are the same, and where the source code needed to reproduce attestation reference values is not made available to customers or auditors.

Every provider-native attestation flow ultimately depends on that same provider, every stack is proprietary, and none of them integrates end-to-end into the Kubernetes and infrastructure-as-code layer that production cloud workloads actually run on. On the binary view of confidential computing, that lands on the wrong side of the line: not confidential.

Edgeless Systems’ products are designed to land on the right side: independent, customer-verifiable attestation rooted in open-source, reproducibly-built code, integrated into Kubernetes the way cloud-native systems are designed to be. 

The decision you are really making 

When you adopt confidential computing in the cloud, you are answering one question: Who is allowed to read this data? The main point of confidential computing is to remove the CSP, its datacenter staff, hypervisor operators, and the shared infrastructure layer from that list completely, not partially. The supporting technology is well-defined: hardware-backed Trusted Execution Environments (AMD SEV-SNP, Intel TDX), runtime memory encryption, and remote attestation, which CSPs offer as primitives. The technology is sound. The question is whether the way it is delivered preserves the binary property the technology was created to provide, or quietly turns it into a softer claim that the CSP can still vouch for itself. 

Adopting confidential computing successfully, however, is not a matter of switching on a VM flag. The decision splits into two harder questions: 

  1. Whose word do I take that the right code is running on the right hardware? The attestation question. 
  2. How do I integrate this into the rest of my cloud (Kubernetes, key management, storage, IaC, and identity) without rebuilding everything around a vendor-specific SDK? The integration question. 

CSP-native confidential computing offerings do not answer either question well. The two pillars below explain why. 

Pillar 1: Attestation must be independent of the party it excludes 

The threat model that gets organizations interested in confidential computing begins with one assertion: the CSP must not be in our trusted computing base. Every claim downstream of that (privacy, sovereignty, regulatory compliance, multi-party trust) depends on it. 

Attestation is the mechanism that makes the claim verifiable. A workload measures itself, the hardware signs the measurements with a key the CSP cannot forge, and the data owner, or an auditor acting on their behalf, checks those measurements against known-good reference values for the code they expect to be running. The chain of trust ends at the silicon vendor, not the cloud provider. 
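The chain can be sketched in a few lines. This is a deliberately simplified illustration, not a real attestation protocol: actual TEEs sign reports with an asymmetric key whose certificate chains to the silicon vendor (e.g. AMD’s VCEK), which is mocked here with a shared HMAC key.

```python
import hashlib
import hmac

# Stand-in for the key fused into the silicon. In reality this is an
# asymmetric key whose certificate chains to the hardware vendor, so the
# CSP cannot forge signatures with it.
HARDWARE_KEY = b"silicon-vendor-rooted-key"

def sign_measurement(workload: bytes) -> tuple[str, str]:
    """What the TEE does: measure the workload, then sign the measurement."""
    measurement = hashlib.sha256(workload).hexdigest()
    signature = hmac.new(HARDWARE_KEY, measurement.encode(), "sha256").hexdigest()
    return measurement, signature

def verify(measurement: str, signature: str, reference_value: str) -> bool:
    """What the data owner (or their auditor) does: check that the signature
    is genuine, then that the measurement matches the known-good reference
    value. Neither check involves the cloud provider."""
    expected = hmac.new(HARDWARE_KEY, measurement.encode(), "sha256").hexdigest()
    return hmac.compare_digest(signature, expected) and measurement == reference_value
```

The point of the sketch is the second check: a valid signature alone proves only that *some* workload ran in a TEE; the comparison against an independently established reference value is what binds the evidence to the code you intended.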

This only works if two preconditions are met: 

  • The reference values are independently established. If you don’t know exactly what binary should be running, attestation is just a signed receipt for “something”. This requires the workload’s source to be public and its builds to be reproducible, so you or your auditor can rebuild from source and arrive at the same hash that the attestation reports. Edgeless Systems’ products are open-source and reproducibly built; cloud providers’ confidential-computing stacks are not. 
  • The verification path does not depend on the party you are excluding. If the CSP retrieves the evidence, runs it through their own verification service, and hands you back a “looks good” response, you have not excluded them. You have asked them to police themselves. 
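The first precondition is mechanically checkable: rebuild the published source, hash the resulting artifact, and compare it to the measurement the attestation report carries. A minimal sketch (the digest format and file handling are illustrative):

```python
import hashlib

def digest(path: str) -> str:
    """SHA-256 of a build artifact, the form reference values usually take."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_reference(rebuilt_artifact: str, attested_measurement: str) -> bool:
    """True only if an independent rebuild reproduces the attested measurement.
    This is only possible when the source is public and the build reproducible."""
    return digest(rebuilt_artifact) == attested_measurement
```

Without reproducible builds, there is nothing to put on the right-hand side of that comparison except a value the provider hands you.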

How BSI C5:2026 grades attestation 

The 2026 update of the BSI Cloud Computing Compliance Criteria Catalogue (C5) introduces explicit confidential-computing criteria. OPS-32 requires a documented technical framework using TEEs, hardware attestations, and key management, and explicitly states that “neither the cloud service provider nor any other unauthorized entity shall be able to access the cloud service customer data or the keys used for protecting that data” (OPS-32.03B). OPS-33 then requires that remote attestation be “based on cryptographic means rooted in trusted hard- and software” (OPS-33.02B) and that the cloud service provider expose “an interface that allows the customer to verify the integrity of the remote attestation” (OPS-33.03B). 

OPS-33.01AC ranks four operational scenarios for where attestation evidence is verified. The grading is unambiguous: 

| # | Operational scenario (C5:2026 OPS-33.01AC) | Trust level |
|---|---|---|
| 1 | Customer retrieves evidence from the TEE and verifies it in their own trusted environment. | Very strong |
| 2 | Provider runs verification, but the customer re-verifies the evidence in their own trusted environment. | Very strong |
| 3 | Customer retrieves evidence and sends it to a verification service they trust. | Strong |
| 4 | Provider retrieves and verifies evidence, and returns only a result to the customer. | Weak |

Most CSP-native confidential-computing experiences fall into scenario 3 or 4 by default. The C5 grading uses calibrated language, “weak” and “strong”, but the binary view is sharper: in scenario 4, the CSP is verifying itself, and you cannot use the result to prove that the CSP cannot read your data. The promise of confidential computing against the CSP is not partially fulfilled in that scenario, it is not fulfilled at all. Scenarios 1 and 2, the only “very strong” outcomes, are the only ones where the verifying layer is independent of the provider. That is structurally what Contrast and Privatemode AI are designed to deliver: evidence retrieved directly from the TEE, verified by the customer (or by the Contrast Coordinator on the customer’s behalf, itself a confidential, attestable workload), against reference values the customer can establish from open source. 

Pillar 2: A confidential layer must integrate into the cloud you use 

Confidential computing as cloud providers deliver it today is a primitive: a VM that is encrypted in memory, with hardware attestation. But organizations don’t move to the cloud for VMs. They move for the layer above: managed Kubernetes, infrastructure-as-code, identity, key management, observability, GitOps. A confidential layer that ignores that reality forces an unattractive choice between security and operability. 

End-to-end protection for a cloud-native deployment requires more than a confidential VM. In practice, it means: 

  • Confidentiality in use. Workloads run inside CVMs with hardware-rooted attestation, table stakes that CSPs already provide. 
  • Confidentiality at rest and in transit, tied to attestation. Container images, secrets, and persistent state are encrypted with keys released only after successful attestation; pod-to-pod traffic is automatically wrapped in mTLS using identities issued by the attestation authority. Not separate ceremonies, not separate SDKs. 
  • Workload-level attestation. Each pod is measured and verified individually against a runtime policy that pins the exact container image, configuration, and admitted code paths. The data owner verifies all of this before sending sensitive data. Without this layer, attestation has nothing concrete to bind to: it confirms that a confidential VM ran, but not that the code processing your data was the code you intended, or that nothing else was admitted to run alongside it. What looks like a confidentiality claim collapses into a signed receipt for “something ran inside a CVM”, and that does not establish confidentiality against any party, the CSP included. This is the property that regulatory frameworks such as Germany’s Gematik VAU specification mandate for ePA workloads, and that confidential AI services like Privatemode AI require by construction. 
  • Cloud-native, declarative integration. Confidential pods are selected via a Kubernetes RuntimeClass and standard annotations. Existing Helm charts, Kustomize overlays, ArgoCD pipelines, and IaC tools work unchanged. 
  • The same model on every cloud. The Edgeless approach works on bare metal, on managed Kubernetes services with hybrid bare-metal nodes, and across AMD SEV-SNP and Intel TDX. The proprietary CC offerings from cloud providers, hyperscaler and regional alike, each have their own attestation flow, key-release pattern, and SDK; adopting one is as much a portability decision as a security one. 
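The declarative integration above can be as small as one field in an existing pod spec. A hypothetical sketch using the standard Kubernetes RuntimeClass API; the class and handler names are illustrative placeholders, not the identifiers an actual installation registers:

```yaml
# Illustrative RuntimeClass; name and handler are placeholders.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: confidential
handler: confidential-vm
---
apiVersion: v1
kind: Pod
metadata:
  name: my-workload
spec:
  runtimeClassName: confidential   # the only change to an existing manifest
  containers:
    - name: app
      image: registry.example.com/app:1.0
```

Because the selection happens through a standard field, Helm charts, Kustomize overlays, and GitOps pipelines need no awareness of the confidential layer beyond this one line.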

Where provider-native offerings stop 

The table below summarizes where the gap sits. The CSPs deliver the silicon and the VM. What is missing is the verifying, integrating layer above it. Without that layer, the workload is encrypted in memory, but the binary security property, “the CSP cannot read this data”, does not actually hold, because the CSP still controls the verification path that would prove it. 

| Concern | CSP-native offering | Edgeless Systems |
|---|---|---|
| Hardware-backed memory encryption (CVMs) | Yes | Yes (same hardware) |
| Attestation independent of the party being excluded | No; CSP-controlled verification path | Yes; evidence retrieved and verified by the customer |
| Workload-level (per-pod) attestation | Generally no; VM- or service-level | Yes; each pod is measured and attested |
| Provider exclusion (CSP and cluster operator off the data path) | No; verification depends on the provider | Yes; enforced by hardware, manifest, and runtime policy |
| Cloud-native K8s integration (RuntimeClass, IaC) | Partial; proprietary SDKs and flows | Drop-in, declarative; existing Helm/Kustomize/ArgoCD |
| Portability across CSPs and on bare metal | No; CSP-specific | Yes; same model across clouds and on-prem |
| Encryption in transit with attested workload identities | Not built in | Built in |
| Encryption at rest with keys released only after attestation | Limited; usually requires additional services | Built in; keys released only after attestation |
| BSI C5:2026 OPS-33 attestation strength | Typically weak | Very strong |

What Edgeless Systems delivers 

Contrast: confidential Kubernetes, end-to-end. An open-source confidential-computing framework that runs each pod in its own CVM, attests it against a cryptographic manifest, and provides a service mesh with mTLS rooted in attestation. Existing containers run unchanged. Reproducible builds let any auditor establish reference values independently. Contrast runs on bare metal and on managed Kubernetes with hybrid bare-metal nodes. 
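A simplified illustration of the manifest idea, with field names of our own invention rather than Contrast’s actual schema: the manifest pins each workload to the runtime-policy digest it must attest to, and anything that cannot present a pinned digest is rejected.

```python
import hashlib

# Illustrative manifest: maps each workload to the digest of the runtime
# policy (image, configuration, admitted code paths) it must attest to.
# The schema and names here are invented for illustration.
manifest = {
    "frontend": hashlib.sha256(b"frontend-policy-v1").hexdigest(),
    "backend": hashlib.sha256(b"backend-policy-v1").hexdigest(),
}

def admit(workload: str, attested_policy_digest: str) -> bool:
    """A pod is admitted only if its attested policy digest matches the
    manifest entry for that workload; unknown workloads are rejected."""
    return manifest.get(workload) == attested_policy_digest
```

This is what gives attestation something concrete to bind to: a changed image, a tampered configuration, or an unexpected extra workload all produce a digest the manifest does not contain.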

Privatemode AI: a confidential SaaS, built on Contrast. It is a hosted GenAI inference service that customers consume through an OpenAI- and Anthropic-compatible API. It is not a Kubernetes layer that you deploy, it is a SaaS that you call. What sets it apart is the confidential layer underneath: the entire service runs on Contrast, every inference worker is individually attested by the client before any prompt is sent, and prompts and responses stay confidential even from Edgeless Systems and the underlying infrastructure. Privatemode AI is the proof point that, with Contrast as the foundation, even a SaaS provider can be excluded from its own customers’ data. 
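As a sketch of the client side, the snippet below builds a request in the OpenAI chat-completions format the article mentions. The model name is a placeholder, and the attestation handshake that would precede the actual HTTP call is only noted in a comment, not implemented.

```python
import json

def build_chat_request(prompt: str, model: str = "example-model") -> dict:
    """Payload in the OpenAI chat-completions format. The model name is a
    placeholder, not an actual Privatemode model identifier."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# In Privatemode's design, the client first verifies the inference worker's
# attestation evidence and only then sends the request; the HTTP call itself
# is omitted from this sketch.
payload = json.dumps(build_chat_request("Summarize this contract."))
```

The important property is the ordering: the prompt leaves the client only after attestation of the worker succeeds, which is what keeps it confidential even from the service operator.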

Both products share the same approach to attestation: open source, reproducibly built, hardware-rooted, and customer-verifiable. Contrast is the cloud-native confidential layer for Kubernetes workloads. Privatemode AI is what that layer enables for one of the workloads where provider exclusion is no longer optional: GenAI inference. 

The bottom line

Confidential computing is a binary property. The confidentiality claim either holds in a way the customer can verify, independent of the parties running the workload, or it does not. Cloud providers expose the underlying primitives, but they cannot also be the party that verifies them; and the offerings they layer on top, even the ones that reach further up the stack, are proprietary and tied to a single cloud, which puts verification and portability back in the provider’s hands. To land on the verifiable side of that line, and to prove it to an auditor, a regulator, or a customer, every link in the verification chain has to be independent of the parties you are trying to exclude: the code being verified, the reference values it is checked against, and the cryptographic evidence itself. Closing both gaps, verification and integration, is what Edgeless Systems’ products are built for. 
