Use Confidential AI with OpenCode

OpenCode is an open-source AI coding agent for the terminal. With Privatemode, every prompt and code snippet is encrypted end-to-end through confidential computing, keeping your codebase fully confidential.


Introduction

Use OpenCode for AI-assisted coding without exposing your source code


Terminal-based AI coding needs cloud models

OpenCode is an open-source agentic AI coding tool that runs in your terminal. It connects to cloud LLM providers to power code generation, editing, and reasoning across your project. For teams working with proprietary code or regulated applications, sending source code to a cloud provider means losing control of sensitive IP.

Privatemode keeps your code confidential

Privatemode provides an OpenAI-compatible API backed by state-of-the-art models running inside confidential computing environments. Configure OpenCode with a custom provider pointing to the Privatemode proxy, and your source code, prompts, and AI responses are encrypted end-to-end. Not even Privatemode can see your data.

Hardware-enforced, not just policy

The Privatemode proxy runs locally on your machine and encrypts all data before it leaves. On the server side, inference runs inside hardware-isolated confidential computing environments (AMD SEV / Intel TDX). The proxy verifies server integrity through remote attestation before every session. Your code is never stored and never used for model training.
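Starting the local proxy can look like the following sketch. The container image name, port, and `--apiKey` flag are taken from typical Privatemode deployments and may differ in your version; check the Privatemode documentation for the exact invocation.

```shell
# Run the Privatemode proxy locally (image name and flags are assumptions;
# verify against the current Privatemode docs).
# The proxy performs remote attestation of the server-side confidential
# computing environment and encrypts all traffic before it leaves your machine.
docker run -p 8080:8080 \
  ghcr.io/edgelesssys/privatemode/privatemode-proxy:latest \
  --apiKey <your-api-key>
```

Once running, the proxy exposes an OpenAI-compatible endpoint on `localhost:8080` that OpenCode can use as its provider base URL.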

Benefits

Why use Privatemode AI with OpenCode?

End-to-end encrypted coding sessions

Every prompt you type and every file OpenCode reads is encrypted by the local Privatemode proxy before leaving your machine. Responses are decrypted only on your device. The models run inside confidential computing environments protected by AMD SEV and Intel TDX hardware. Your source code, architecture decisions, and business logic remain confidential throughout the entire session.

Open-source AI coding, fully private

OpenCode is open source and connects to Privatemode through the standard OpenAI-compatible API. Configure a custom provider in your opencode.json file and start coding with gpt-oss-120b, a 120-billion-parameter model with 128k context and strong code generation capabilities.

Easy to set up

You only need to add the Privatemode proxy as an AI provider in your opencode.json. Your existing OpenCode setup gains end-to-end encryption in minutes.
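A minimal provider entry might look like the sketch below. The provider key `privatemode`, the `baseURL`, and the model ID follow the conventions described above, but the exact schema fields depend on your OpenCode version; treat this as an illustration, not a verbatim config.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "privatemode": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Privatemode",
      "options": {
        "baseURL": "http://localhost:8080/v1"
      },
      "models": {
        "gpt-oss-120b": {
          "name": "GPT-OSS 120B"
        }
      }
    }
  }
}
```

With this in place, OpenCode sends all requests to the local Privatemode proxy instead of a cloud provider's endpoint, so encryption happens before any data leaves your machine.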

Set up OpenCode to use Privatemode

Follow the step-by-step guide in the documentation to configure OpenCode with Privatemode as your inference provider.

FAQ

Frequently asked questions about OpenCode with Privatemode

Which models can I use with OpenCode through Privatemode?

Privatemode currently offers gpt-oss-120b, a 120-billion-parameter model with 128k context and strong code generation capabilities. It runs inside confidential computing environments with full encryption. Configure it as your model in opencode.json with the provider set to privatemode.