Every week brings new headlines about AI in browsers. We’ve been digging into the most popular ones, and what we found is worrying. Today, we want to share a safer, more private way to approach “memory” in the browser. To show what we mean, we’ve built an open-source AI extension for Google Chrome that connects to our Confidential Inference API – a system designed for privacy from the ground up.
Felix Schuster
October 21, 2025
Update (Oct 22): A few hours after we published this post, ChatGPT Atlas by OpenAI made its debut – and it aligns perfectly with the trend we described. Atlas features a “Browser Memories” option and emphasizes privacy-friendly design by allowing users to control what gets stored and used for inference.
That’s a step in the right direction, but it raises the same question we posed earlier: when browsing context is baked into an AI system, can users really keep track of what is remembered and used? Our doubts here are the same as with the other players.
The race for AI-enabled browsers is in full swing. Google Chrome's new AI features, Perplexity Comet, and the Dia browser (whose maker The Browser Company was acquired by Atlassian just last month) are three prominent examples.
We highlight these three because their announcements all revolve around a feature called “memory.” Unlike a traditional browser history, it allows semantic search through your browsing activity and lets large language models use that information as contextual input for their answers.
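None of the three vendors has published implementation details, but the underlying pattern is familiar from retrieval-augmented generation: embed snippets of visited pages, find the entries most similar to the user’s question, and pass them to the model as context. A minimal TypeScript sketch of that generic pattern (all names here are illustrative, not taken from any of these products):

```ts
// Generic "memory" pattern: embed page snippets, retrieve the most
// similar ones for a question, and hand them to an LLM as context.
interface MemoryEntry {
  url: string;
  snippet: string;
  embedding: number[]; // vector produced by some embedding model
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k memory entries closest to the embedded question.
function recall(query: number[], memory: MemoryEntry[], k = 5): MemoryEntry[] {
  return [...memory]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}

// Assemble the recalled snippets into contextual input for the model.
function buildPrompt(question: string, recalled: MemoryEntry[]): string {
  const context = recalled.map(e => `[${e.url}] ${e.snippet}`).join("\n");
  return `Context from your browsing history:\n${context}\n\nQuestion: ${question}`;
}
```

Everything sensitive happens in the two data flows around this pattern: what gets written into the memory store, and where the assembled prompt is sent for inference.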
The power of that feature is obvious. Both in our private lives and at work, we do almost everything inside the browser. If an LLM can see that context, it can help us remember what we’ve seen and even surface new ideas. When OpenAI introduced persistent memory for ChatGPT, the reaction was similar: suddenly the assistant seemed to understand us on a deeper level.
These features are as practical as they are privacy-sensitive. End users have learned that there’s no free lunch on the internet, especially when personal data becomes part of the product itself. It is very likely that the next generation of advertising will draw on these new layers of behavioral data, and the line between helpful and intrusive will blur quickly. For instance, Perplexity’s CEO said plainly before Comet’s launch that the browser will track everything users do online in order to sell hyper-personalized ads.
Dia, under Atlassian’s ownership, will pursue a different path: productivity within the SaaS ecosystem. As Atlassian’s CEO explained in the acquisition announcement, connecting data across enterprise tools creates massive efficiency gains. But the very connectivity that creates those gains also introduces a massive new security exposure.
In the past, a CISO only had to secure each silo individually. Now, when an AI can infer links between silos, the potential for data leakage grows exponentially. Atlassian was quick to reassure customers: “Security, compliance, and admin controls will be baked into every aspect of Dia.” But not every organization will pay for an enterprise plan, and many will find the very idea of cross-silo AI memory too risky to begin with. (And beyond that, there are entirely different issues to consider when thinking about workplace use, as this supposedly funny Instagram video from Dia illustrates, showing what a performance review based on your browsing history might look like.)
So yes, skepticism is warranted. And especially in the world of AI browser extensions, history hasn’t been kind. Every tool examined in a University College London (UCL) study leaked private data to the cloud without sufficient transparency or user consent.
When we reviewed what is publicly known about the new native implementations in Chrome, Comet, and Dia, we found some positive signs. There is more on-device processing, and there are attempts to prefilter sensitive data like financial information, health records, or private communications. But at the end of the day, the same core problem remains: the actual inference happens in the cloud, and users have no insight into what exactly is sent there.
We believe “memory” is a great idea. To be genuinely powerful and helpful, it needs to understand a lot about you. But how can that work while maintaining the level of privacy it deserves? As long as you want to harness the power of large cloud-based LLMs, there’s only one answer: Confidential Computing.
To make this real, we built Privatemode AI. Our API provides access to LLMs that run inside confidential computing environments, where data remains encrypted end to end and is only decrypted inside a hardware-isolated enclave. No one outside that enclave – not the cloud provider, not even us – can see prompts or results. This guarantee is verified through remote attestation, delivering cryptographic proof instead of policy promises.
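To give a feel for what this looks like in code: assuming an OpenAI-compatible chat endpoint served by a locally running proxy that handles the encryption and attestation for you, a request could look like the sketch below. The port, path, and model name are placeholders, not the documented values – check the API docs for the real ones.

```ts
// Hedged sketch of calling a confidential inference API through a local
// proxy. The proxy is assumed to encrypt traffic and verify the enclave
// via remote attestation before anything leaves the machine.
async function confidentialChat(prompt: string): Promise<string> {
  const response = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "example-model", // placeholder model identifier
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!response.ok) throw new Error(`Inference failed: ${response.status}`);
  const data = await response.json();
  return data.choices[0].message.content;
}
```

The point of the attestation step is that the client refuses to send anything until the enclave has proven, cryptographically, that it is running the expected code – so privacy does not depend on trusting the operator.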
That’s how browser “memory” should work: useful and persistent, yet opaque to everyone else. To demonstrate the concept, we built a small Chrome extension. It lets you chat with an LLM while keeping a local memory of visited pages. The LLM can then access that memory when responding to you, just like a human would recall prior context.
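As an illustration of the idea (not the extension’s actual code – that is linked below), a Manifest V3 background service worker could record each visited page into chrome.storage.local, so the memory never leaves the machine until the user asks a question:

```ts
// Simplified sketch: record a short text snippet of each visited page
// in local extension storage. Requires the "tabs", "scripting", and
// "storage" permissions plus host permissions in the manifest.
chrome.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
  if (changeInfo.status !== "complete" || !tab.url?.startsWith("http")) return;

  // Extract visible text from the page.
  const [{ result: text }] = await chrome.scripting.executeScript({
    target: { tabId },
    func: () => document.body.innerText.slice(0, 2000),
  });

  // Append to the local memory store; nothing is sent to any server here.
  const { memory = [] } = await chrome.storage.local.get("memory");
  memory.push({ url: tab.url, snippet: text, visitedAt: Date.now() });
  await chrome.storage.local.set({ memory });
});
```

Answering a question then reduces to recalling the relevant snippets from that local store and sending only those, plus the question itself, through the confidential API.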
Like all our client-side software, the extension is open source for full transparency. Developers can download it from GitHub, run it on their machine, and audit every line.
For now it’s only a demo. Our core product is the inference API itself, built for anyone who wants to integrate privacy-first AI into any app. If you’re building your own AI tool or integration, there’s no need to reinvent the security layer. Use our Confidential Inference API and keep your users’ data cryptographically safe.
If you’re not a developer but want a privacy-first AI browser extension, join our interest list. If there’s enough demand, we’ll scale development and turn the prototype into a full product.
Want a truly privacy-first AI browser extension to exist?
Leave your email. If enough of you do, we’ll bring this to production.