There’s a tension at the heart of building products with AI that nobody talks about enough: AI is fundamentally about sharing and connecting data, but some of the best products are built on the principle of not sharing data.
Calendrz, the calendar mirroring tool I’ve been building, is one of those products. Its core promise is that it syncs your availability across calendars without copying your event details. No titles. No attendees. No meeting links. Just lightweight “marker” events that say “this time is taken.”
Building that kind of privacy-first product with AI as your engineering co-pilot raises some genuinely interesting questions. Here’s what I’ve learned.
The Architecture Decision AI Helped Me Make
When I first built Calendrz back in 2020, the architecture was straightforward: read events from Calendar A, create corresponding blockers on Calendar B. Simple enough. But the decisions around what to include in those blockers — and what to exclude — were nuanced.
Fast-forward to 2025-2026, and I’m using AI agents (Claude Code, specifically) as my primary development partner. When I asked the agent to help redesign the mirroring logic, something interesting happened. Its first instinct was to copy event metadata — titles, descriptions, the works. That’s the pattern it had seen most in training data. Calendar sync tools typically copy everything.
I had to explicitly constrain it: “Markers must never contain the original event title, attendees, or description.” Once I set that constraint, the AI became remarkably good at engineering within it. It designed the marker event structure, the customisable summary macros ($domain, $email, $account_name), the visibility toggle between public and private markers — all while respecting the privacy boundary.
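To make the macro idea concrete, here is a minimal sketch of how that kind of summary expansion could work. The function name, account shape, and everything except the `$domain`/`$email`/`$account_name` macro names are my assumptions, not Calendrz's actual code:

```typescript
// Hypothetical sketch of summary-macro expansion. Note what it takes as
// input: only source-account metadata -- never the original event title.
interface SourceAccount {
  email: string;        // e.g. "jane@acme.com"
  accountName: string;  // the user's label for the connected account
}

function expandSummary(format: string, account: SourceAccount): string {
  const domain = account.email.split("@")[1] ?? "";
  return format
    .replace(/\$domain/g, domain)
    .replace(/\$email/g, account.email)
    .replace(/\$account_name/g, account.accountName);
}

// A private-marker summary that reveals only the source domain:
const summary = expandSummary("Busy ($domain)", {
  email: "jane@acme.com",
  accountName: "Work",
});
// summary === "Busy (acme.com)"
```

The design point is that the expansion function simply has no access to anything sensitive, so no format string a user types can pull event details into a marker.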
The lesson? AI is excellent at optimising within constraints. But it needs you to set the constraints. Left to its own devices, it will default to the most common pattern in its training data — which, for data handling, usually means “more is more.”
Privacy as an Architectural Constraint, Not a Feature
This is a distinction that matters enormously in the age of AI-assisted development.
If you treat privacy as a feature — something you bolt on at the end, like a settings toggle — AI will treat it that way too. It’ll write the full data pipeline first and then add a filter. The filter might have gaps. The data might be logged somewhere upstream. The architecture doesn’t enforce privacy; it merely permits it.
If you treat privacy as a constraint — a hard boundary that the system cannot violate by design — AI produces fundamentally different code. The data pipeline never has the sensitive information in the first place. There’s nothing to filter because there’s nothing to leak.
For Calendrz, this meant the marker event creation service never receives the original event title. It receives a time range, a busy/free flag, the source account metadata (for macros), and the user’s summary format preference. The architecture makes it impossible to leak event details, even accidentally, even if someone makes a mistake later.
AI helped me build this. But I had to frame the problem correctly.
When AI Reads Your Privacy-First Codebase
Here’s where things get interesting from an AI-in-SDLC perspective. Once the privacy-first architecture was established in the codebase, every subsequent AI interaction respected it. When I asked the agent to add new features — the MCP server for AI assistants, the Chrome extension, the out-of-office auto-decline — it consistently maintained the privacy boundary. Not because it “understood” privacy in a philosophical sense, but because the existing code patterns demonstrated it unambiguously.
AI agents learn your codebase conventions. If your conventions are good, the AI amplifies them. If your conventions are sloppy, the AI amplifies that too. The codebase becomes a force multiplier — for better or worse.
This is why I believe the first few hundred lines of code in any project matter more than ever. They set the architectural DNA that AI will replicate across the entire system.
The MCP Paradox: AI Accessing a Privacy-First Product
The most philosophically interesting challenge was building Calendrz’s MCP (Model Context Protocol) server. This lets AI assistants like Claude and ChatGPT read your calendar, trigger syncs, and manage preferences — by connecting directly to Calendrz.
So here’s the paradox: a privacy-first product that gives AI direct access to your data.
The resolution lies in the same principle: constraints. The MCP server exposes your own data back to you, through the AI assistant acting on your behalf. The OAuth tokens for your connected Google and Microsoft accounts are never exposed through MCP. Access tokens are short-lived (60-minute expiry), RSA-signed JWTs. The available tools are tier-gated — free users get read-only access. And the whole thing is stateless.
AI helped me design and implement these guardrails. Once I specified the security constraints, it produced a remarkably solid OAuth 2.1 implementation with automatic client registration, token rotation, and scope-based access control. The kind of security infrastructure that would have taken me weeks to build solo took days with AI assistance.
What I’d Tell Other Founders
If you’re building a product where privacy matters — and in 2026, that should be every product — here’s what I’ve learned about working with AI:
- Define privacy as a constraint, not a feature. Tell the AI “this service must never have access to X” rather than “add a toggle to hide X.”
- Establish the pattern early. The first privacy-respecting service you build becomes the template for everything that follows.
- Review the data flow, not just the code. AI-generated code can be syntactically perfect but architecturally wrong. Trace what data goes where.
- Test the boundaries. Ask the AI to add features that could violate privacy. If your architecture is right, it structurally can’t — and the AI will find the right approach within the constraints.
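As a concrete instance of that last point, a boundary test might assert that no marker can ever carry the source event’s title. Everything here — the request shape, the helper names — is a hypothetical sketch of the pattern, not Calendrz’s test suite:

```typescript
// Hypothetical boundary test. The marker builder's input type has no title
// field, so the only way to "leak" one would be to smuggle it into the
// summary format string -- which this check guards against.
interface MarkerRequest {
  start: Date;
  end: Date;
  summaryFormat: string;
}

function createMarker(req: MarkerRequest): { summary: string } {
  return { summary: req.summaryFormat };
}

function assertNoLeak(originalTitle: string, req: MarkerRequest): void {
  const marker = createMarker(req);
  if (marker.summary.includes(originalTitle)) {
    throw new Error("privacy boundary violated: title leaked into marker");
  }
}

// Passes: the summary is built only from the user's format preference.
assertNoLeak("Acme M&A negotiation", {
  start: new Date(),
  end: new Date(),
  summaryFormat: "Busy",
});
```

Tests like this are cheap to write and catch the failure mode that matters: sensitive data crossing the boundary, regardless of which code path carried it.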
Privacy and AI are not opposites. But they require intentional architecture. The good news is that AI is excellent at building within well-defined boundaries. You just have to draw the boundaries first.
Calendrz mirrors your calendar availability across Google and Microsoft accounts without sharing event details. It’s the product that taught me most of these lessons.