Every team's AI is building from a different version of reality.

Your teams make decisions every day — product, architecture, compliance, design. Those decisions are scattered across wikis, repos, Slack, and people's heads. Your AI tools build from whatever they find. ArcticRex captures the real decisions and delivers them to every tool.


Scan

Here's what your AI tools are actually building from. Contradictions nobody noticed, decisions that expired, questions nobody answered.

ArcticRex — Context Scan
Confluence · 127 pages
Notion · 34 docs
GitHub · 12 repos, 6 CLAUDE.md
Figma · design system
Slack · 4 channels
Jira · backlog
847 items indexed
23 contradictions
14 stale
8 undecided
PII display: compliance wiki says mask all personal data. Product spec shows full name, email, phone on profile. Design system has no masking patterns.
Auth flow: security policy requires MFA. Onboarding spec says “frictionless single-step signup.” Nobody decided which wins.
API versioning: architecture doc says v2 only. Three CLAUDE.md files still reference v1 endpoints.
…and 20 more across product, design, compliance, and architecture.

Curate

ArcticRex routes each contradiction to the right person. They make the call, ArcticRex captures it as a governed decision.

ArcticRex · PII display rules · 3 sources conflict

ArcticRex: I found a conflict around PII display. Compliance wiki says mask everything, but the product spec shows full name, email, and phone on user profiles. What should the rule be?

Anna Kovacs: Mask in shared views — lists, admin, exports. Users can see their own data in full. Emails: show first three chars plus domain.

ArcticRex: Got it. I've captured that as a decision and flagged the product spec and design system for update.

✓ pii-display-rules.md created

Govern

Every decision is now a governed artifact — owned, versioned, and auditable. Not a wiki page someone might update. A source of truth with accountability.

ArcticRex — Decision Registry
decisions/compliance/pii-display-rules.md · Current
PII Display Rules
Mask PII in all shared views. Users see their own data in full. Shared contexts — lists, admin panels, exports — show masked values only. Email: first three characters plus domain.
Owner: Anna Kovacs (Compliance)
Version: v1 — 10 Mar 2026
Applies to: Product, Design, Architecture
Review due: 10 Jun 2026
History
v1 created — resolved contradiction across compliance wiki, product spec, and design system. Anna Kovacs decided. Product and design specs flagged for update.
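The registry entry above can live as a plain file in the repo. A minimal sketch of what such a decision artifact might look like as markdown with YAML frontmatter — the field names mirror the entry shown here, but the exact schema is illustrative, not ArcticRex's published format:

```markdown
---
id: pii-display-rules
owner: Anna Kovacs (Compliance)
version: v1
decided: 2026-03-10
applies_to: [product, design, architecture]
review_due: 2026-06-10
status: current
---

# PII Display Rules

Mask PII in all shared views. Users see their own data in full.
Shared contexts — lists, admin panels, exports — show masked values only.
Email: first three characters plus domain.

## History

- v1 created — resolved contradiction across compliance wiki, product
  spec, and design system. Anna Kovacs decided.
```

Because it is a file, the decision gets everything files already have: version history, ownership via review, and machine-readability for the tools that consume it.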

Evolve

When the world changes — new regulation, new architecture — ArcticRex flags affected decisions and proposes updates for the right owner to approve.

ArcticRex · Policy change: PII audit logging

ArcticRex: New policy requires audit logging for PII access. Your masking rules don't cover this yet. Here's a proposed update:

pii-display-rules.md
  Mask PII in all shared views
+ Log all PII access: user, timestamp, context
+ Retention: 12 months minimum for audit

Anna Kovacs: Approved. Add that logging applies to API access too, not just UI.

ArcticRex: Updated. All connected AI tools will pick up the new rules automatically.

✓ pii-display-rules.md → v2

Align

Now every AI tool builds from the same decisions, delivered via MCP. No drift between teams. No one building from stale context.

~/admin-panel
> Build the customer list page.

[tool] arcticrex:get_decisions — query: “customer list, PII, display”

[claude] I'll build that with masked PII. Your compliance rules require masking in shared views — names visible, emails show first 3 chars plus domain (ann***@acme.com), no phone numbers. Adding audit log entry per row access.

▶ pii-display-rules.md v2 via MCP · ArcticRex
claude-opus-4-6 · admin-panel · 1 tool call · 3 decisions loaded

Delivered via MCP to any AI tool — coding assistants, autonomous agents, whatever comes next.
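Under the hood, an MCP tool call like the one in the transcript is a JSON-RPC 2.0 request. A minimal sketch of the request an MCP client would send — the `get_decisions` tool name and its argument shape come from the transcript above and are assumptions, not a published ArcticRex schema:

```python
import json

def build_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 request an MCP client sends to invoke a server tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # MCP's standard method for tool invocation
        "params": {"name": tool, "arguments": arguments},
    })

# The get_decisions call from the transcript above:
request = build_tool_call(
    "get_decisions",
    {"query": "customer list, PII, display"},
)
print(request)
```

Any MCP-capable client — a coding assistant, an autonomous agent — speaks this same wire format, which is why one decision registry can feed all of them.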

Case study — F-Secure

One project, two attempts. Twenty people with every AI tool and no shared context spent a year and never shipped. Five people with governed context shipped in six months.

Both teams had the same AI tools. The difference was that the second team's tools all worked from the same decisions — what to build, how to build it, what constraints applied. No drift between teams, no rework, no surprises.

About the founders →

We hear this a lot

“Claude or OpenAI will add this. We'll wait.”

They will — for their model. But you're going to use different models for different jobs, and you're going to switch providers as the landscape evolves. Your organisational context needs to work across all of them.

“We're managing with CLAUDE.md files.”

Great starting point. What gets harder at scale is the process: which decisions are current, what conflicts exist across repos, how updates propagate. The format is solved. The alignment process is the hard part.
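One concrete piece of that process — knowing which decisions are current — falls out of structured decision files almost for free. A minimal sketch, assuming each decision carries a review date like the registry entry shown earlier (the file names and dates here are illustrative):

```python
from datetime import date

# Each record mirrors the metadata on a decision file; in practice these
# would be parsed from the files themselves, not hard-coded.
decisions = [
    {"file": "pii-display-rules.md", "version": "v2", "review_due": date(2026, 6, 10)},
    {"file": "api-versioning.md", "version": "v1", "review_due": date(2025, 1, 15)},
]

def stale(decisions: list[dict], today: date) -> list[str]:
    """Return the decision files whose review date has passed."""
    return [d["file"] for d in decisions if d["review_due"] < today]

print(stale(decisions, date(2026, 3, 10)))  # → ['api-versioning.md']
```

A pile of hand-maintained CLAUDE.md files has no equivalent check; nothing tells you which of them quietly went stale.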

“We're running POCs. We'll work this out ourselves.”

POCs help each team figure out how to work with AI. But each team figures it out independently. Ten successful pilots can still leave you with ten teams building from different assumptions.

Today your teams can brief their AI tools manually. When agents work autonomously, that's not an option. Your decisions either reach every tool automatically, or every tool builds from its own version of reality.

Governed · Versioned · Auditable · Enterprise SSO · ISO 27001 in progress · SOC 2 in progress · Details →

Let's talk.

We work with a small number of design partners.

Book a call