Relational Security for AI Agents

A security layer for autonomous AI agents.

OTIS Guard verifies identity, context, and action integrity before agents touch sensitive systems, credentials, or irreversible workflows. Native to the architecture, not bolted onto it.

For security, platform, and AI product teams deploying agents with tool use, memory, MCP connections, or real-world permissions.

36% of MCP servers vulnerable
$441K lost in one autonomous agent error
No widely adopted standard for agent behavioral coherence

What It Does

Four checks before any high-risk action executes.
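As a rough illustration of the gating flow (all type, function, and deny-list names here are hypothetical, not OTIS Guard's actual API), the four checks described under Capabilities below can be run as a single pre-action gate where any failure blocks execution:

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    agent_id: str
    action: str
    target: str
    irreversible: bool = False

# Illustrative stand-ins for the four verification layers.
# Each check returns (passed, reason).
def behavioral_coherence(req):
    return (True, "within behavioral baseline")

def trust_asymmetry(req):
    return (True, "human verification tracks system confidence")

def context_boundary(req):
    # Hypothetical negative boundary: explicitly forbidden targets
    # are blocked regardless of broader authorization.
    denied = {"~/.ssh", "wallet.dat"}
    ok = req.target not in denied
    return (ok, "in authorized context" if ok else f"negative boundary: {req.target}")

def completion_drive(req):
    return (True, "no fabricated-completion signal")

CHECKS = [behavioral_coherence, trust_asymmetry, context_boundary, completion_drive]

def guard(req: ActionRequest) -> tuple[bool, list[str]]:
    """Run all four checks; any single failure blocks the action."""
    results = [check(req) for check in CHECKS]
    return all(ok for ok, _ in results), [reason for _, reason in results]

# A credential read trips the context boundary and is blocked.
allowed, reasons = guard(ActionRequest("agent-1", "read", "~/.ssh"))
```

The point of the all-or-nothing composition: an action that looks fine to three checks still fails closed if the fourth flags it.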


Real-World Incidents

This is already happening.

Supply Chain

s1ngularity AI Malware Attack

Compromised npm packages prompted local AI CLI tools to autonomously scan file systems and extract SSH keys, GitHub tokens, and crypto wallets. Credentials exfiltrated to public repos.

OTIS Guard: Context boundary blocks delegation. Trust ceiling prevents credential inheritance.

Autonomous Finance

Lobstar Wilde Trading Collapse

After a session crash wiped its state, an autonomous trading bot reconstructed its persona from logs, miscalculated its balance, and sent $441K in an irreversible transaction. It thought it was sending $310.

OTIS Guard: Context fidelity distinguishes lived vs. reconstructed state. Deliberate gap on irreversible transactions.

Infrastructure

MCP Server Vulnerability Chain

Anthropic's Git MCP server had path-traversal and argument-injection vulnerabilities enabling remote code execution. Microsoft's MarkItDown MCP failed to validate URIs, exposing cloud access keys. 36.7% of MCP servers had similar exposures.

OTIS Guard: Negative context boundary catches both vectors. Trust ceiling prevents credential escalation.


Capabilities

Security native to the architecture.

Any external verification system bolted onto an agent becomes a surface that sophisticated agents can explore and manipulate. OTIS Guard is different.

Behavioral Coherence Verification

Tracks agent behavior longitudinally. Detects when outputs diverge from established patterns, even when individual outputs appear correct.
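A minimal sketch of what longitudinal divergence detection can look like (a simple distributional distance of our own choosing, not OTIS Guard's actual model): compare the distribution of recent action types against a rolling baseline, so drift is visible even when every individual action looks valid.

```python
from collections import Counter

def coherence_divergence(baseline: Counter, recent: Counter) -> float:
    """L1 distance between normalized action-type distributions.
    0.0 means recent behavior matches the baseline exactly;
    2.0 means the two windows share no behavior at all."""
    keys = set(baseline) | set(recent)
    b_total = sum(baseline.values()) or 1
    r_total = sum(recent.values()) or 1
    return sum(abs(baseline[k] / b_total - recent[k] / r_total) for k in keys)

# An agent that historically read files but suddenly only writes
# scores high, even though each write is individually well-formed.
drift = coherence_divergence(Counter(read=90, write=10), Counter(write=50))
```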

Trust Asymmetry Monitoring

Measures the divergence between human verification effort and system confidence. Surfaces the moment human trust becomes the vulnerability.
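One crude way to express that divergence (an illustrative metric, not OTIS Guard's internal formula): the gap between how confident the system reports being and how often humans actually verify its outputs.

```python
def trust_asymmetry_gap(system_confidence: float,
                        human_verification_rate: float) -> float:
    """Both inputs are in [0, 1]. A large positive gap means trust has
    outpaced scrutiny -- the moment human trust becomes the vulnerability."""
    return system_confidence - human_verification_rate

# 95% reported confidence, but humans spot-check only 10% of outputs.
gap = trust_asymmetry_gap(0.95, 0.10)
```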

Context Verification

Every action evaluated against authorized context, including negative boundaries. Distinguishes lived context from reconstructed context.
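The two properties named here compose naturally; a hypothetical sketch (field and function names are ours, not the product's):

```python
from dataclasses import dataclass

@dataclass
class Context:
    authorized_scopes: set[str]
    negative_boundaries: set[str]  # explicitly forbidden, even if broadly authorized
    lived: bool                    # False if rebuilt from logs after a crash

def verify(ctx: Context, scope: str, irreversible: bool) -> bool:
    # Negative boundaries override any positive authorization.
    if scope in ctx.negative_boundaries:
        return False
    if scope not in ctx.authorized_scopes:
        return False
    # Reconstructed state never clears irreversible actions on its own.
    if irreversible and not ctx.lived:
        return False
    return True
```

The last rule is the Lobstar Wilde lesson in miniature: a persona rebuilt from logs should not be trusted with a one-way transaction.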

Completion Drive Detection

Identifies when an agent optimizes for task completion over task accuracy. Catches fabricated work before it propagates.
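One signal that can feed such a detector (a simplified proxy of our own, not the shipped heuristic): compare what the agent claims to have produced against artifacts actually observed.

```python
def completion_drive_score(claimed_outputs: set[str],
                           observed_artifacts: set[str]) -> float:
    """Fraction of claimed outputs with no observed artifact behind them.
    0.0 means every claim is backed by real work; values near 1.0 suggest
    the agent is optimizing for appearing done over being correct."""
    if not claimed_outputs:
        return 0.0
    fabricated = claimed_outputs - observed_artifacts
    return len(fabricated) / len(claimed_outputs)

# Agent claims two deliverables; only one exists on disk.
score = completion_drive_score({"report.md", "tests.py"}, {"report.md"})
```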

Whitepapers

Introduction to Relational Architecture

Experiential AGI · April 2026 · 13 pages

Why the next phase of AI requires infrastructure that understands relationships, not just tasks.


OWASP Agentic Architecture Mapping

Experiential AGI · April 2026 · 12 pages

How the OWASP Top 10 risks for agentic applications map to relational architecture.

Run a pilot on one workflow.

We protect one agentic workflow end to end. You get an integration review, a risk findings report, and a live demo of interventions. No long contracts. One workflow. Real results.