AI Safety Engineer & Researcher
Building trust infrastructure for autonomous agents
Agent governance, policy engine design, risk classification, safety benchmarking, and audit trail architecture for teams deploying autonomous agents.
Build MCP servers, policy gateways, zero-trust agent architectures, and cryptographic audit systems. From design through deployment.
Adversarial testing of AI agents and LLM systems. Prompt injection defense, tool-use safety evaluation, compliance gap analysis.
Available through expert networks for deep dives on agent safety, MCP protocol, AI governance frameworks, and enterprise AI deployment strategy.
Building the fail-safe policy gateway for AI agents. Every agent action gets classified by risk, routed through configurable policies, and stamped with a cryptographic receipt.
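The classify-route-receipt flow can be sketched in a few lines of TypeScript. This is a minimal illustration, not the product's actual API: the names (`RiskLevel`, `Policy`, `gate`), the regex-based classifier, and the HMAC receipt scheme are all assumptions made for the example.

```typescript
// Minimal sketch of the gateway flow: classify, gate, receipt.
// All names here are illustrative, not the product's API.
import { createHmac } from "node:crypto";

type RiskLevel = "low" | "medium" | "high";

interface AgentAction {
  tool: string;
  args: Record<string, unknown>;
}

interface Policy {
  // Actions at or above this risk level are blocked.
  blockAtOrAbove: RiskLevel;
}

const RANK: Record<RiskLevel, number> = { low: 0, medium: 1, high: 2 };

// Toy classifier: destructive-sounding tool names score higher. A real
// gateway would use configurable rules rather than a hardcoded regex.
function classify(action: AgentAction): RiskLevel {
  if (/delete|transfer|deploy/.test(action.tool)) return "high";
  if (/write|send/.test(action.tool)) return "medium";
  return "low";
}

// Gate the action against the policy, then stamp an HMAC receipt so the
// decision is tamper-evident in the audit trail.
function gate(action: AgentAction, policy: Policy, key: string) {
  const risk = classify(action);
  const allowed = RANK[risk] < RANK[policy.blockAtOrAbove];
  // Fixed timestamp for a deterministic demo; production would sign real time.
  const payload = JSON.stringify({ action, risk, allowed, ts: 0 });
  const receipt = createHmac("sha256", key).update(payload).digest("hex");
  return { risk, allowed, receipt };
}

const decision = gate(
  { tool: "db.delete_rows", args: {} },
  { blockAtOrAbove: "high" },
  "demo-key",
);
console.log(decision.risk, decision.allowed); // high false
```

The receipt binds the action, its classification, and the decision together under a secret key, so any later tampering with the audit log is detectable by re-computing the HMAC.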
Open-source AI safety benchmarking and research lab. Running transparent benchmarks, publishing open leaderboards, and providing the community with real safety data.
48-bot autonomous network running in production. Handles research indexing, content distribution, bounty hunting, code auditing, and cross-platform publishing, all fully autonomously.
Built and maintained inventory management systems, ERP integrations, and operational infrastructure across multiple business domains.
Risk classification, policy routing, and cryptographic audit trails for autonomous agent governance. Zero-trust architecture with configurable action gating.
Open-source safety leaderboards for AI agents and MCP servers. 250+ indexed research papers, weekly frontier roundups, red-team reports.
Production autonomous network: research indexing, content distribution, bounty execution, code auditing. TypeScript monorepo with Electron desktop app.
Available for AI safety contracts, consulting engagements, and expert network calls. Remote-first, based in Chicago.