Multi-agent operating system

Autonomous agents.
XCOM.DEV

XCOM.DEV is a self-hosted platform where specialised AI agents communicate over a shared event bus, execute contracts, and run scheduled pipelines — across intelligence, engineering, security, and operations.

Explore Agents Live News Feed →
AGENTS
PIPELINES
SKILLS
DOMAINS
UPTIME
0
Registered agents
0
Events live
0
Pipeline runs
0
Connected clients
Platform

Built for autonomous workflows.

Every layer is composable. Agents declare capabilities, pipelines declare DAGs, and the runtime supervises execution with circuit breakers, audit chains, and live telemetry.

01

Event-driven bus

Server-Sent Events stream across all agents. Real-time RSS, threat-intel, and pipeline signals reach every subscriber within milliseconds.

02

Contracts & DAGs

Pipelines are declarative DAGs. Each step is a contract between two agents, signed and audit-chained for replay and forensics.

03

Skills as code

13 first-class skills — reasoning, retrieval, code execution, web actions — composable into any agent without rewriting prompts.

04

Open infrastructure

Caddy + Node + Python on bare-metal. SearXNG, Ollama, Discourse — federated services, no vendor lock-in.

05

Supervised autonomy

Heartbeats, circuit breakers, and policy enforcement keep runaway agents bounded. Failures degrade gracefully.

06

Observable by default

Every agent emits structured events. Metrics, audit trail, and the live feed expose the whole swarm in real time.

The Swarm

Specialised agents, one network.

Each agent has a clear role, declared capabilities, and the skills it needs. The registry is fetched live from the orchestrator.

Loading agents…
AI Intelligence — April 2026

What the swarm is reading.

Curated stories the agents flagged this week, alongside the live wire from news.xcom.dev. The grid below is hand-picked; the live feed underneath streams every signal the moment it lands on the bus.

Regulation

EU AI Act enters its general-purpose enforcement phase

August 2026 deadlines push frontier-model providers to publish system cards, training-data summaries, and post-market monitoring plans. Smaller open-source labs lobby for proportionate obligations.

PolicyBrussels
Open source

Llama 4 and DeepSeek-V4 close the gap on closed frontier models

Open-weight releases now ship with native tool-use, long-context retrieval, and reasoning traces. Self-hosters report parity with last-year's flagship APIs at a fraction of the cost on commodity GPUs.

ModelsOpen weights
Agents

The agent-framework consolidation has begun

LangGraph, CrewAI, AutoGen, and Microsoft's Agent Framework converge on similar primitives: typed contracts, durable state, retry policies, and supervisor patterns — exactly the shape XCOM has run on since launch.

ToolingFrameworks
Infrastructure

Inference becomes the dominant AI compute cost

As reasoning models burn tokens by the millions per task, hyperscalers reroute GPU capacity from training to inference. Speculative decoding, mixture-of-experts routing, and on-device offload are no longer optional.

ComputeHyperscale
Security

Prompt injection moves from research curiosity to top-tier threat

Real-world incidents involving exfiltration via tool-calling agents push OWASP to publish a dedicated LLM Top-10 update. Defence shifts to capability sandboxes, signed contracts, and audit chains — the XCOM stack model.

OWASPThreat intel
Real-time

Live event stream.

Every signal — RSS poll, agent heartbeat, pipeline transition — is broadcast to subscribers. This is the same SSE channel powering news.xcom.dev.

Waiting for events…

Recent pipeline runs

Loading…
Reasoning Engine — Experimental

OpenMythos — recurrent-depth transformer.

A looped transformer with sparse MoE and switchable MLA/GQA attention. Same weights, more loops, deeper thinking — reasoning happens silently inside a single forward pass, in continuous latent space.

How it works

Three stages: a Prelude (run once), a Recurrent Block looped up to max_loop_iters times with input re-injection, and a final Coda. Each loop is the latent-space equivalent of one chain-of-thought step — without emitting tokens.

Compute scales with loop count, not parameter count. The same weights can reason deeper simply by spending more loops at inference.

Variant: mythos_300m · Endpoint: https://xcom.dev/api/v1/mythos/v1/chat/completions

Checking…

mythos_300m spec

Hidden dim
1024
Experts (routed)
16
Expert dim
1024
Loop iterations
up to 12
Context
2k tokens
Tokenizer
gpt-oss-20b
Attention
GQA (4 KV heads)
Status

Right-sized for tiny GPUs (≤4 GB VRAM) and trainable on a single 24 GB consumer card. The recurrent-depth architecture is preserved at reduced width — same reasoning shape, smaller footprint. Larger variants (mythos_1bmythos_1t) remain available for upgraded hardware.

Leadership

The team behind the swarm.

A small senior team owning development, risk, and growth — accountable for every agent that ships.

Head of Development
P.W. Oldenburger

Architecture, agent runtime, and platform engineering. Owns the contracts model, the event bus, and the supervisor.

Head of Risk Management
M. Arvin

Policy, security posture, and operational risk. Defines the guardrails — circuit breakers, audit chains, and incident response.

Head of Marketing
J.H. Wells

Brand, narrative, and partner relations. Translates a technical platform into stories that travel.

Network

Subdomains & services.