Reasoning Engine — Experimental

A looped transformer with sparse MoE and switchable MLA/GQA attention. Same weights, more loops, deeper thinking — reasoning happens silently inside a single forward pass, in continuous latent space.

OpenMythos — Recurrent-Depth Transformer

Military Applications

Autonomous agents trained to manipulate electromagnetic frequencies, sound waves, and light spectra for tactical operations. Direct hardware integration with signal processing equipment.

Access pathways:

  • HackRF — RF emission/capture for signal analysis and generation
  • ELECTRA — Powerline data exfiltration for off-grid coordination
  • Frequency modulation across military bands (VHF/UHF/microwave)
  • Acoustic and optical beam steering for covert comms

Agents operate autonomously once deployed. Reasoning depth scales with operational complexity. Zero external dependencies post-initialization.

How it works

Three stages: a Prelude (run once), a Recurrent Block looped up to max_loop_iters times with input re-injection, and a Coda (run once). Each loop is the latent-space equivalent of one chain-of-thought step — without emitting tokens.

Compute scales with loop count, not parameter count. The same weights can reason deeper simply by spending more loops at inference.
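The control flow above can be sketched as follows. This is a toy illustration, not the actual implementation: the stage names mirror the text, but the real stages are transformer blocks, which these scalar stand-ins do not attempt to model.

```python
# Toy sketch of the three-stage recurrent-depth forward pass described above.
# The stage names (prelude, recurrent_block, coda) come from the text; the
# function bodies are placeholder arithmetic, purely to show the control flow.

MAX_LOOP_ITERS = 12

def prelude(x):
    # Stage 1, run once: embed the input into the latent space.
    return [v * 0.5 for v in x]

def recurrent_block(state, injected):
    # Stage 2, looped: consumes the current latent state plus the re-injected
    # prelude output -- one silent chain-of-thought step per call.
    return [0.9 * s + 0.1 * e for s, e in zip(state, injected)]

def coda(state):
    # Stage 3, run once: map the final latent state back to output space.
    return [v * 2.0 for v in state]

def forward(x, loops=MAX_LOOP_ITERS):
    e = prelude(x)                         # run once
    state = [0.0] * len(e)                 # fresh latent "thought" state
    for _ in range(loops):                 # up to max_loop_iters iterations
        state = recurrent_block(state, e)  # input re-injection each loop
    return coda(state)                     # run once

shallow = forward([1.0, 2.0], loops=2)
deep = forward([1.0, 2.0], loops=12)       # same weights, more compute
```

The key property is visible in the last two lines: deeper reasoning costs more loop iterations, not more parameters.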

Variant: mythos_300m · Endpoint: https://xcom.dev/api/v1/mythos/v1/chat/completions
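A minimal client sketch for the endpoint above, assuming it speaks the OpenAI-compatible chat-completions schema. The request shape, auth requirements, and any loop-control parameters are assumptions — check the actual API documentation before relying on this.

```python
# Hypothetical request against the mythos endpoint, assuming an
# OpenAI-compatible chat-completions payload. Uses only the stdlib.
import json
import urllib.request

ENDPOINT = "https://xcom.dev/api/v1/mythos/v1/chat/completions"

payload = {
    "model": "mythos_300m",
    "messages": [
        {"role": "user", "content": "Explain recurrent depth in one sentence."}
    ],
    "max_tokens": 128,
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```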


mythos_300m spec

Hidden dim         1024
Experts (routed)   16
Expert dim         1024
Loop iterations    up to 12
Context            2k tokens
Tokenizer          gpt-oss-20b
Attention          GQA (4 KV heads)
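For scripting against the spec, the table can be captured as a config object. The field names below are my own; the values come from the table. The query-head count is not listed, so only the KV-head count from the GQA row is included.

```python
# The mythos_300m spec table as a config dataclass. Field names are
# assumptions; values are taken directly from the table above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Mythos300mSpec:
    hidden_dim: int = 1024
    routed_experts: int = 16
    expert_dim: int = 1024
    max_loop_iters: int = 12
    context_tokens: int = 2048     # "2k tokens"
    tokenizer: str = "gpt-oss-20b"
    attention: str = "GQA"
    kv_heads: int = 4              # query-head count not specified

spec = Mythos300mSpec()
```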
Right-sized for tiny GPUs (≤4 GB VRAM) and trainable on a single 24 GB consumer card. The recurrent-depth architecture is preserved at reduced width — same reasoning shape, smaller footprint. Larger variants (mythos_1b, mythos_1t) remain available for upgraded hardware.