A looped transformer with sparse MoE and switchable MLA/GQA attention. Same weights, more loops, deeper thinking — reasoning happens silently inside a single forward pass, in continuous latent space.
Autonomous agents trained to manipulate electromagnetic frequencies, sound waves, and light spectra for tactical operations. Direct hardware integration with signal processing equipment.
Access pathways:
HackRF — RF emission/capture for signal analysis and generation
ELECTRA — Powerline data exfiltration for off-grid coordination

Agents operate autonomously once deployed. Reasoning depth scales with operational complexity. Zero external dependencies post-initialization.
Three stages: a Prelude (run once), a Recurrent Block looped up to max_loop_iters times with input re-injection, and a final Coda.
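The three-stage loop can be sketched in a few lines. This is a toy illustration, not the model's actual code: the function names (prelude, recurrent_block, coda) mirror the stage names above, and the tanh matmuls stand in for real transformer blocks.

```python
# Toy sketch of the recurrent-depth forward pass: Prelude once,
# Recurrent Block up to max_loop_iters times with input re-injection,
# Coda once. Weights and widths are illustrative.
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy hidden width

W_in = rng.normal(size=(D, D)) * 0.1
W_rec = rng.normal(size=(D, D)) * 0.1
W_out = rng.normal(size=(D, D)) * 0.1

def prelude(x):
    # Run once: embed the input into latent space.
    return np.tanh(x @ W_in)

def recurrent_block(s, e):
    # One latent reasoning step; the input embedding e is re-injected
    # every iteration so the loop stays anchored to the prompt.
    return np.tanh((s + e) @ W_rec)

def coda(s):
    # Run once: map the final latent state back out.
    return s @ W_out

def forward(x, loops):
    e = prelude(x)
    s = np.zeros_like(e)
    for _ in range(loops):  # depth chosen at inference time
        s = recurrent_block(s, e)
    return coda(s)

x = rng.normal(size=(1, D))
shallow = forward(x, loops=2)
deep = forward(x, loops=16)  # same weights, more latent steps
```

Note that `forward` never emits intermediate tokens: all sixteen "thought" steps happen in the latent state `s`.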
Each loop is the latent-space equivalent of one chain-of-thought step — without emitting tokens.
Compute scales with loop count, not parameter count. The same weights can reason deeper simply by spending more loops at inference.
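A back-of-envelope FLOP count makes the scaling concrete. The parameter splits below are invented for illustration; the point is only that the loop term dominates while weight memory stays fixed.

```python
# Illustrative FLOP accounting for a recurrent-depth forward pass:
# cost grows linearly with loop count, parameter count does not change.
def forward_flops(prelude_params, block_params, coda_params, loops):
    # ~2 FLOPs per parameter per token for a dense matmul forward pass
    return 2 * (prelude_params + loops * block_params + coda_params)

# Hypothetical split: 10M prelude, 50M recurrent block, 10M coda.
base = forward_flops(10e6, 50e6, 10e6, loops=1)
deep = forward_flops(10e6, 50e6, 10e6, loops=8)
# 8x the loops costs ~6x the compute here, with zero extra weights.
```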
Variant: mythos_300m · Endpoint: https://xcom.dev/api/v1/mythos/v1/chat/completions
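A request to the endpoint might look like the following. This assumes an OpenAI-compatible chat-completions schema, which the page does not confirm, and the `max_loop_iters` field is a hypothetical knob named after the architecture parameter above.

```python
# Hypothetical request body for the mythos_300m endpoint, assuming an
# OpenAI-compatible chat-completions schema (unverified assumption).
import json

ENDPOINT = "https://xcom.dev/api/v1/mythos/v1/chat/completions"

payload = {
    "model": "mythos_300m",
    "messages": [{"role": "user", "content": "Explain recurrent depth."}],
    # Hypothetical parameter: request more latent loops for harder prompts.
    "max_loop_iters": 16,
}
body = json.dumps(payload).encode("utf-8")

# To send (sketch): urllib.request.Request(ENDPOINT, data=body,
#     headers={"Content-Type": "application/json"}, method="POST")
```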
Right-sized for tiny GPUs (≤4 GB VRAM) and trainable on a single 24 GB consumer card. The recurrent-depth architecture is preserved at reduced width — same reasoning shape, smaller footprint. Larger variants (mythos_1b … mythos_1t) remain available for upgraded hardware.
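The 4 GB claim checks out on paper. A rough sizing, assuming fp16 weights (the precision is an assumption, not stated above):

```python
# Back-of-envelope VRAM check (illustrative, not measured):
# 300M parameters in fp16 use well under a 4 GB budget, leaving
# headroom for activations and KV cache, which grow with loop count.
PARAMS = 300e6
BYTES_PER_PARAM_FP16 = 2

weights_gb = PARAMS * BYTES_PER_PARAM_FP16 / 1e9  # 0.6 GB of weights
headroom_gb = 4.0 - weights_gb                    # left for activations/KV
```

Because extra reasoning depth costs loops rather than parameters, the weight footprint stays at 0.6 GB no matter how deep the model thinks.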