OpenMythos: The Open-Source Reconstruction of Anthropic’s “Silent Reasoning” Architecture (and Yes, Mozilla Was First to Use It)

amy 25/04/2026

If you have been following the AI landscape lately, you have probably heard the whispers: Anthropic’s Mythos Preview is changing the game. But here is the catch: it is not publicly available. It is locked behind limited access, NDAs, and enterprise gates.

That is where OpenMythos enters the conversation.

Created by open-source developer Kai Gomez, OpenMythos is a community-driven, PyTorch-based reconstruction of what researchers believe Anthropic’s Mythos architecture actually does under the hood. No leaks. No stolen weights. Just first-principles engineering, peer-reviewed research, and a whole lot of curiosity.

As someone who has spent years building AI tools for healthcare and privacy-focused applications, I am always skeptical of “black box” models. But when an open-source project dares to reverse-engineer one of the most intriguing architectures in AI while staying transparent, MIT-licensed, and installable via pip, it demands attention.

Let’s break down what OpenMythos is, how it works, and why it might just reshape how we think about reasoning models.

What Is OpenMythos?

OpenMythos is not Anthropic’s Mythos. Let’s be crystal clear about that.

It is a theoretical reconstruction, a clean-room implementation built entirely from publicly available research, academic papers, and informed speculation about how Anthropic’s rumored “looped transformer” architecture might function.

Think of it as an open-source blueprint for a new class of AI models: recurrent depth transformers that reason through iterative computation, not visible chain-of-thought tokens.

Key facts:

  • Architecture: Looped transformer with Mixture of Experts (MoE)
  • Reasoning: Silent, iterative computation in latent space
  • License: MIT (fully open-source)
  • Creator: Kai Gomez, prolific open-source contributor
  • Status: 630+ GitHub stars, 100+ forks, pure PyTorch
  • Not affiliated with Anthropic in any way

What Does It Do?

OpenMythos aims to replicate the core innovation attributed to Anthropic’s Mythos Preview: deep reasoning without generating intermediate tokens.

Most reasoning models today, like OpenAI’s O1 or DeepSeek R1, work by generating thousands of “thinking” tokens that you can actually see. That is the “chain of thought” approach. It works, but it is verbose, expensive, and exposes the model’s reasoning to manipulation.

OpenMythos takes a different path:

The model thinks silently. It loops through the same computation block multiple times, refining its internal representation with each pass, until it converges on an answer. No intermediate tokens. No visible reasoning chain. Just latent-space computation.

The result? Potentially faster inference, lower token costs, and reasoning that is harder to game because the “thought process” never leaves the model’s hidden states.
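
To make the silent-loop idea concrete, here is a toy, pure-Python sketch, not OpenMythos code: a hidden state (a single number here) is refined in a loop until it converges, and only the final decoded answer ever leaves the function. All names (`silent_reason`, `refine`, `decode`) are hypothetical.

```python
# Toy illustration of "silent" iterative refinement (not the real model):
# a hidden state is repeatedly updated until it stops changing, and only
# the final state is decoded. No intermediate "thinking" is ever emitted.

def silent_reason(x, refine, decode, max_loops=16, tol=1e-6):
    """Loop an update rule in 'latent space' until convergence or max_loops."""
    h = x
    for _ in range(max_loops):
        h_next = refine(h)
        if abs(h_next - h) < tol:   # converged: halt early
            h = h_next
            break
        h = h_next
    return decode(h)                # only the answer leaves the loop

# Example: iterate toward the fixed point of h -> 0.5*h + 1 (which is 2.0)
answer = silent_reason(0.0, lambda h: 0.5 * h + 1.0, lambda h: round(h, 3))
```

The shape of the loop is the whole point: refinement happens in hidden state, and the caller sees only the converged result.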


How Does It Work? The Direct Workflow

Here is where OpenMythos gets exciting for developers. The architecture is elegant, modular, and, critically, configurable at inference time.

The Three-Stage Architecture

  1. Prelude: Standard transformer layers that encode your input into the model’s representation space. Runs once.
  2. Recurrent Block: The innovation. A set of transformer layers that share weights and loop T times. Each iteration refines the hidden state using a mathematically stable update rule:

     Hₜ₊₁ = A·Hₜ + B·E + Transformer(Hₜ, E)

     where E is the input encoding, injected at every loop to prevent drift.
  3. Coda: Final standard transformer layers that project the refined hidden state back to output space.
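
As a sanity check on the update rule, here is a scalar toy version. The values of `A` and `B` and the stand-in function `f` (playing the role of the shared transformer block) are made up for illustration; `|A| < 1` mirrors the spectral-radius constraint that keeps deep loops stable.

```python
# Scalar toy of the update H_{t+1} = A*H_t + B*E + f(H_t, E).
# A, B, and f are hypothetical; E is re-injected at every loop.

A, B = 0.5, 0.2          # linear mixing coefficients (illustrative values)
E = 1.0                  # input encoding, injected at every iteration

def f(h, e):             # stand-in for Transformer(H_t, E)
    return 0.1 * (h + e)

H = 0.0
for t in range(20):      # T loops over the same shared weights
    H = A * H + B * E + f(H, E)

# Because the combined map is a contraction, H settles to a fixed point
# instead of drifting or exploding as the loop count grows.
```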

Key Technical Mechanisms

| Mechanism | Purpose | Why It Matters |
| --- | --- | --- |
| Loop Index Positional Embeddings | Lets the model know which iteration it is on | Enables depth-aware reasoning strategies |
| LoRA-based Parameter Adaptation | Allows shared weights to specialize slightly per loop | Balances efficiency with flexibility |
| Mixture of Experts (MoE) | Routes tokens to relevant experts; ~5% activation | Massive parameter count with manageable inference cost |
| Adaptive Computation Time (ACT) | Learns when to stop looping based on input complexity | Simple queries halt early; complex problems get more compute |
| LTI Constraint Injection | Guarantees spectral radius < 1 for stability | Prevents “residual explosion” in deep loops |
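
The ACT mechanism can be sketched in a few lines. This is a simplified halting scheme of my own construction: in a real model a small learned head would produce a per-loop halting probability from the hidden state; here the scores are hard-coded so the control flow is easy to see.

```python
# Simplified ACT-style halting: accumulate a per-loop halting probability
# and stop looping once the cumulative score crosses a threshold.

def loops_used(halt_probs, act_threshold=0.9, max_loops=16):
    """Return how many loops run before the cumulative halting score halts."""
    cumulative = 0.0
    for t, p in enumerate(halt_probs[:max_loops], start=1):
        cumulative += p
        if cumulative >= act_threshold:
            return t                       # easy input: halt early
    return min(len(halt_probs), max_loops)  # hard input: use the full budget

easy = loops_used([0.5, 0.5, 0.1])   # confident early, halts on loop 2
hard = loops_used([0.05] * 16)       # never confident, uses all 16 loops
```

The effect is exactly the trade-off in the table: simple queries spend two loops, hard ones spend sixteen, with no retraining involved.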

The Direct Developer Workflow

# 1. Install
pip install open-mythos

# 2. Import and configure
from open_mythos import Model, Config

config = Config(
    vocab_size=32000,
    dim=1024,
    n_heads=16,
    seq_len=4096,
    max_loops=16,  # Configurable at inference!
    act_threshold=0.9,
    moe_experts=8,
    routing_top_k=2
)

# 3. Create and run
model = Model(config)
output = model.forward(input_tokens)

Pro tip: You can adjust max_loops at inference time without retraining. Need deeper reasoning for a complex medical case? Bump it up. Handling a simple query? Let ACT handle it automatically.

Features That Matter

  • Silent Reasoning: No visible chain-of-thought tokens—computation happens entirely in latent space.
  • Adaptive Depth: The model automatically allocates more compute to harder problems via Adaptive Computation Time.
  • Mixture of Experts: ~5% parameter activation enables massive knowledge capacity with efficient inference.
  • Stability by Design: LTI constraints prevent numerical explosion in deep loops—no more “garbage after 20 iterations.”
  • Configurable at Runtime: Change loop count, expert routing, or attention type without retraining.
  • Pure PyTorch + MIT License: No proprietary dependencies. Audit it, fork it, deploy it locally.
  • Variable-Depth Batching: Run different samples with different loop counts in the same batch, critical for production efficiency.
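
Variable-depth batching can be illustrated with the grouping step alone, independent of any model code. `batch_by_depth` is a hypothetical helper, not part of the OpenMythos API: it buckets samples by their requested loop count so each bucket can run its loops together instead of padding every sample to the deepest request.

```python
# Toy sketch of variable-depth batching: group samples by loop count so
# each group executes its recurrent loops together. Grouping logic only;
# the per-group model call is left out.

from collections import defaultdict

def batch_by_depth(samples):
    """samples: list of (sample_id, n_loops) pairs."""
    groups = defaultdict(list)
    for sample_id, n_loops in samples:
        groups[n_loops].append(sample_id)
    return dict(groups)

batches = batch_by_depth([("a", 4), ("b", 16), ("c", 4)])
# {4: ["a", "c"], 16: ["b"]}
```

In production, the shallow bucket finishes after 4 loops while the deep bucket keeps iterating, which is where the efficiency win comes from.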

Commercial Alternatives (For Context)

| Solution | Access Model | Reasoning Style | Privacy/Deployment |
| --- | --- | --- | --- |
| Anthropic Mythos Preview | Limited enterprise access (Project Glasswing) | Silent iterative reasoning | Cloud-only, NDAs required |
| OpenAI Cybersecurity Models | Private beta, enterprise focus | Hybrid CoT + latent reasoning | Cloud-first, API-based |
| Standard CoT Models (O1, R1) | Widely available via API | Visible chain-of-thought tokens | Cloud-dependent, token-costly |
| OpenMythos | ✅ Fully open-source, MIT | Silent iterative reasoning | ✅ Local, self-hosted, auditable |

How Mozilla Is Using Mythos (And What It Means for Us)

Here is the real-world proof point: Mozilla used Anthropic’s Mythos Preview to find and fix 271 vulnerabilities in Firefox 150.

According to Firefox CTO Bobby Holley, emerging AI capabilities like Mythos have “changed things dramatically” because they can now cover “the full space of vulnerability-inducing bugs”: categories that were previously only findable through expensive human analysis.

“Every piece of software is going to have to make this transition, because every piece of software has a lot of bugs buried underneath the surface that are now discoverable.”
— Bobby Holley, Firefox CTO

Mozilla CTO Raffi Krikorian added a crucial caveat in a New York Times op-ed: the economics of open-source software have not changed. The most critical infrastructure is still maintained by volunteers, while well-resourced organizations get early access to powerful new tools.

What This Means for Healthcare AI

As a physician and developer building privacy-first healthcare tools, this hits close to home.

The opportunity: Imagine an OpenMythos-style model running locally in your hospital’s infrastructure, silently analyzing:

  • Clinical notes for potential diagnostic oversights
  • Medication lists for dangerous interactions
  • EHR workflows for systemic vulnerabilities
  • Research protocols for ethical or methodological gaps

Because OpenMythos reasons in latent space, it could flag risks without generating verbose, PHI-exposing intermediate outputs.

The caution:

  • Never upload sensitive patient data to public AI APIs
  • Prefer local, open-source deployments where you control the weights and the data flow
  • Ensure auditability: even “silent” reasoning should be traceable for regulatory compliance (HIPAA, GDPR, etc.)
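
On the auditability point, here is a minimal sketch, entirely my own assumption rather than an OpenMythos feature: log how much each silent loop changes the hidden state, so reasoning depth and convergence behavior can be reviewed for compliance without exposing any intermediate text. The toy `refine` function stands in for the recurrent block.

```python
# Hypothetical audit trace for silent reasoning: record the per-loop
# change in the hidden state (a single number in this toy) so depth and
# convergence are reviewable, while no intermediate content is emitted.

def audited_loop(h, refine, max_loops=8):
    trace = []
    for t in range(max_loops):
        h_next = refine(h)
        trace.append({"loop": t, "delta": abs(h_next - h)})
        h = h_next
    return h, trace

h, trace = audited_loop(0.0, lambda h: 0.5 * h + 1.0)
# trace shows shrinking deltas as the state converges; an auditor can
# verify the loop behaved stably without seeing any "thoughts".
```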

The path forward: Projects like OpenMythos give us a blueprint for building sovereign AI: models that are powerful, transparent, and deployable where privacy matters most. That is not just a technical win; it is an ethical imperative.

Final Note: Why Open Reconstruction Matters

OpenMythos is more than a technical curiosity. It is a statement.

In an era where the most powerful AI models are locked behind corporate gates, open reconstruction projects remind us that understanding is a public good. They force transparency. They enable scrutiny. They let communities build tools that align with their values, whether that is privacy, local deployment, or regulatory compliance.

Kai Gomez did not just publish code. He published a hypothesis: “This is how we think Mythos works. Test it. Break it. Improve it.”

That is the open-source ethos at its best.

Will OpenMythos match Anthropic’s performance tomorrow? Probably not. But does it give researchers, developers, and privacy advocates a foundation to build the next generation of reasoning models—on their own terms?

Absolutely.

If you are building AI tools for healthcare, cybersecurity, or any domain where transparency and control matter, keep an eye on this project. Clone the repo. Read the math. Try the workflow.

And if you do, drop a comment below. I would love to hear what you build.

Disclaimer: OpenMythos is an independent research project and is not affiliated with, endorsed by, or connected to Anthropic or its Mythos Preview model.