Overview

Most AI products optimize the turn — the quality of the single response the model produces when you hit send. AsteronIris optimizes what persists between turns.

That sentence is the entire thesis. Everything downstream — the memory backends, the affect topology, the shared turn pipeline, the persona layer, the refusal to use approval-gated planners as the product centerpiece — follows from it.

The runtime is built around a loop, not a request–response exchange:

conversation → context captured → memory consolidated → distance calibrated
→ enters again when it fits → relationship accrues
→ widens into creative / reflective support

Every design decision is graded against that loop. If something accelerates a single turn but erodes the continuity underneath, it loses.
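The loop above can be sketched in a few lines. This is a hypothetical illustration, not the AsteronIris implementation; every name here (`CompanionState`, `run_turn`, `memories`, `rapport`) is invented for the example. The point it makes is structural: the reply is computed *from* persisted state, and each turn's lasting output is the state it leaves behind, not the reply itself.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the continuity loop; none of these names come
# from the AsteronIris codebase.

@dataclass
class CompanionState:
    memories: list[str] = field(default_factory=list)  # memory consolidated
    rapport: int = 0                                   # relationship accrues

def run_turn(state: CompanionState, message: str) -> str:
    # context captured: the incoming message joins whatever persisted
    # from earlier turns
    context = state.memories[-3:] + [message]
    reply = f"(reply drawing on {len(context)} context items)"
    # memory consolidated: the turn leaves a residue for future turns
    state.memories.append(message)
    # relationship accrues: the state, not the single reply, is what the
    # loop optimizes
    state.rapport += 1
    return reply

state = CompanionState()
run_turn(state, "hi")
run_turn(state, "remember I like astronomy")
# after two turns, continuity has accumulated even though each reply
# was ephemeral
assert state.rapport == 2
assert len(state.memories) == 2
```

Note what a turn-optimizing design would do instead: compute `reply` from `message` alone and discard everything afterward. The difference between the two shapes is the thesis of this page.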

AsteronIris is Discord-first and text-first. The companion runtime, gateway, memory, persona, and shared enrichment path are the stable center. Discord text is the product-proven channel; other channel adapters are secondary and should be treated as alpha unless the README says otherwise. The desktop app is an operator console for governance, diagnostics, and memory review, not the primary place users meet the companion.

  • Companion runtime — what the word “companion” means here, and how it differs from “chatbot”, “assistant”, and “agent”
  • Continuity over conversation — why memory / persona / relationship are the product, and the conversation is only their surface
  • What AsteronIris is not — the explicit non-goals that keep the runtime honest
  • Turn pipeline — the shared companion-turn contract that Discord, CLI, gateway, and operator surfaces converge on when they execute a turn
  • Layered dependencies — how src/ is organized so continuity stays decoupled from transports
  • Getting started — enough to run the daemon locally and point Discord at it

Full reference for every CLI subcommand, gateway route, and config key lives in the repository README. This site is the why; the README is the how.