
Runtime Lifecycle

Status: Implemented. The runtime architecture is implemented in packages/core/src/runtime.ts, packages/core/src/module-instance.ts, and packages/core/src/registry.ts. The CLI supports multi-module mode — multiple projects can share a single daemon. orgloop start detects a running daemon and registers additional modules via the control API. Daemon mode (--daemon) and supervised daemon mode (--daemon --supervised) are implemented.

Core Insight: Separate the Runtime from the Workload


OrgLoop separates two concerns:

  1. Runtime infrastructure — the event bus, scheduler, logger fanout, checkpoint store, HTTP listener
  2. Workloads — the sources, routes, transforms, and actors that do actual work

The runtime is long-lived infrastructure. Workloads are defined by a project's YAML configuration. The runtime owns the shared infrastructure; the project config defines what work flows through it.

| Concept | What it is | Lifetime |
| --- | --- | --- |
| Runtime | The OrgLoop process. Event bus, scheduler, logger fanout, HTTP control server. One per host. | Host uptime |
| Project (Module) | A directory with orgloop.yaml + package.json. Defines sources, routes, transforms, actors. Multiple projects can be loaded into one runtime. | Loaded dynamically |
```
+----------------------------------------------------------------+
| Runtime                                                        |
|                                                                |
|  +----------+  +-----------+  +------------+  +-------------+  |
|  | EventBus |  | Scheduler |  | Logger Mgr |  | HTTP Server |  |
|  |          |  |           |  |            |  |             |  |
|  |  shared  |  |  shared   |  |   shared   |  | control API |  |
|  +----------+  +-----------+  +------------+  +-------------+  |
|                                                                |
|  +----------------------------------------------------------+  |
|  | Module: "engineering-org"                                |  |
|  |   sources: github, linear, claude-code                   |  |
|  |   routes: github-pr-review, linear-to-eng, cc-supervisor |  |
|  |   actors: openclaw-engineering-agent                     |  |
|  +----------------------------------------------------------+  |
|                                                                |
|  +----------------------------------------------------------+  |
|  | Module: "ops-org"                                        |  |
|  |   sources: pagerduty                                     |  |
|  |   routes: oncall-to-responder                            |  |
|  |   actors: openclaw-ops-agent                             |  |
|  +----------------------------------------------------------+  |
+----------------------------------------------------------------+
```

When orgloop start runs for the first time (no daemon running):

  1. Read orgloop.yaml and all referenced YAML files
  2. Auto-discover routes from routes/ directory
  3. Resolve environment variables (${VAR} substitution)
  4. Dynamically import connector/transform/logger packages from node_modules/
  5. Create a Runtime instance and start shared infrastructure (bus, scheduler, HTTP server)
  6. Register a custom control handler (module/load-project) so additional modules can be added later
  7. Load the resolved project config via runtime.loadModule() (internal API)
  8. Register the module in ~/.orgloop/modules.json for cross-command state tracking
  9. Sources begin polling, routes are registered, actors are ready
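Step 3's ${VAR} substitution can be sketched as a small pure function. This is illustrative only: resolveEnvVars is a hypothetical name, and the actual resolver in packages/core may behave differently (for example, failing fast on unknown variables rather than leaving them in place).

```typescript
// Hypothetical sketch of ${VAR} resolution, not the actual core implementation.
// Unknown variables are left intact here; a real resolver might throw instead.
function resolveEnvVars(
  input: string,
  env: Record<string, string | undefined> = process.env,
): string {
  return input.replace(/\$\{([A-Z0-9_]+)\}/gi, (match, name: string) => {
    const value = env[name];
    // Fall back to the original "${NAME}" text when the variable is unset.
    return value !== undefined ? value : match;
  });
}
```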

When orgloop start runs and a daemon is already running:

  1. Detect the running daemon via PID file and process check
  2. Read orgloop.yaml and determine the module name and config path
  3. POST to http://127.0.0.1:<port>/control/module/load-project with { configPath, projectDir }
  4. The daemon’s custom handler resolves connectors, loads config, and calls runtime.loadModule()
  5. If a module with the same name is already loaded, it performs a hot-reload (unload + reload)
  6. Register the module in ~/.orgloop/modules.json
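The POST in step 3 can be sketched as follows. The endpoint path and the { configPath, projectDir } payload come from the steps above; the helper name and request shape are illustrative, not the CLI's actual code.

```typescript
interface LoadProjectRequest {
  url: string;
  body: { configPath: string; projectDir: string };
}

// Build the control-API request described above. buildLoadProjectRequest
// is a hypothetical helper, not part of the OrgLoop API.
function buildLoadProjectRequest(
  port: number,
  configPath: string,
  projectDir: string,
): LoadProjectRequest {
  return {
    url: `http://127.0.0.1:${port}/control/module/load-project`,
    body: { configPath, projectDir },
  };
}

// Sending it would then be a plain POST, e.g.:
// await fetch(req.url, {
//   method: "POST",
//   headers: { "content-type": "application/json" },
//   body: JSON.stringify(req.body),
// });
```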

Each project is loaded as a ModuleInstance — modules share the runtime’s infrastructure (bus, scheduler, HTTP server) but own their own sources, actors, routes, and transforms. Events are routed within each module’s scope independently.

```sh
orgloop start
```

Runs in the foreground. Ctrl+C sends SIGINT for graceful shutdown. Ideal for development and debugging — logs stream to the console.

```sh
orgloop start --daemon
```

Forks to background. PID written to ~/.orgloop/orgloop.pid. Stdout/stderr redirected to ~/.orgloop/logs/daemon.stdout.log and daemon.stderr.log. Use orgloop stop to shut down.

Before forking, orgloop start checks for an already-running instance via the PID file. If one is found, it registers the current project as an additional module in the running daemon via the control API instead of forking a new one.
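The liveness half of that check can be sketched with Node's signal-0 convention. This is a hedged illustration; the CLI's real daemon detection may differ.

```typescript
// Sketch of a PID liveness check (hypothetical helper, not the CLI's
// actual code). Signal 0 performs no action but throws if the process
// does not exist or cannot be signaled.
function isProcessAlive(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false;
  }
}
```

The full check would read the PID from ~/.orgloop/orgloop.pid first, then call this to weed out stale PID files left behind by crashes.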

Supervised Daemon (production, auto-restart)

```sh
orgloop start --daemon --supervised
```

Wraps the daemon in a Supervisor process that automatically restarts it on crash. Uses exponential backoff. Crash loop detection: if the process restarts more than 10 times within 5 minutes, the supervisor gives up.
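The restart policy can be sketched as two pure functions. The 10-restart / 5-minute crash-loop rule is from this spec; the 1 s base delay and 60 s cap for the exponential backoff are assumptions, not OrgLoop's actual values.

```typescript
const MAX_RESTARTS = 10;
const WINDOW_MS = 5 * 60 * 1000;

// Exponential backoff: the delay doubles per consecutive restart, capped.
// Base and cap values here are assumed for illustration.
function backoffDelayMs(restartCount: number, baseMs = 1_000, capMs = 60_000): number {
  return Math.min(baseMs * 2 ** restartCount, capMs);
}

// Give up when more than MAX_RESTARTS restarts fall inside the window.
function shouldGiveUp(restartTimestamps: number[], now: number): boolean {
  const recent = restartTimestamps.filter((t) => now - t <= WINDOW_MS);
  return recent.length > MAX_RESTARTS;
}
```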

The supervisor writes a heartbeat file (~/.orgloop/heartbeat) every 30 seconds with timestamp, PID, and uptime. This enables external monitoring tools to detect wedged processes.
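An external monitor could treat a stale heartbeat as a wedged process. The 30-second interval and the file's fields (timestamp, PID, uptime) come from this spec; the three-missed-intervals threshold is an assumption.

```typescript
interface Heartbeat {
  timestamp: number; // epoch ms of the last supervisor write
  pid: number;
  uptimeMs: number;
}

const HEARTBEAT_INTERVAL_MS = 30_000;

// Treat the process as wedged once several heartbeat intervals have been
// missed. The default of 3 missed intervals is an assumed threshold.
function isWedged(hb: Heartbeat, now: number, missedIntervals = 3): boolean {
  return now - hb.timestamp > missedIntervals * HEARTBEAT_INTERVAL_MS;
}
```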

| Signal | Behavior |
| --- | --- |
| SIGINT (Ctrl+C) | Graceful shutdown: flush loggers, save checkpoints, drain in-flight events, exit |
| SIGTERM | Same as SIGINT: graceful shutdown |
| uncaughtException | Log error, attempt graceful shutdown, exit with code 1 |
| unhandledRejection | Log error, attempt graceful shutdown, exit with code 1 |

Graceful shutdown sequence:

  1. Stop source polling (finish current poll cycle)
  2. Drain in-flight events (deliver or timeout)
  3. Flush log buffers
  4. Save checkpoints to disk
  5. Clean up PID and port files
  6. Exit
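The sequence above is strictly ordered: once polling stops, no new events enter, so each later step operates on a shrinking set of work. A sketch of that ordering, using hypothetical component interfaces rather than the runtime's actual types:

```typescript
// Hypothetical component interfaces; the real runtime's types differ.
interface ShutdownDeps {
  stopPolling(): Promise<void>;
  drainEvents(timeoutMs: number): Promise<void>;
  flushLogs(): Promise<void>;
  saveCheckpoints(): Promise<void>;
  removePidFiles(): Promise<void>;
}

// Run the shutdown steps strictly in order.
async function gracefulShutdown(deps: ShutdownDeps, drainTimeoutMs = 10_000): Promise<void> {
  await deps.stopPolling();               // 1. finish current poll cycle
  await deps.drainEvents(drainTimeoutMs); // 2. deliver or time out
  await deps.flushLogs();                 // 3. flush log buffers
  await deps.saveCheckpoints();           // 4. persist source positions
  await deps.removePidFiles();            // 5. clean up PID and port files
  // 6. the caller exits the process
}
```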

orgloop stop is module-aware. It determines which module the current directory owns:

  • Last module: Shuts down the daemon entirely via POST /control/shutdown (or SIGTERM fallback).
  • Multiple modules: Unloads only this module via POST /control/module/unload — the daemon continues serving other modules.
  • --all flag: Unconditionally shuts down the daemon (alias for orgloop shutdown).

orgloop shutdown unconditionally stops the daemon and all modules.
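The stop decision above reduces to a small decision table. This is a sketch; the CLI's real logic also handles fallbacks such as SIGTERM when the control API is unreachable.

```typescript
type StopAction = "shutdown-daemon" | "unload-module";

// Decide what `orgloop stop` should do, per the rules above.
// decideStop is a hypothetical helper, not the CLI's actual function.
function decideStop(loadedModules: string[], currentModule: string, all: boolean): StopAction {
  if (all) return "shutdown-daemon"; // --all: unconditional shutdown
  const others = loadedModules.filter((m) => m !== currentModule);
  // Last module standing: take the daemon down with it.
  return others.length === 0 ? "shutdown-daemon" : "unload-module";
}
```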

```
~/.orgloop/
├── orgloop.pid               # Runtime PID
├── runtime.port              # HTTP listener port
├── modules.json              # Registered module state (name, dir, config, loadedAt)
├── heartbeat                 # Supervisor health heartbeat
├── state.json                # Runtime state snapshot (sources, routes, actors)
├── logs/
│   ├── orgloop.log           # Application logs
│   ├── daemon.stdout.log     # Daemon stdout
│   └── daemon.stderr.log     # Daemon stderr
└── data/
    ├── checkpoints/          # Per-source checkpoint files (default: <modulePath>/.orgloop/checkpoints/)
    ├── wal/                  # Write-ahead log (event durability)
    └── queue/                # Queued events (degraded actors)
```
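A plausible shape for modules.json, inferred from the fields listed above (name, dir, config, loadedAt); the actual schema may differ:

```json
{
  "modules": [
    {
      "name": "engineering-org",
      "dir": "/home/user/engineering-org",
      "config": "/home/user/engineering-org/orgloop.yaml",
      "loadedAt": "2025-01-15T09:30:00.000Z"
    }
  ]
}
```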

Shared resources owned by the runtime:

  • Event bus — events flow through the bus to the router
  • Scheduler — manages poll intervals for all sources
  • Logger fanout — distributes log entries to all configured loggers
  • HTTP server — control API + webhook listener (localhost, default port 4800)
  • WAL — write-ahead log for event durability

Per-project resources:

  • Checkpoints — each source tracks its own position independently
  • Queue — degraded actors store events locally until available
  • State — project metadata snapshot
```sh
# Runtime lifecycle
orgloop start                        # Start in foreground (development)
orgloop start --daemon               # Start as background daemon (or register into running daemon)
orgloop start --daemon --supervised  # Start as supervised daemon (auto-restart)
orgloop start --force                # Skip doctor pre-flight checks
orgloop stop                         # Stop this module (or daemon if last module)
orgloop stop --all                   # Stop daemon and all modules
orgloop shutdown                     # Unconditionally stop daemon and all modules
orgloop status                       # Runtime health + all modules + source/route/actor summary
```

Pre-flight checks. Before starting, orgloop start runs orgloop doctor checks. If critical errors are found, startup is blocked (use --force to bypass). If the environment is degraded (e.g., missing optional credentials), a warning is shown and startup proceeds.

```
Source.poll() --> EventBus --> matchRoutes() --> Transform pipeline --> Actor.deliver()
                     ^                                                        |
                     |                                                        |
                     +----------------- actor.stopped (loops back) -----------+
```

Events carry their source origin. The router matches events against all routes in the project. Multi-route matching is supported — one event can trigger multiple routes. Transform pipelines run sequentially per route.
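Multi-route matching means the router returns every matching route, not the first hit. A minimal sketch, assuming simplified event and route shapes (real OrgLoop route configs carry transforms, actor targets, and richer filters):

```typescript
interface Event {
  source: string; // events carry their source origin
  type: string;
}

interface Route {
  name: string;
  matchSource: string;
  matchType?: string; // undefined = match any event type from that source
}

// One event can trigger multiple routes: filter, don't find.
function matchRoutes(event: Event, routes: Route[]): Route[] {
  return routes.filter(
    (r) =>
      r.matchSource === event.source &&
      (r.matchType === undefined || r.matchType === event.type),
  );
}
```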

The runtime architecture is designed with a networked future in mind, but explicitly defers building it.

The BEAM analogy. In Erlang/OTP, the VM hosts many applications. Each application is a supervision tree of processes. The VM can join a cluster — processes become location-transparent, addressable by name regardless of which node hosts them. The runtime handles routing; the applications don’t know or care.

How this maps to OrgLoop:

| BEAM concept | OrgLoop equivalent |
| --- | --- |
| VM (node) | Runtime |
| Application | Project workload |
| Process | Source / Route / Actor |
| Distributed Erlang | Networked runtime (future) |
| Process registry | Internal registry (ModuleRegistry) |

What we design for now:

  • Project names are globally meaningful (not just host-local)
  • Events carry source origin metadata
  • The internal registry interface doesn’t assume locality (could back onto a distributed store)
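The locality point can be made concrete: if the registry interface is fully async, callers can never assume a local, synchronous lookup, so the same interface could later back onto a distributed store. The names below are illustrative, not the actual ModuleRegistry API.

```typescript
interface ModuleRecord {
  name: string; // globally meaningful, not just host-local
  host: string; // "local" today; a node identifier in a networked future
}

// Async methods keep the contract location-agnostic.
interface RegistryLike {
  register(record: ModuleRecord): Promise<void>;
  lookup(name: string): Promise<ModuleRecord | undefined>;
}

// In-memory implementation, sufficient for a single-host runtime.
class InMemoryRegistry implements RegistryLike {
  private records = new Map<string, ModuleRecord>();

  async register(record: ModuleRecord): Promise<void> {
    this.records.set(record.name, record);
  }

  async lookup(name: string): Promise<ModuleRecord | undefined> {
    return this.records.get(name);
  }
}
```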

What we explicitly defer:

  • Multi-host runtime clustering
  • Cross-host workload placement / scheduling
  • Distributed event bus (Tier 2/3 from Scale Design)
  • Workload migration (moving a running project between hosts)
  • Consensus / split-brain handling

Multi-project runtime (implemented). The CLI now supports loading multiple projects into a single runtime. Running orgloop start from different project directories registers each project as a separate module in the shared daemon. Each module has independent sources, routes, transforms, and actors, but shares the runtime infrastructure (bus, scheduler, HTTP server). Module state is tracked in ~/.orgloop/modules.json. orgloop stop is module-aware — it unloads only the current directory’s module unless it’s the last one.

When a project’s config changes, the runtime could reload it without stopping. The sequence:

  1. Load new config alongside old
  2. Diff: which sources/routes/actors changed?
  3. Remove old routes, add new routes
  4. For changed sources: flush checkpoint, reinit with new config
  5. For unchanged sources: keep polling (no gap)

This is deferred. Currently, config changes require orgloop stop + orgloop start.
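Step 2 of the deferred sequence (diffing configs) could be sketched as follows. Items are compared by name, with a changed definition counted as "changed"; the shapes here are stand-ins for the real parsed config types.

```typescript
interface NamedItem {
  name: string;
  definition: string; // stand-in for the full YAML definition
}

interface ConfigDiff {
  added: string[];
  removed: string[];
  changed: string[];
}

// Diff two lists of sources/routes/actors by name.
function diffItems(oldItems: NamedItem[], newItems: NamedItem[]): ConfigDiff {
  const oldMap = new Map(oldItems.map((i) => [i.name, i.definition]));
  const newMap = new Map(newItems.map((i) => [i.name, i.definition]));
  return {
    added: [...newMap.keys()].filter((n) => !oldMap.has(n)),
    removed: [...oldMap.keys()].filter((n) => !newMap.has(n)),
    changed: [...newMap.keys()].filter(
      (n) => oldMap.has(n) && oldMap.get(n) !== newMap.get(n),
    ),
  };
}
```

Unchanged items fall into none of the three buckets, which is what lets their sources keep polling with no gap.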

| Spec section | How this relates |
| --- | --- |
| Project Model | The project model defines the config structure. This spec defines how that config is loaded and managed at runtime. |
| Runtime Modes | CLI/library/server modes are the interface to the runtime. This spec defines the runtime's internal architecture. |
| Scale Design | Tier 1/2/3 scaling applies to the event bus and delivery fleet within the runtime. This spec is orthogonal: it covers runtime lifecycle, not event throughput. |
| Scope Boundaries | OrgLoop still doesn't install software or broker credentials. The runtime is still just the routing layer, now with explicit lifecycle management. |