Runtime Lifecycle
Status: Implemented. The runtime architecture is implemented in
packages/core/src/runtime.ts,packages/core/src/module-instance.ts, andpackages/core/src/registry.ts. The CLI supports multi-module mode — multiple projects can share a single daemon.orgloop startdetects a running daemon and registers additional modules via the control API. Daemon mode (--daemon) and supervised daemon mode (--daemon --supervised) are implemented.
Core Insight: Separate the Runtime from the Workload
OrgLoop separates two concerns:
- Runtime infrastructure — the event bus, scheduler, logger fanout, checkpoint store, HTTP listener
- Workloads — the sources, routes, transforms, and actors that do actual work
The runtime is long-lived infrastructure. Workloads are the project’s configuration — sources, routes, transforms, and actors defined in YAML. The runtime owns the shared infrastructure; the project config defines what work flows through it.
Runtime Architecture
| Concept | What it is | Lifetime |
|---|---|---|
| Runtime | The OrgLoop process. Event bus, scheduler, logger fanout, HTTP control server. One per host. | Host uptime |
| Project (Module) | A directory with orgloop.yaml + package.json. Defines sources, routes, transforms, actors. Multiple projects can be loaded into one runtime. | Loaded dynamically |
```
+-----------------------------------------------------------------+
| Runtime                                                         |
|                                                                 |
|  +----------+ +----------+ +------------+ +--------------+      |
|  | EventBus | |Scheduler | |Logger Mgr  | | HTTP Server  |      |
|  |          | |          | |            | |              |      |
|  |  shared  | |  shared  | |   shared   | | control API  |      |
|  +----------+ +----------+ +------------+ +--------------+      |
|                                                                 |
|  +----------------------------------------------------------+   |
|  | Module: "engineering-org"                                |   |
|  |   sources: github, linear, claude-code                   |   |
|  |   routes: github-pr-review, linear-to-eng, cc-supervisor |   |
|  |   actors: openclaw-engineering-agent                     |   |
|  +----------------------------------------------------------+   |
|                                                                 |
|  +----------------------------------------------------------+   |
|  | Module: "ops-org"                                        |   |
|  |   sources: pagerduty                                     |   |
|  |   routes: oncall-to-responder                            |   |
|  |   actors: openclaw-ops-agent                             |   |
|  +----------------------------------------------------------+   |
+-----------------------------------------------------------------+
```

Project Loading
When orgloop start runs for the first time (no daemon running):
- Read orgloop.yaml and all referenced YAML files
- Auto-discover routes from the routes/ directory
- Resolve environment variables (${VAR} substitution)
- Dynamically import connector/transform/logger packages from node_modules/
- Create a Runtime instance and start shared infrastructure (bus, scheduler, HTTP server)
- Register a custom control handler (module/load-project) so additional modules can be added later
- Load the resolved project config via runtime.loadModule() (internal API)
- Register the module in ~/.orgloop/modules.json for cross-command state tracking
- Sources begin polling, routes are registered, actors are ready
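Step three of this sequence, ${VAR} substitution, could look roughly like the following sketch. The resolveEnv helper and its fail-fast error behavior are illustrative assumptions, not OrgLoop's actual implementation:

```typescript
// Hypothetical sketch of ${VAR} substitution during config resolution.
// The helper name and error behavior are assumptions for illustration.
export function resolveEnv(
  value: string,
  env: Record<string, string | undefined>,
): string {
  return value.replace(/\$\{([A-Za-z_][A-Za-z0-9_]*)\}/g, (_, name: string) => {
    const resolved = env[name];
    if (resolved === undefined) {
      // Fail fast so a missing credential surfaces at startup, not mid-poll.
      throw new Error(`Unresolved environment variable: ${name}`);
    }
    return resolved;
  });
}
```

A caller would typically pass process.env and apply this to every string value in the resolved config.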
When orgloop start runs and a daemon is already running:
- Detect the running daemon via PID file and process check
- Read orgloop.yaml and determine the module name and config path
- POST to http://127.0.0.1:&lt;port&gt;/control/module/load-project with { configPath, projectDir }
- The daemon’s custom handler resolves connectors, loads config, and calls runtime.loadModule()
- If a module with the same name is already loaded, it performs a hot-reload (unload + reload)
- Register the module in ~/.orgloop/modules.json
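The POST step above can be sketched as a small request builder. The endpoint path and body fields come from the list above; the helper name and types are hypothetical:

```typescript
// Hypothetical sketch of constructing the control-API request that registers
// a project into an already-running daemon. Only the endpoint path and body
// fields come from the spec; the builder itself is illustrative.
interface LoadProjectRequest {
  url: string;
  body: { configPath: string; projectDir: string };
}

export function buildLoadProjectRequest(
  port: number,
  configPath: string,
  projectDir: string,
): LoadProjectRequest {
  return {
    url: `http://127.0.0.1:${port}/control/module/load-project`,
    body: { configPath, projectDir },
  };
}
```

A caller would then POST JSON.stringify(req.body) to req.url and let the daemon's handler do the actual loading.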
Each project is loaded as a ModuleInstance — modules share the runtime’s infrastructure (bus, scheduler, HTTP server) but own their own sources, actors, routes, and transforms. Events are routed within each module’s scope independently.
Runtime Modes
Foreground (development)
```
orgloop start
```

Runs in the foreground. Ctrl+C sends SIGINT for graceful shutdown. Ideal for development and debugging — logs stream to the console.
Daemon (production)
```
orgloop start --daemon
```

Forks to background. PID written to ~/.orgloop/orgloop.pid. Stdout/stderr redirected to ~/.orgloop/logs/daemon.stdout.log and daemon.stderr.log. Use orgloop stop to shut down.
Before forking, the daemon checks for an already-running instance via the PID file. If one is found, it registers the current project as an additional module into the running daemon via the control API.
Supervised Daemon (production, auto-restart)
```
orgloop start --daemon --supervised
```

Wraps the daemon in a Supervisor process that automatically restarts it on crash. Uses exponential backoff. Crash loop detection: if the process restarts more than 10 times within 5 minutes, the supervisor gives up.
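The restart policy described here can be sketched as follows. Only the limit (10 restarts) and window (5 minutes) come from the text; the class name, base delay, and cap are assumptions:

```typescript
// Illustrative sketch of the supervisor restart policy: exponential backoff,
// giving up after 10 restarts within a 5-minute window. Not the actual
// OrgLoop supervisor; base delay (1s) and cap (60s) are assumptions.
const MAX_RESTARTS = 10;
const WINDOW_MS = 5 * 60 * 1000;

export class RestartPolicy {
  private restarts: number[] = []; // timestamps of recent restarts

  /** Returns the backoff delay in ms, or null once the crash-loop limit is hit. */
  nextDelay(now: number): number | null {
    // Keep only restarts inside the rolling window.
    this.restarts = this.restarts.filter((t) => now - t < WINDOW_MS);
    if (this.restarts.length >= MAX_RESTARTS) return null; // give up
    this.restarts.push(now);
    // Exponential backoff: 1s, 2s, 4s, ... capped at 60s.
    return Math.min(1000 * 2 ** (this.restarts.length - 1), 60_000);
  }
}
```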
The supervisor writes a heartbeat file (~/.orgloop/heartbeat) every 30 seconds with timestamp, PID, and uptime. This enables external monitoring tools to detect wedged processes.
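A minimal sketch of the heartbeat payload, assuming a JSON shape. The spec names only timestamp, PID, and uptime; the field names and helper here are illustrative:

```typescript
// Hypothetical heartbeat payload the supervisor could write every 30s.
// Field names are assumptions; the spec mentions timestamp, PID, and uptime.
export interface Heartbeat {
  timestamp: string; // ISO-8601 write time
  pid: number;
  uptimeSeconds: number;
}

export function makeHeartbeat(
  pid: number,
  startedAtMs: number,
  nowMs: number,
): Heartbeat {
  return {
    timestamp: new Date(nowMs).toISOString(),
    pid,
    uptimeSeconds: Math.floor((nowMs - startedAtMs) / 1000),
  };
}
```

An external monitor can then flag the process as wedged if the file's timestamp falls too far behind the current time.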
Signal Handling
| Signal | Behavior |
|---|---|
| SIGINT (Ctrl+C) | Graceful shutdown: flush loggers, save checkpoints, drain in-flight events, exit |
| SIGTERM | Same as SIGINT — graceful shutdown |
| uncaughtException | Log error, attempt graceful shutdown, exit with code 1 |
| unhandledRejection | Log error, attempt graceful shutdown, exit with code 1 |
Graceful shutdown sequence:
- Stop source polling (finish current poll cycle)
- Drain in-flight events (deliver or timeout)
- Flush log buffers
- Save checkpoints to disk
- Clean up PID and port files
- Exit
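The sequence above can be sketched as an ordered async pipeline. The ShutdownSteps interface is hypothetical; only the ordering comes from the list:

```typescript
// Sketch of the graceful shutdown sequence as an ordered pipeline.
// The step names mirror the list above; the interface is illustrative.
interface ShutdownSteps {
  stopPolling(): Promise<void>;     // finish current poll cycle
  drainEvents(): Promise<void>;     // deliver or timeout in-flight events
  flushLogs(): Promise<void>;
  saveCheckpoints(): Promise<void>;
  cleanupFiles(): Promise<void>;    // PID and port files
}

export async function gracefulShutdown(steps: ShutdownSteps): Promise<void> {
  // Order matters: stop intake first, then drain, then persist state.
  await steps.stopPolling();
  await steps.drainEvents();
  await steps.flushLogs();
  await steps.saveCheckpoints();
  await steps.cleanupFiles();
}
```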
Shutdown via Control API
orgloop stop is module-aware. It determines which module the current directory owns:
- Last module: Shuts down the daemon entirely via POST /control/shutdown (or SIGTERM fallback).
- Multiple modules: Unloads only this module via POST /control/module/unload — the daemon continues serving other modules.
- --all flag: Unconditionally shuts down the daemon (alias for orgloop shutdown).
orgloop shutdown unconditionally stops the daemon and all modules.
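The module-aware decision can be sketched as follows. The function and types are illustrative, not the real CLI's internals; the rules themselves come from the list above:

```typescript
// Hypothetical decision logic behind `orgloop stop`, mirroring the rules
// described above. Inputs and return values are illustrative.
type StopAction = "shutdown-daemon" | "unload-module";

export function decideStopAction(
  loadedModules: string[],
  currentModule: string,
  allFlag: boolean,
): StopAction {
  if (allFlag) return "shutdown-daemon"; // --all: stop everything
  const others = loadedModules.filter((m) => m !== currentModule);
  // Last module: unloading it would leave an idle daemon, so stop the daemon.
  return others.length === 0 ? "shutdown-daemon" : "unload-module";
}
```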
State Management
```
~/.orgloop/
├── orgloop.pid              # Runtime PID
├── runtime.port             # HTTP listener port
├── modules.json             # Registered module state (name, dir, config, loadedAt)
├── heartbeat                # Supervisor health heartbeat
├── state.json               # Runtime state snapshot (sources, routes, actors)
├── logs/
│   ├── orgloop.log          # Application logs
│   ├── daemon.stdout.log    # Daemon stdout
│   └── daemon.stderr.log    # Daemon stderr
└── data/
    ├── checkpoints/         # Per-source checkpoint files (default: <modulePath>/.orgloop/checkpoints/)
    ├── wal/                 # Write-ahead log (event durability)
    └── queue/               # Queued events (degraded actors)
```

Shared resources owned by the runtime:
- Event bus — events flow through the bus to the router
- Scheduler — manages poll intervals for all sources
- Logger fanout — distributes log entries to all configured loggers
- HTTP server — control API + webhook listener (localhost, default port 4800)
- WAL — write-ahead log for event durability
Per-project resources:
- Checkpoints — each source tracks its own position independently
- Queue — degraded actors store events locally until available
- State — project metadata snapshot
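The per-source checkpoint idea can be illustrated with an in-memory model. The real store persists files under the checkpoints directory; this class is purely a sketch:

```typescript
// In-memory illustration of per-source checkpoints: each source advances its
// own cursor independently. The real store writes per-source files to disk.
export class CheckpointStore {
  private cursors = new Map<string, string>();

  save(sourceId: string, cursor: string): void {
    this.cursors.set(sourceId, cursor);
  }

  /** Returns the last saved cursor, or undefined for a fresh source. */
  load(sourceId: string): string | undefined {
    return this.cursors.get(sourceId);
  }
}
```

Because positions are tracked per source, one source falling behind or being reconfigured never disturbs another source's progress.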
CLI Surface
```
# Runtime lifecycle
orgloop start                         # Start in foreground (development)
orgloop start --daemon                # Start as background daemon (or register into running daemon)
orgloop start --daemon --supervised   # Start as supervised daemon (auto-restart)
orgloop start --force                 # Skip doctor pre-flight checks
orgloop stop                          # Stop this module (or daemon if last module)
orgloop stop --all                    # Stop daemon and all modules
orgloop shutdown                      # Unconditionally stop daemon and all modules
orgloop status                        # Runtime health + all modules + source/route/actor summary
```

Pre-flight checks. Before starting, orgloop start runs orgloop doctor checks. If critical errors are found, startup is blocked (use --force to bypass). If the environment is degraded (e.g., missing optional credentials), a warning is shown and startup proceeds.
Event Flow
```
Source.poll() --> EventBus --> matchRoutes() --> Transform pipeline --> Actor.deliver()
                                                                            |
                                               actor.stopped --> EventBus (loops back)
```

Events carry their source origin. The router matches events against all routes in the project. Multi-route matching is supported — one event can trigger multiple routes. Transform pipelines run sequentially per route.
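A sketch of multi-route matching with sequential per-route transforms, using hypothetical Event and Route shapes (only the matching semantics come from the text):

```typescript
// Sketch of the router stage: one event may match multiple routes, and each
// matching route runs its transform pipeline sequentially. Shapes are
// assumptions for illustration, not OrgLoop's actual types.
interface Event { source: string; type: string; payload: unknown }
type Transform = (e: Event) => Event;

interface Route {
  name: string;
  match: (e: Event) => boolean;
  transforms: Transform[];
}

export function matchRoutes(event: Event, routes: Route[]): Map<string, Event> {
  const delivered = new Map<string, Event>();
  for (const route of routes) {
    if (!route.match(event)) continue;
    // Transforms run in order; each sees the previous transform's output.
    let current = event;
    for (const t of route.transforms) current = t(current);
    delivered.set(route.name, current);
  }
  return delivered;
}
```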
Networking: Future Design Space
The runtime architecture is designed with a networked future in mind, but explicitly defers building it.
The BEAM analogy. In Erlang/OTP, the VM hosts many applications. Each application is a supervision tree of processes. The VM can join a cluster — processes become location-transparent, addressable by name regardless of which node hosts them. The runtime handles routing; the applications don’t know or care.
How this maps to OrgLoop:
| BEAM concept | OrgLoop equivalent |
|---|---|
| VM (node) | Runtime |
| Application | Project workload |
| Process | Source / Route / Actor |
| Distributed Erlang | Networked runtime (future) |
| Process registry | Internal registry (ModuleRegistry) |
What we design for now:
- Project names are globally meaningful (not just host-local)
- Events carry source origin metadata
- The internal registry interface doesn’t assume locality (could back onto a distributed store)
What we explicitly defer:
- Multi-host runtime clustering
- Cross-host workload placement / scheduling
- Distributed event bus (Tier 2/3 from Scale Design)
- Workload migration (moving a running project between hosts)
- Consensus / split-brain handling
Multi-project runtime (implemented). The CLI now supports loading multiple projects into a single runtime. Running orgloop start from different project directories registers each project as a separate module in the shared daemon. Each module has independent sources, routes, transforms, and actors, but shares the runtime infrastructure (bus, scheduler, HTTP server). Module state is tracked in ~/.orgloop/modules.json. orgloop stop is module-aware — it unloads only the current directory’s module unless it’s the last one.
Hot Reload (Future)
When a project’s config changes, the runtime could reload it without stopping. The sequence:
- Load new config alongside old
- Diff: which sources/routes/actors changed?
- Remove old routes, add new routes
- For changed sources: flush checkpoint, reinit with new config
- For unchanged sources: keep polling (no gap)
This is deferred. Currently, config changes require orgloop stop + orgloop start.
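The diff step (step 2) could be sketched as a comparison over source config maps. SourceDiff and diffSources are illustrative names for a mechanism the spec only outlines:

```typescript
// Sketch of the hot-reload diff: classify each source by comparing old and
// new config maps. Names and shapes are assumptions; only the categories
// (changed -> reinit, unchanged -> keep polling) come from the text.
export interface SourceDiff {
  added: string[];
  removed: string[];
  changed: string[];   // reinit with new config (flush checkpoint)
  unchanged: string[]; // keep polling, no gap
}

export function diffSources(
  oldCfg: Record<string, unknown>,
  newCfg: Record<string, unknown>,
): SourceDiff {
  const diff: SourceDiff = { added: [], removed: [], changed: [], unchanged: [] };
  for (const name of Object.keys(newCfg)) {
    if (!(name in oldCfg)) diff.added.push(name);
    else if (JSON.stringify(oldCfg[name]) !== JSON.stringify(newCfg[name]))
      diff.changed.push(name);
    else diff.unchanged.push(name);
  }
  for (const name of Object.keys(oldCfg)) {
    if (!(name in newCfg)) diff.removed.push(name);
  }
  return diff;
}
```

Routes could be diffed the same way; the interesting property is that unchanged sources never stop polling during the reload.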
Relationship to Existing Spec
| Spec section | How this relates |
|---|---|
| Project Model | The project model defines the config structure. This spec defines how that config is loaded and managed at runtime. |
| Runtime Modes | CLI/library/server modes are the interface to the runtime. This spec defines the runtime’s internal architecture. |
| Scale Design | Tier 1/2/3 scaling applies to the event bus and delivery fleet within the runtime. This spec is orthogonal — it’s about runtime lifecycle, not event throughput. |
| Scope Boundaries | OrgLoop still doesn’t install software or broker credentials. The runtime is still just the routing layer — now with explicit lifecycle management. |