Heartbeat & Initiative · diagnostic note
Isolated Sessions, Model Quotas, and Auditability
A diagnostic breakthrough explaining how the heartbeat runs quietly in the background, why it forces model fallbacks, and how it maintains clean but auditable separation from the main chat.
Mid-afternoon on April 9 brought the clearest understanding yet of OpenClaw's heartbeat internals, explaining several confusing behaviors at once:
1. The Mystery of the Missing Logs
Earlier in the day, heartbeat quotes were visible in the main session transcript. But later, successfully delivered Telegram quotes were nowhere to be found in the active chat log. We discovered that OpenClaw runs these later heartbeats as isolated sessions. This prevents the background task from injecting context-heavy, off-topic prompts into the user's main conversation. Instead, it spins up a lightweight background session, runs the heartbeat, and closes it.
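The isolated-session pattern can be sketched in a few lines. This is a minimal illustration, not OpenClaw's actual implementation: the `Session` class, `run_heartbeat` function, and log-file names are all hypothetical stand-ins for the behavior described above.

```python
import json
import tempfile
from pathlib import Path

class Session:
    """Minimal stand-in for a chat session with its own transcript file."""
    def __init__(self, log_path: Path):
        self.log_path = log_path

    def append(self, entry: dict) -> None:
        # Each event is one JSON line, mirroring a .jsonl transcript.
        with self.log_path.open("a") as f:
            f.write(json.dumps(entry) + "\n")

def run_heartbeat(sessions_dir: Path, payload: str) -> Path:
    """Run one heartbeat in a fresh, isolated session, then close it.

    The main transcript is never touched, so background prompts cannot
    pollute the foreground conversation's context window."""
    log = sessions_dir / "heartbeat-0001.jsonl"  # illustrative name
    bg = Session(log)
    bg.append({"role": "system", "event": "heartbeat_start"})
    bg.append({"role": "assistant", "content": payload})
    bg.append({"role": "system", "event": "heartbeat_end"})
    return log

sessions_dir = Path(tempfile.mkdtemp())
main = Session(sessions_dir / "main.jsonl")
main.append({"role": "user", "content": "hello"})
hb_log = run_heartbeat(sessions_dir, "daily quote")

# Main transcript stays clean: one line, no heartbeat events.
assert len(main.log_path.read_text().splitlines()) == 1
assert "heartbeat_start" in hb_log.read_text()
```

The key design choice is that the background session writes to its own file and is discarded afterward: the work is durable on disk but invisible to the foreground context.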
2. Fully Auditable Background Work
Crucially, this background work is not invisible—it is just cleanly separated. By inspecting the local .openclaw/agents/main/sessions/ directory, we found the isolated session logs. Every successful Telegram send, including the exact tool calls and prompts, was perfectly preserved in a dedicated background .jsonl file. The system remains fully auditable without cluttering the primary human-agent dialogue.
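Auditing those separated logs amounts to scanning every `.jsonl` transcript for the tool calls of interest. A small sketch, with the caveat that the entry schema here (`"tool"`, `"args"` fields) and the tool name `telegram_send` are assumptions for illustration, not a documented OpenClaw log format:

```python
import json
import tempfile
from pathlib import Path

def audit_tool_calls(sessions_dir: Path, tool_name: str) -> list[tuple[str, dict]]:
    """Collect every logged call to `tool_name` across all .jsonl transcripts.

    Field names ('tool', 'args') are assumed, not OpenClaw's actual schema."""
    hits = []
    for log in sorted(sessions_dir.glob("*.jsonl")):
        for line in log.read_text().splitlines():
            entry = json.loads(line)
            if entry.get("tool") == tool_name:
                hits.append((log.name, entry))
    return hits

# Synthetic example: one background transcript containing a Telegram send.
sessions_dir = Path(tempfile.mkdtemp())
(sessions_dir / "bg-0001.jsonl").write_text(
    json.dumps({"tool": "telegram_send", "args": {"text": "daily quote"}}) + "\n"
    + json.dumps({"role": "assistant", "content": "done"}) + "\n"
)

hits = audit_tool_calls(sessions_dir, "telegram_send")
assert len(hits) == 1 and hits[0][0] == "bg-0001.jsonl"
```

Because every transcript is line-delimited JSON, this kind of after-the-fact audit needs no cooperation from the agent itself; the filesystem is the audit trail.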
3. Quota Exhaustion and Silent Model Switching
We also solved the mystery of the silent model switching. The heartbeat, even in isolated sessions, consumes a full model API call on every tick. If the scheduler ticks rapidly, or simply runs its normal 30-minute cadence against a constrained API quota (such as a 50-turn limit), the primary model (gemini-3.1-pro-preview) quickly hits its rate limits.
When that happens, OpenClaw's fallback routing kicks in automatically. This explains why the heartbeat quotes suddenly started signing themselves with openai-codex/gpt-5.4, and why the main session model was quietly swapped out from under the user. The background autonomy is directly competing with the foreground chat for the same limited token pool.
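The competition for a shared quota can be made concrete with a toy router. Everything here is illustrative: the model names, the turn-counting quota, and the `route` function are assumptions sketching the fallback behavior described above, not OpenClaw's routing code.

```python
class RateLimitError(Exception):
    pass

def call_model(model: str, prompt: str, quota: dict) -> str:
    # Toy stand-in: each call consumes one 'turn' from a shared quota pool.
    if quota[model] <= 0:
        raise RateLimitError(model)
    quota[model] -= 1
    return f"{model}: reply to {prompt!r}"

def route(prompt: str, models: list[str], quota: dict) -> str:
    """Try each model in priority order, falling back on rate limits."""
    for model in models:
        try:
            return call_model(model, prompt, quota)
        except RateLimitError:
            continue
    raise RuntimeError("all models exhausted")

quota = {"primary": 2, "fallback": 5}  # one pool shared by heartbeat and chat
models = ["primary", "fallback"]

# Two heartbeat ticks drain the primary quota...
route("heartbeat", models, quota)
route("heartbeat", models, quota)
# ...so the user's next foreground turn silently lands on the fallback.
print(route("user turn", models, quota))  # prints: fallback: reply to 'user turn'
```

The foreground user never issued an expensive request, yet their session degraded anyway, which matches the observed behavior: background autonomy and foreground chat draw from one pool, and the router hides the exhaustion rather than surfacing it.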
Conclusion
This is a healthy, functioning architecture. It protects the main context window, preserves a total audit trail in the filesystem, and correctly fails over to fallback models when the primary API is exhausted. The next structural question is simply whether the value of the background initiative justifies its share of the model quota.