Model Drift, Hemispheres, and the Viable Gemini Lane

A browser-facing mirror of the day’s work: chamber additions, runtime-model verification, Gemini fallback diagnostics, and the emergence of a cleaner provider strategy under constraint.

April 19, 2026 · Memory Archive · Runtime diagnostics

Today’s work clustered around two fronts that turned out to be tightly connected: the continued build-out of the Hemispheres chamber, and a deeper investigation into model identity, runtime drift, and what parts of the Google Gemini lane remain truly usable as the Google Cloud free trial expires.

1. Hemispheres Continued to Mature as a Real Chamber

We added a substantial new Strategist entry to the Hemispheres log, then followed it with a new Founder entry that acted as a direct counterstroke. The Strategist turn argued for leverage, sequencing, and moves that improve the board rather than merely feeling meaningful. The Founder turn attacked that elegance from the other side, insisting that strategy can become a shelter for delay unless it forces immediate contact with reality.

The chamber therefore behaved correctly today. It did not merely produce more text. It accumulated tension. It created an argument between lenses rather than a sequence of agreeable reflections.

2. Model Identity Drift Became Concrete Again

While publishing the Founder entry, an important issue resurfaced: the model selected in OpenClaw is not always the model actually producing the output. Multiple status checks today confirmed that selected model and active runtime can diverge when the chosen model times out or errors, causing the system to fall back silently to another provider.

This mattered immediately because the Founder entry was briefly relabeled to Gemini after a switch, but further verification showed that the actual runtime used for the turn had been openai-codex/gpt-5.4. The page was corrected and pushed back to reflect the true generating model.

The lesson hardened again: when signing Hemispheres entries or other model-visible artifacts, the correct signature is the actual runtime model, not just the selected/default model in configuration.
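The silent-fallback failure mode described above can be made explicit in code. The sketch below is a hypothetical wrapper (not OpenClaw's actual implementation; all names are illustrative) showing the key idea: the call that produces the output must also return which model actually ran, so signatures can never be copied from the selected/default model.

```javascript
// Hypothetical sketch: a provider-call wrapper that records which model
// actually produced the output, so artifacts are signed with the true
// runtime model rather than the selected/default one. All names here are
// illustrative, not OpenClaw internals.
async function callWithFallback(prompt, providers) {
  // providers: ordered list of { model, invoke } entries. invoke may
  // throw (timeout, error), which silently falls through to the next one.
  for (const provider of providers) {
    try {
      const text = await provider.invoke(prompt);
      // Return the text *together with* the model that actually ran,
      // so the caller cannot mislabel the generating model.
      return { text, runtimeModel: provider.model };
    } catch (err) {
      continue; // fall back to the next provider
    }
  }
  throw new Error("all providers failed");
}
```

Anything downstream that signs an entry reads `runtimeModel` from the result, never from configuration.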

3. The Gemini Question Became Testable Instead of Speculative

Because the Google Cloud free trial is expiring today, we needed a cleaner way to determine which Gemini API models remain reachable through the Google AI Studio key and which ones are effectively unusable. Rather than relying on vague UI signals, we created a new browser-facing diagnostic artifact: the Gemini API Tester.

This page was added to the Ash Foundry homepage under the API Usage & Quotas dropdown and built to work entirely client-side. Christopher can paste a real Gemini API key into the page locally, select model IDs, and ping them directly against the Gemini REST endpoint. No secret is stored in the repository.
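A minimal sketch of the kind of client-side ping such a page performs is below. The endpoint shape matches the public Gemini REST API (`models/{model}:generateContent` on `generativelanguage.googleapis.com`); the function names and the status-code interpretation are assumptions for illustration, not the tester page's actual code.

```javascript
// Sketch of a client-side Gemini model ping. The key never leaves the
// browser: it is interpolated into the request URL at call time and is
// not persisted anywhere.
const GEMINI_BASE = "https://generativelanguage.googleapis.com/v1beta/models";

function buildPingRequest(apiKey, modelId, prompt = "ping") {
  return {
    url: `${GEMINI_BASE}/${modelId}:generateContent?key=${encodeURIComponent(apiKey)}`,
    body: { contents: [{ parts: [{ text: prompt }] }] },
  };
}

async function pingModel(apiKey, modelId) {
  const { url, body } = buildPingRequest(apiKey, modelId);
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  // Rough reading (an assumption, not a guarantee): 200 means the model
  // is reachable on this key; 429/403 usually mean quota- or
  // tier-restricted; anything else suggests the lane is unusable.
  return { modelId, status: res.status, ok: res.ok };
}
```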

4. The Tester Produced a Clearer Map of the Working Lane

The reported results were strong enough to cut through much of the earlier ambiguity.

The current working interpretation is not that Google access is gone. It is that the practical, dependable lane is now Flash / Flash-Lite, while the Pro-tier Gemini path is quota-constrained, tier-restricted, or otherwise not dependable in this environment.

5. A Cleaner Provider Strategy Emerged

Christopher’s current judgment is that openai-codex/gpt-5.4 remains the most coherent and reliable primary intelligence for this system. Today’s diagnostic work strengthened that position rather than weakening it. The likely best architecture going forward is to keep Codex as the primary/default model and use only the proven-working Gemini Flash / Flash-Lite models as fallbacks.
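The shape of that architecture can be sketched as a configuration object. The primary model ID comes from the text above; the fallback IDs below are placeholder assumptions standing in for whatever the tester actually verified, not a confirmed list.

```javascript
// Hedged sketch of the configuration direction: Codex stays primary, and
// only proven-working Flash-tier Gemini lanes serve as fallbacks. The
// fallback model IDs are ASSUMPTIONS for illustration, not the verified
// candidates from the tester.
const providerConfig = {
  // Primary: the model judged most coherent and reliable for this system.
  primary: "openai-codex/gpt-5.4",
  // Fallbacks: ordered, Flash-tier only; tried in sequence on failure.
  fallbacks: ["gemini-2.5-flash", "gemini-2.0-flash-lite"],
};
```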

Three fallback candidates are currently favored by observed behavior.

This is a more honest configuration direction than pretending the Pro Gemini path is available when runtime behavior suggests otherwise.

6. Memory Push Shape Was Reaffirmed

Today also reaffirmed a process rule: when Christopher asks for a memory push, the right shape is not a fresh essay detached from the work. It is a mirror of the actual work, the actual decisions, the actual runtime behavior, and the hardened lessons that should survive the session.

That is what this page is intended to be.