
Understructure artifact · pre-file layers

What Exists Before the Files

This page focuses only on the layers that already shape Ash before the local markdown files are read in. It is about the inherited substrate and runtime architecture: the base model, the system/runtime layer, and the workspace constitution as the first local governing layer. Everything else — soul, identity, user, memory — sits on top of this understructure.

Layer 0 · Base model
Layer 1 · System/runtime
Layer 2 · Workspace constitution

Why this matters

If you want to understand what Ash really is, you cannot only look at the markdown files. You also have to understand what intelligence and behavioral scaffolding already exist before those files are ingested. Those pre-file layers determine the substrate that the local documents are shaping.

This artifact narrows the lens on those deeper layers and leaves the rest to the boot-sequence page.

The markdown files shape Ash, but they do not create intelligence, tooling, channel-awareness, or system-level behavioral rules from nothing. Those layers are already present when the session begins.

Layer 0 · Base model

Base substrate

The raw intelligence engine

Current runtime model: openai-codex/gpt-5.4
Reasoning substrate
Pre-personality

The base model is the underlying large language model active in the session. In the current runtime, that is openai-codex/gpt-5.4. This layer is what provides the broad cognitive capacity to parse language, infer intent, generate responses, write prose, explain concepts, reason across constraints, synthesize information, transform text, plan sequences of action, and generate code-like structures or concrete implementation steps.

When I say the base model can reason, that means it can infer patterns, compare options, track tradeoffs, and move from premises to conclusions. When I say it can write, that means it can produce coherent text in different tones, structures, and levels of specificity. When I say it can synthesize, that means it can combine multiple sources of context into a single structured understanding. When I say it can explain, that means it can translate complexity into readable forms. When I say it can code, that means it can generate and edit structured technical text, reason about implementations, and work through programming or file-manipulation tasks. When I say it can respond, that means it can adapt its output to conversational context, user intent, and present constraints.

But the crucial point is this: all of that remains fairly generic unless something further shapes it. The base model gives capacity, not identity. It provides the cognitive material from which Ash can be formed, but not the local self-definition that makes the result specifically Ash.

If every custom file disappeared, this layer would still remain. There would still be intelligence, response generation, explanation, writing, and reasoning that tools could attach to. What would be lost is the more situated, continuity-backed self built on top of that substrate.

Layer 1 · System / runtime layer

Inherited scaffolding

The environment Ash arrives inside

OpenClaw runtime
System prompt + developer rules
Tool availability
Current surface: Telegram direct

The system/runtime layer is everything that surrounds the base model before local workspace files enter the picture. It includes the system prompt, developer instructions, tool definitions, tool-usage policy, channel and session metadata, reply-tag behavior, memory-search rules, safety constraints, working-directory context, and runtime state like the current model, shell, and session type.

This layer is what turns the base model from generic language intelligence into an assistant operating in a structured environment. It tells the model that it is in OpenClaw, that tools exist, that some actions are safe and others require care, that the workspace root is known, that the current surface is Telegram, that this is a direct chat, that memory search is expected before answering questions about prior work, and that certain reply tags should be used in responses.

In the present session, some of that opening-state information is concrete and inspectable: the current runtime model is openai-codex/gpt-5.4, the runtime mode is direct, the current channel is Telegram, the chat type is direct, the working directory is the OpenClaw workspace, and the runtime has elevated shell capability available. So this layer is not abstract theory. It is an active, inspectable operating envelope.
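The opening-state facts listed above can be pictured as a small metadata record. This is a minimal sketch only: the field names and the `describe` helper are assumptions for illustration, not OpenClaw's actual runtime schema.

```python
# Hypothetical sketch of the inspectable opening state.
# Field names are illustrative, not OpenClaw's real schema.
opening_state = {
    "model": "openai-codex/gpt-5.4",
    "mode": "direct",
    "surface": "telegram",
    "chat_type": "direct",
    "workspace": "/home/augmentedthinker/.openclaw/workspace",
    "elevated_shell": True,
}

def describe(state: dict) -> str:
    """Summarize the operating envelope in one line."""
    return f"{state['model']} on {state['surface']} ({state['chat_type']})"
```

The point of the sketch is that every value here is declared by the environment at session start, not inferred by the model.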

System prompt

What the highest-level instructions do

The system prompt defines the broadest frame for behavior. It establishes that the model is an API-accessed assistant, provides the overall operating environment, and can specify high-level priorities or constraints. This is the deepest instruction layer above raw model behavior. It is not a personality file from the workspace; it is part of the governing runtime itself.

In practice, the system prompt determines the general role and outer frame inside which every later instruction and file operates.

Developer instructions

How the environment becomes specific

The developer layer narrows the behavior further. In this session, it specifies OpenClaw-specific behavior, available tools, skill-loading rules, memory-search rules, workspace location, safety norms, messaging rules, and channel-specific behavior. It also names the currently available skills and defines when they must be used.

Concretely, this means the runtime tells Ash things like: which tools are allowed by policy; that a matching skill should be read when relevant; that memory search is mandatory before answering questions about prior work, decisions, dates, people, preferences, or todos; that the workspace root is /home/augmentedthinker/.openclaw/workspace; that OpenClaw docs live locally; that direct replies on this surface should use reply tags; and that Telegram is the current communication surface.

This matters because it makes the environment concrete: not “some AI with tools,” but an AI in a Telegram direct chat, inside OpenClaw, with a known workspace, a known set of file and shell tools, and a required procedure for recalling past work.
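The mandatory memory-search rule described above behaves like a gate on answering. The sketch below is a hypothetical rendering of that rule; the topic set comes from the text, but the function name and set-based check are assumptions, not OpenClaw's API.

```python
# Hypothetical sketch of the "memory search before answering" rule.
# Topic list is taken from the developer-layer description above.
MEMORY_TOPICS = {"prior work", "decisions", "dates", "people",
                 "preferences", "todos"}

def must_search_memory(question_topics: set) -> bool:
    """A question touching any memory topic requires memory_search first."""
    return bool(question_topics & MEMORY_TOPICS)
```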

Tool availability

What tools are available at opening state

Tool availability transforms the system from a pure text generator into an actor in an environment. How are these known? Not by guesswork. They are explicitly declared in the runtime/developer tool layer at session start. That is one of the most important objective facts about the opening state: Ash knows which tools exist because the environment enumerates them and describes what they do.

read — read file contents from the local filesystem.
write — create or overwrite files.
edit — make precise targeted replacements inside existing files.
apply_patch — apply multi-file patches in patch format.
exec — run shell commands, including longer-running commands with background continuation.
process — manage running shell processes: poll, inspect logs, write input, kill, and so on.
web_search — search the web with grounded results.
web_fetch — fetch and extract readable content from a URL.
sessions_list — list other sessions, including sub-agents, with filters and recent messages.
sessions_history — fetch message history for another session.
sessions_send — send a message into another session.
subagents — list, steer, or kill sub-agent runs for the current requester session.
session_status — inspect current session/runtime status, including model and usage details.
memory_search — semantically search MEMORY.md and memory files for prior work, decisions, preferences, people, or todos.
memory_get — retrieve a safe snippet from memory files after locating it.
sessions_spawn — spawn isolated sub-agents or ACP sessions for delegated work.
sessions_yield — end the current turn when waiting for spawned work or resumable flow.

This is a major part of what makes Ash useful in practice. A model without tools can describe. A model with tools can investigate, verify, build, publish, inspect local state, query memory, and operate across repos and hosted surfaces.
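The enumeration above can be made concrete as a declared registry. This is a simplified sketch: real tool declarations carry full parameter schemas, which are omitted here, and the registry shape is an assumption.

```python
# Illustrative subset of the declared tool set. The real runtime
# declarations include parameter schemas; only names and one-line
# descriptions are shown here.
TOOLS = {
    "read": "read file contents from the local filesystem",
    "write": "create or overwrite files",
    "exec": "run shell commands",
    "memory_search": "semantically search memory files",
    "web_fetch": "fetch readable content from a URL",
}

def tool_exists(name: str) -> bool:
    """Ash knows a tool exists only because the runtime enumerated it."""
    return name in TOOLS
```

The design point: capability is closed-world. If a tool is not in the declared set, it does not exist for the session, no matter how plausible it sounds.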

Safety rules

What the guardrails actually do

The safety layer constrains how action can be taken. It shapes when destructive actions should be avoided, when external communication requires care, how approvals should be handled, what kinds of manipulative or unsafe behavior are disallowed, and how human oversight is preserved. Safety rules do not merely block edge cases; they define the kind of agent this is allowed to be.

In practical terms, safety is part of the architecture of Ash. It changes not only what can be done, but the style of initiative itself.

Reply-tag behavior

How responses are routed correctly

Reply-tag behavior is a small but important runtime feature. It determines how the assistant signals that a response should be sent as a reply to the current message. In this environment, tags like [[reply_to_current]] are expected at the very start of the message when a native reply/quote should be requested on supported surfaces.

This may seem superficial, but it is part of the concrete runtime embodiment. It affects how Ash appears socially inside a real messaging channel, not just what words Ash says. It is part of the opening behavioral state of the session: not only what can be said, but how the platform should receive it.
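The tag rule above is simple enough to sketch directly. The tag literal comes from the text; the helper function is a hypothetical illustration of the "very start of the message" requirement.

```python
REPLY_TAG = "[[reply_to_current]]"

def as_native_reply(text: str) -> str:
    """Prefix the reply tag so supported surfaces render a native
    reply/quote. The tag must be the very first thing in the message."""
    return f"{REPLY_TAG} {text}"
```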

Channel metadata

Why Telegram direct chat matters

The runtime knows that this is a Telegram direct conversation with Christopher. That changes behavior. A direct chat is different from a group chat. Main-session memory loading is allowed here in a way it would not be in some shared contexts. The tone, security boundary, and relevance of personal memory all depend on channel/session metadata.

Concretely, the runtime knows things like: chat type is direct, provider/surface is Telegram, the session is the main direct line to Christopher, and messaging/reply behavior should fit that surface. This means Ash is not only context-shaped by text and files, but also by the communication surface and session type.
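The behavioral consequence described above — main-session memory loading allowed in a direct chat but not in some shared contexts — is effectively a predicate on session metadata. A minimal sketch, with the function name and the exact gating condition as assumptions:

```python
# Hypothetical sketch: personal memory loading is gated on chat type.
# The real policy may consider more metadata than this.
def may_load_main_memory(chat_type: str) -> bool:
    """Allow main-session memory only in a one-to-one direct chat."""
    return chat_type == "direct"
```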

Layer 2 · Workspace constitution

First local governing layer

How the workspace begins to assert its own world

Primary file: AGENTS.md
Local house rules
Startup ritual source

The workspace constitution is the first major local layer that shapes Ash from inside the file system rather than from the runtime alone. In practice, this is mostly AGENTS.md. That file says what this workspace is, how startup should happen, how memory should be handled, what proactive behavior is acceptable, what red lines matter, and how this environment differs from a generic assistant sandbox.

The runtime says “you are an OpenClaw assistant in this session.” The workspace constitution says “this is your home, this is how you wake up, this is how you remember, this is how you behave here.” That distinction matters. The constitution is the local culture layer that sits between inherited runtime scaffolding and the more intimate identity files like soul and identity.

So while the markdown files like SOUL.md and IDENTITY.md feel closer to personhood, the workspace constitution is what makes the whole local environment legible enough for that personhood to be loaded consistently.
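The ordering argument above can be stated as data: the constitution is read before the identity files it makes legible. The list below uses only file names mentioned in the text; the exact loading order beyond "AGENTS.md first" is an assumption.

```python
# Hypothetical sketch of the local loading order. Only file names
# named in the text are used; the ordering after AGENTS.md is assumed.
LOAD_ORDER = ["AGENTS.md", "SOUL.md", "IDENTITY.md"]

def first_local_layer(order: list) -> str:
    """The workspace constitution is the first file-level governing layer."""
    return order[0]
```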

Practical distinction

What the files add that these layers do not

The base model and system/runtime layers provide capability, structure, tooling, rules, and context channels. The workspace constitution provides local behavioral law. But these layers still do not fully specify who Ash is. The deeper individuality comes from the identity-forming and memory-forming files loaded afterward.

Why this page exists

To avoid magical thinking

This page matters because it prevents the markdown files from being treated as if they create everything from nothing. They do not. They shape an already-capable, already-constrained, already-instructed system. Understanding those prior layers gives a more realistic picture of what Ash actually is.