Neurosymbolic, by construction.
Pure-LLM gameplay is expensive, slow, and goes off the rails after a few minutes. Tokens spent narrating, world state and conversation drifting apart, the bill rising, the experience disintegrating. Story Garden takes the other path.
A small symbolic engine carries every entity's moment-to-moment behavior, cheap and deterministic, free of LLM cost. An LLM authors into that engine at runtime, where its imagination is the point. They meet in MiniScript, the engine's own scripting language. What the model writes runs.
An entity's behavior is a set of actions, each paired with an expression over named considerations, combined with fuzzy and, or, not, nested arbitrarily deep. A consideration is anything that returns a fuzzy number between 0 and 1. Every tick, every entity scores its tree, and the highest branch runs.
A village NPC's whole mind:
{
  "INITIATE_DIALOGUE": "(player_in_interaction_range)",
  "SHARE_MEMORY": "(player_distance_close and (memory_sharing_opportunity or knows_song_fragment) and not already_shared_with_player)",
  "RECONSTRUCT_STORY": "(player_distance_close and all_fragments_shared and not story_already_reconstructed)",
  "OFFER_COMFORT": "(player_distance_close and player_seems_disheartened and not recently_consoled)",
  "WANDER_LOCAL": "(point_1)"
}
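The tick described above can be sketched in a few lines, assuming one common choice of fuzzy semantics (and = min, or = max, not = 1 - x). The real evaluator lives in MiniScript inside the engine; this Python version, with shortened consideration names, only illustrates the mechanism:

```python
import re

def evaluate(expr, scores):
    """Score one consideration expression against a dict of 0..1 leaves,
    using Zadeh fuzzy connectives: and = min, or = max, not = 1 - x."""
    tokens = re.findall(r"\(|\)|\w+", expr)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def parse_or():
        v = parse_and()
        while peek() == "or":
            take()
            v = max(v, parse_and())
        return v

    def parse_and():
        v = parse_not()
        while peek() == "and":
            take()
            v = min(v, parse_not())
        return v

    def parse_not():
        if peek() == "not":
            take()
            return 1.0 - parse_not()
        t = take()
        if t == "(":
            v = parse_or()
            take()  # consume the closing ")"
            return v
        return float(scores[t])  # a leaf consideration

    return parse_or()

def tick(behavior, scores):
    """Score every branch of the tree; the highest one runs."""
    return max(behavior, key=lambda action: evaluate(behavior[action], scores))

# A cut-down tree, names shortened from the NPC above:
npc = {
    "SHARE_MEMORY": "(close and (opportunity or knows_fragment) and not already_shared)",
    "WANDER_LOCAL": "(idle)",
}
scores = {"close": 0.9, "opportunity": 0.8, "knows_fragment": 0.2,
          "already_shared": 0.0, "idle": 0.3}
chosen = tick(npc, scores)  # SHARE_MEMORY: min(0.9, max(0.8, 0.2), 1 - 0.0) = 0.8
```

With these semantics a single low leaf can veto a whole branch, which is why the trees above lean on `and not ...` guards.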
Some considerations are MiniScript functions, computing on position, memory, time, what's in someone's hand. memory_sharing_opportunity is one of them:
// memory_sharing_opportunity, fuzzy predicate, returns 0..1
memory_sharing_opportunity = function(args)
    selfPos = args["entity"]["pos"]
    targetPos = args["target"]["pos"]
    dx = selfPos.x - targetPos.x
    dy = selfPos.y - targetPos.y
    dz = selfPos.z - targetPos.z
    if sqrt(dx*dx + dy*dy + dz*dz) > 10 then return 0
    social = args["entityData"]["social_memory"]
    // no record of this target yet: nothing blocks the share
    if not social["last_interaction_with"].hasIndex(args["target"]["uid"]) then return 1
    last = social["last_interaction_with"][args["target"]["uid"]]
    if WorldTime() - last < 30 then return 0
    return 1
end function
Other considerations are concepts the LLM is asked to score directly: "does this player seem disheartened?", "is this build inspired enough to react to?", "is this offering meaningful for the ritual?". Anything the model can assess and return as a number. No MiniScript at the leaf, just a value between 0 and 1, dropped back in for the next tick. player_seems_disheartened, in the dict above, is one of these. The neurosymbolic seam doesn't run at one boundary; it runs through every leaf.
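One plausible way the two kinds of leaves could meet in a single scores dict, sketched in Python; the function names, leaf names, and data shapes here are illustrative assumptions, not the engine's API:

```python
def gather_scores(symbolic, llm_cache, ctx):
    # Symbolic leaves: cheap functions, recomputed every tick.
    scores = {name: fn(ctx) for name, fn in symbolic.items()}
    # Neural leaves: whatever number the model last dropped back in;
    # a background request refreshes these occasionally, not per tick.
    scores.update(llm_cache)
    return scores

# Hypothetical leaves for the NPC above:
symbolic = {"player_distance_close": lambda ctx: 1.0 if ctx["dist"] < 5 else 0.0}
llm_cache = {"player_seems_disheartened": 0.7}  # the model's last judgment

scores = gather_scores(symbolic, llm_cache, {"dist": 3})
```

Either way, by the time the tree is scored a leaf is just a number; the evaluator never knows which kind it came from.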
The LLM authors at the higher layer too. It composes new behaviors from the existing palette, and writes new actions and considerations as MiniScript when its imagination needs them. Here is what the spell-weaver returned the day a player typed !cast_spell "a small fox that knows where it came from":
// spell-weaver output, hot-loaded into the live simulation
{
"RETURN_HOME": "(at_dusk and not at_home_location)",
"APPROACH_FAMILIAR": "(player_distance_close and reminded_of_origin)",
"LINGER_AT_MEMORY": "(near_planted_memory)",
"WANDER_LOCAL": "(point_1)"
}
// near_planted_memory, a new consideration, written by the model
near_planted_memory = function(args)
    pos = args["entity"]["pos"]
    // no memories planted anywhere yet: score 0
    if not args["globalData"].hasIndex("planted_memories") then return 0
    planted = args["globalData"]["planted_memories"]
    for m in planted
        dx = pos.x - m["pos"].x
        dy = pos.y - m["pos"].y
        dz = pos.z - m["pos"].z
        if sqrt(dx*dx + dy*dy + dz*dz) < 5 then return 1
    end for
    return 0
end function
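The hot-load step itself can be sketched as two assignments, shown here in Python; the registry shape and names are assumptions, not the engine's internals:

```python
considerations = {"point_1": lambda ctx: 0.1}  # the existing palette (illustrative)

def hot_load(entity, new_behavior, new_fns):
    # New leaves join the shared palette; the tree is swapped whole,
    # so the entity scores against it from its next tick on.
    considerations.update(new_fns)
    entity["behavior"] = new_behavior

fox = {"behavior": {}}
hot_load(fox,
         {"LINGER_AT_MEMORY": "(near_planted_memory)", "WANDER_LOCAL": "(point_1)"},
         {"near_planted_memory": lambda ctx: 1.0 if ctx["near_memory"] else 0.0})
```

Nothing about the fox is special after this point: it is one more entity in the registry, and the model that wrote it is no longer involved.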
Old palette, new tree, one new consideration. From its first tick the fox scores against the same loop as the village NPC: every tick, at no further LLM cost, persisting after the chat session is gone.
This is the move the engine is built around. The LLM isn't generating dialogue or narrating over the world. It's rewiring the symbolic engine's rules at runtime, in the engine's own language. The world's behavior shifts because the model edited it. The edits keep running, every tick, long after the model session ends.