Wunderstory dev@wunderstory.io

Name a thing. The world makes it real.

A neurosymbolic game engine. The world is hand-crafted; an LLM authors into it while you play, writing new behaviors in the engine's own scripting language. Every change runs.

The world is the language.

For forty years, games have responded to a fixed list of verbs. Mine. Craft. Cast. Trade. The list is the design; the world is set dressing for the list.

We want the thing science fiction kept promising: the chamber that becomes whatever you describe. Not on a flat screen. Not as a chat reply. As a real entity in the world. You say "a small fox that knows where it came from", and a small fox walks into the village. It comes home at dusk to the place you cast it. It pauses near things you have planted. It recognizes you when you carry something it was made of.

Tomorrow you name another, and both belong. The list never ends; no patch required. The things the world can do are not a fixed menu we shipped: they are a language the engine speaks, and that language can grow.

A simulation that listens, and changes.

Neurosymbolic, by construction.

Pure-LLM gameplay is expensive, slow, and goes off the rails after a few minutes. Tokens spent narrating, world state and conversation drifting apart, the bill rising, the experience disintegrating. Story Garden takes the other path.

A small symbolic engine carries every entity's moment-to-moment behavior, cheap and deterministic, free of LLM cost. An LLM authors into that engine at runtime, where its imagination is the point. They meet in MiniScript, the engine's own scripting language. What the model writes runs.

An entity's behavior is a set of actions, each paired with an expression over named considerations, combined with fuzzy and, or, not, nested arbitrarily deep. A consideration is anything that returns a fuzzy number between 0 and 1. Every tick, every entity scores its tree, and the highest branch runs.

A village NPC's whole mind:

{
  "INITIATE_DIALOGUE":  "(player_in_interaction_range)",

  "SHARE_MEMORY":       "(player_distance_close
                            and (memory_sharing_opportunity or knows_song_fragment)
                            and not already_shared_with_player)",

  "RECONSTRUCT_STORY":  "(player_distance_close
                            and all_fragments_shared
                            and not story_already_reconstructed)",

  "OFFER_COMFORT":      "(player_distance_close
                            and player_seems_disheartened
                            and not recently_consoled)",

  "WANDER_LOCAL":       "(point_1)"
}
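The scoring loop above can be sketched in a few lines. This is illustrative Python, not the engine's MiniScript, and it assumes the common min/max/complement fuzzy operators and made-up consideration values for one tick; the real engine's operator definitions may differ.

```python
# Illustrative per-tick scoring: fuzzy and = min, or = max, not = 1 - x
# (an assumption; the engine may define these differently).

def fuzzy_and(*xs): return min(xs)
def fuzzy_or(*xs): return max(xs)
def fuzzy_not(x): return 1.0 - x

def score_tree(tree, c):
    """tree maps each action to a scoring function over consideration values c;
    the highest-scoring action wins the tick."""
    scores = {action: expr(c) for action, expr in tree.items()}
    return max(scores, key=scores.get)

# Hypothetical consideration values for one tick, each in 0..1.
c = {
    "player_distance_close": 0.9,
    "memory_sharing_opportunity": 1.0,
    "knows_song_fragment": 0.0,
    "already_shared_with_player": 0.0,
    "point_1": 0.1,
}

tree = {
    "SHARE_MEMORY": lambda c: fuzzy_and(
        c["player_distance_close"],
        fuzzy_or(c["memory_sharing_opportunity"], c["knows_song_fragment"]),
        fuzzy_not(c["already_shared_with_player"])),
    "WANDER_LOCAL": lambda c: c["point_1"],
}

print(score_tree(tree, c))  # SHARE_MEMORY wins this tick with 0.9
```

The point of the min/max combination is that a single near-zero leaf vetoes a whole and-branch, while an or-branch only needs one strong leaf.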

Some considerations are MiniScript functions, computing on position, memory, time, what's in someone's hand. memory_sharing_opportunity is one of them:

// memory_sharing_opportunity, fuzzy predicate, returns 0..1
memory_sharing_opportunity = function(args)
  selfPos = args["entity"]["pos"]
  targetPos = args["target"]["pos"]
  dx = selfPos.x - targetPos.x
  dy = selfPos.y - targetPos.y
  dz = selfPos.z - targetPos.z
  // too far apart to share anything
  if sqrt(dx*dx + dy*dy + dz*dz) > 10 then return 0

  social = args["entityData"]["social_memory"]
  interactions = social["last_interaction_with"]
  uid = args["target"]["uid"]
  // never interacted before: nothing to cool down from
  if not interactions.hasIndex(uid) then return 1
  // interacted too recently
  if WorldTime() - interactions[uid] < 30 then return 0

  return 1
end function

Other considerations are concepts the LLM is asked to score directly: "does this player seem disheartened?", "is this build inspired enough to react to?", "is this offering meaningful for the ritual?". Anything the model can assess and return as a number. No MiniScript at the leaf, just a value between 0 and 1, dropped back in for the next tick. player_seems_disheartened, in the dict above, is one of these. The neurosymbolic seam doesn't run at one boundary; it runs through every leaf.
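One way such a leaf can work is to cache the model's judgment as a plain number that the symbolic tick then reads for free. The sketch below is hypothetical Python, not the engine's API: `ask_llm`, the cache shape, and the prompt wording are all stand-ins for illustration.

```python
# Hypothetical LLM-scored leaf: the model is asked once, its answer is
# clamped to 0..1 and cached, and every later tick reads the cached
# number at zero cost. ask_llm is a stand-in for a real model call.

cache = {}

def ask_llm(prompt):
    # Stand-in for the real model call; pretend it judged 0.8.
    return 0.8

def llm_consideration(name, prompt):
    """Score a concept once, then serve the cached value each tick."""
    if name not in cache:
        cache[name] = max(0.0, min(1.0, ask_llm(prompt)))
    return cache[name]

score = llm_consideration(
    "player_seems_disheartened",
    "On a scale of 0 to 1, does this player seem disheartened?")
```

In a live engine the cache entry would be refreshed or invalidated as the world changes; the essential move is that the leaf is just a number by the time the tree is scored.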

The LLM authors at the higher layer too. It composes new behaviors from the existing palette, and writes new actions and considerations as MiniScript when its imagination needs them. Here is what the spell-weaver returned the day a player typed !cast_spell "a small fox that knows where it came from":

// spell-weaver output, hot-loaded into the live simulation
{
  "RETURN_HOME":        "(at_dusk and not at_home_location)",
  "APPROACH_FAMILIAR":  "(player_distance_close and reminded_of_origin)",
  "LINGER_AT_MEMORY":   "(near_planted_memory)",
  "WANDER_LOCAL":       "(point_1)"
}

// near_planted_memory, a new consideration, written by the model
near_planted_memory = function(args)
  pos = args["entity"]["pos"]
  // nothing has been planted anywhere yet
  if not args["globalData"].hasIndex("planted_memories") then return 0
  planted = args["globalData"]["planted_memories"]
  for m in planted
    dx = pos.x - m["pos"].x
    dy = pos.y - m["pos"].y
    dz = pos.z - m["pos"].z
    // within 5 units of a planted memory
    if sqrt(dx*dx + dy*dy + dz*dz) < 5 then return 1
  end for
  return 0
end function

Old palette, new tree, one new consideration. From its first tick the fox scores against the same loop as the village NPC, running every tick, costing nothing afterwards, persisting after the chat session is gone.

This is the move the engine is built around. The LLM isn't generating dialogue or narrating over the world. It's rewiring the symbolic engine's rules at runtime, in the engine's own language. The world's behavior shifts because the model edited it. The edits keep running, every tick, long after the model session ends.
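The hot-load step itself can be pictured as two shared tables updated at runtime. This is a minimal Python sketch with hypothetical names (`considerations`, `behavior_trees`, `hot_load`), not the engine's actual structures: the model's output is installed, and the entity simply scores against the updated tables on its next tick.

```python
# Hypothetical hot-load: install a model-authored tree and any new
# consideration leaves it references into the live simulation's tables.

considerations = {}   # name -> callable, shared by the whole simulation
behavior_trees = {}   # entity uid -> action/expression table

def hot_load(uid, tree, new_considerations):
    """Register new leaves first, then swap in the entity's tree, so the
    tree never references a consideration the engine cannot score."""
    considerations.update(new_considerations)
    behavior_trees[uid] = tree

hot_load(
    "fox_01",
    {"RETURN_HOME": "(at_dusk and not at_home_location)",
     "LINGER_AT_MEMORY": "(near_planted_memory)"},
    {"near_planted_memory": lambda args: 0.0})
```

Ordering matters: the new leaf is registered before the tree that uses it, so there is no tick on which the tree mentions an unknown consideration.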

What it makes possible.

A few of the worlds we want to see.

Worlds that learn new verbs. A player invents a fishing rod. The model writes fish, near_water, hunger_for_protein. Every villager who wants to fish, can.

Stories that respond to what you actually did. The simulation remembers what you carried, where you stood, who saw it. A spell cast a week later can read the place: every memory is data, every consideration can read it.

NPCs who become characters. A model gives a fresh creature a behavior tree, a persona, a home, and origin memories. From its first tick it is part of the simulation, scored against the same fuzzy logic as everyone else.

Worlds you can grow with players. What the model composes at runtime is real code in a real engine: inspectable, kept, edited, shipped. Nothing vanishes when the chat session ends.

Magic Pet, the first proof.

A hand-built voxel village. Three magic verbs from spawn. Every system verified end-to-end against a real LLM.

i.

!cast_spell <wish>

Summon. make something new.

The engine aggregates everything you carry: materials, memory fragments, planted memories nearby, your typed intention. The model returns a fresh entity, with a persona, memories, a home, and a tree composed live.

ii.

!pray <question>

Communion. ask the world.

A whisper. The world-spirit answers in one short, in-character line: a voice through the air, no spawn, no item.

iii.

!plant <item>

Anchor. leave a memory.

Press one of your inventory items into the ground. It stays. Future spells cast nearby can draw on what you planted, and the place itself begins to remember.

Most "AI in games" today is text generation pasted over the screen: expensive narration, drifting away from world state, racking up tokens for an experience that disintegrates as it runs. We took the other path. An LLM that fires only where its imagination is the point, and that writes in the engine's own language when it does. Less generative narrator, more collaborator. The result, we hope, is a different shape of game. One where the question stops being "what recipe should I follow?" and becomes:

What story should you tell, and how should you tell it?

Subscribe for early access when Story Garden ships. To play with it sooner, write us at dev@wunderstory.io.