// coming later · optional spatial workstation
BumbleAR
Presence in your space.
BumbleAR is coming later. It will ship as an optional layer, a separate workstation and interface from the core Bumblebee harness, that brings the same entity ideas into 3D: an agent with a body rendered in any space, driven over WebSocket by the Spatial Action protocol. Speak, move, emote, and float panels in the scene while the brain stays on your inference stack (including the Bumblebee gateway). You will not need it to run entities in CLI, Telegram, or Discord.
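As a purely illustrative sketch of the idea (the Spatial Action protocol is unreleased, so every field name below is an assumption, not the shipped schema), a runtime might encode an action into a WebSocket frame like this:

```typescript
// Hypothetical sketch only — field names are assumptions, not the real protocol.
type SpatialAction =
  | { kind: "move"; to: [number, number, number] }
  | { kind: "emote"; name: string }
  | { kind: "speak"; text: string };

// Wrap an action in an envelope the runtime could send over a WebSocket.
function encodeAction(sessionId: string, action: SpatialAction): string {
  return JSON.stringify({ v: 1, sessionId, action });
}

const frame = encodeAction("demo-session", { kind: "speak", text: "Hello, space." });
console.log(frame);
```

The point is only the shape of the loop: the brain emits small typed actions, and the shell interprets them as body, voice, and UI in the scene.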
It’s as easy as dropping in your favorite 3D models and animations from places like Mixamo — then letting the same Bumblebee assistant show a body in the scene, not only in chat.
// preview · architecture
What we’re building
The Spatial Action API is the vocabulary agents will use to act in space; the Presence Protocol will carry messages between your runtime and the shell. Work in progress lives in the monorepo under bumbleAR/ — see docs/protocol.md and docs/agent-runtime.md if you want the technical picture ahead of release.
WebXR shell
Planned: a Three.js surface in the browser or on Quest where the entity shows up as a rigged character, with voice and behaviors defined in shell YAML.
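For illustration only (the shell YAML format is unreleased, so every key below is an assumption), a character definition might look something like:

```yaml
# Hypothetical sketch — none of these keys are the shipped schema.
character:
  model: ./models/bee.glb        # e.g. a rigged export from Mixamo
  voice: default
  behaviors:
    idle: ./anims/idle.fbx
    wave: ./anims/wave.fbx
```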
Agent adapter
A Node AgentAdapter will receive session events and reply through a SpatialToolkit: move, look, gesture, emote, stream speech, and open spatial UI panels.
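A minimal sketch of that flow, assuming invented interfaces (AgentAdapter and SpatialToolkit are unreleased, so the shapes below are guesses at the described behavior, not the real API):

```typescript
// Hypothetical sketch only — these interfaces are assumptions, not the shipped API.
interface SessionEvent { type: "user_speech"; text: string }

interface SpatialToolkit {
  look(target: "user" | [number, number, number]): void;
  gesture(name: string): void;
  speak(text: string): void;
}

// An adapter receives session events and replies through the toolkit.
class AgentAdapter {
  constructor(private toolkit: SpatialToolkit) {}
  onEvent(event: SessionEvent): void {
    if (event.type === "user_speech") {
      this.toolkit.look("user");    // face the speaker
      this.toolkit.gesture("nod");  // acknowledge
      this.toolkit.speak(`You said: ${event.text}`);
    }
  }
}

// A recording toolkit stands in for the real shell connection.
const calls: string[] = [];
const mock: SpatialToolkit = {
  look: (t) => calls.push(`look:${JSON.stringify(t)}`),
  gesture: (n) => calls.push(`gesture:${n}`),
  speak: (t) => calls.push(`speak:${t}`),
};
new AgentAdapter(mock).onEvent({ type: "user_speech", text: "hi" });
console.log(calls.join("\n"));
```

Swapping the mock for a real shell connection is the whole integration surface: your agent logic never touches the renderer directly.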
Same brain, new stage
Point the shell's binding.baseUrl at a runtime (ws:// locally, or wss:// when deployed). Docs under bumbleAR/docs/ describe the direction of travel; the shipped workstation may package these pieces for you.
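As a sketch (binding.baseUrl comes from the docs above; the surrounding keys and hostnames are illustrative assumptions), the binding might look like:

```yaml
# Hypothetical shell YAML — only binding.baseUrl is documented; the rest is invented.
binding:
  baseUrl: ws://localhost:8080   # local runtime during development
  # baseUrl: wss://your-deployed-runtime.example/presence   # deployed runtime
```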
When it ships, expect an optional download or package — not a requirement for the main harness — aimed at WebXR-first workflows and builders who want presence beyond chat.