Bernard

An architecture for grounded memory

Bernard is the architecture that the published work is building toward: a system that forms grounded memory and concepts from lived experience.

The core idea is two complementary predictors operating over a shared embedding space. One faces outward, a world model that learns what things are like each other. Drills and impact drivers cluster together. Stairs and ladders cluster together. This is similarity structure, and current AI already does it well. The other faces inward, an associative memory that learns what things were experienced together, regardless of whether they look alike. Stairs don’t resemble a slip, but one reliably evokes the other because they were experienced within the same window of time.
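The two kinds of structure can be illustrated with a toy sketch. All names and numbers here are illustrative, not from the published work: a handful of items in one shared embedding space, a cosine-similarity check standing in for the outward predictor, and a temporal co-occurrence count standing in for the inward one.

```python
import numpy as np

rng = np.random.default_rng(0)
names = ["drill", "impact_driver", "stairs", "ladder", "slip"]
emb = {n: rng.normal(size=8) for n in names}

# Look-alike items share embedding directions: tools cluster with tools,
# climbing structures with climbing structures.
emb["impact_driver"] = 0.9 * emb["drill"] + 0.1 * emb["impact_driver"]
emb["ladder"] = 0.9 * emb["stairs"] + 0.1 * emb["ladder"]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Inward predictor: a co-occurrence matrix built from one episode in which
# stairs and a slip were experienced in the same window of time.
idx = {n: i for i, n in enumerate(names)}
cooc = np.zeros((len(names), len(names)))
episode = ["stairs", "slip"]
for a in episode:
    for b in episode:
        if a != b:
            cooc[idx[a], idx[b]] += 1.0

# A drill resembles an impact driver far more than it resembles stairs,
# while stairs evoke the slip through co-occurrence, not resemblance.
print(cos(emb["drill"], emb["impact_driver"]), cos(emb["drill"], emb["stairs"]))
print(cooc[idx["stairs"], idx["slip"]])
```

The point of the toy is only that the two signals are independent: similarity lives in the geometry of the embeddings, association lives in the record of what happened together.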

Neither predictor alone produces useful memory. The outward predictor gives you “that’s a drill” — recognition without context. The inward predictor gives you “something about this moment reminds me of Tuesday” — priming without content. When both converge on the same target, you get “the drill that stripped the screw on Tuesday.” Specificity emerges from the intersection.
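Specificity through intersection can be made concrete with a small hypothetical example (the episode names and scores are invented for illustration): each predictor produces a score over stored episodes, and only the episode both agree on stands out when the scores are combined.

```python
import numpy as np

episodes = ["drill_tuesday", "drill_friday", "stairs_monday"]

# Outward predictor: how much each stored episode looks like the current
# percept (a drill). Both drill episodes score high -- recognition alone
# cannot tell them apart.
similarity = np.array([0.9, 0.85, 0.1])

# Inward predictor: how strongly the current context evokes each episode,
# regardless of appearance. Tuesday's cues prime Tuesday's episodes.
evocation = np.array([0.8, 0.1, 0.7])

# Intersection: only where both predictors converge does one episode win.
combined = similarity * evocation
recalled = episodes[int(np.argmax(combined))]
print(recalled)  # prints "drill_tuesday"
```

Multiplying the scores is just one way to take the intersection; the claim in the architecture is about the convergence itself, not about this particular combination rule.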

Concepts form through compression. When the system can’t memorise every individual experience, it extracts what recurs across many experiences. The concept discovery paper demonstrates this on text: a model trained on 10,000 novels discovers narrative functions without supervision, and those concepts transfer to unseen material. The same mechanism applied to an embodied agent’s multimodal experience would extract the functional regularities of its world, with bespoke encoders for each perceptual modality feeding into the shared embedding space.
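A minimal sketch of concepts-as-compression, using a simple k-means loop as a stand-in (my illustration, not the paper's method): many noisy experiences, too many to store individually, are reduced to a few centroids that capture what recurs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two recurring situations in the agent's world, each experienced 200 times
# with perceptual noise: 400 episodes, but only 2 underlying regularities.
prototypes = rng.normal(size=(2, 16))
experiences = np.concatenate(
    [p + 0.1 * rng.normal(size=(200, 16)) for p in prototypes]
)

def kmeans(x, init, steps=20):
    """Compress a set of experience vectors down to a few centroids."""
    centroids = init.copy()
    for _ in range(steps):
        # Assign each experience to its nearest centroid...
        dists = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        # ...then move each centroid to the mean of its assignments.
        centroids = np.stack(
            [x[labels == j].mean(0) for j in range(len(centroids))]
        )
    return centroids

# Initialise from one experience of each situation (a simplification to
# keep the sketch deterministic).
concepts = kmeans(experiences, init=np.stack([experiences[0], experiences[-1]]))
```

The recovered centroids sit close to the original prototypes: the compression has thrown away the episode-level noise and kept the regularity, which is the sense in which a concept is cheaper than the experiences it summarises.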

The entire system operates in meaning space: perception, memory, planning and action are all predictions in the same embedding space.

The published papers validate individual components. PAM establishes that temporal co-occurrence produces faithful associative recall across representational boundaries. AAR demonstrates that this associative recall improves practical retrieval. Concept discovery shows that compression produces transferable abstractions. Integrating these into the full dual-predictor architecture, and testing whether specificity-through-intersection works as predicted, is the current research direction.