Asset Details
Constrained belief updates explain geometric structures in transformer representations
by Riechers, Paul M; Shai, Adam S; Piotrowski, Mateusz; Filan, Daniel
in Bayesian analysis / Constraints / Markov chains / Monolayers / Multilayers / Representations / Statistical inference
2025
Overview
What computational structures emerge in transformers trained on next-token prediction? In this work, we provide evidence that transformers implement constrained Bayesian belief updating -- a parallelized version of partial Bayesian inference shaped by architectural constraints. We integrate the model-agnostic theory of optimal prediction with mechanistic interpretability to analyze transformers trained on a tractable family of hidden Markov models that generate rich geometric patterns in neural activations. Our primary analysis focuses on single-layer transformers, revealing how the first attention layer implements these constrained updates, with extensions to multi-layer architectures demonstrating how subsequent layers refine these representations. We find that attention carries out an algorithm with a natural interpretation in the probability simplex, creating representations with distinctive geometric structure. We show how both the algorithmic behavior and the underlying geometry of these representations can be theoretically predicted in detail -- including the attention pattern, OV-vectors, and embedding vectors -- by modifying the equations for optimal future-token prediction to account for the architectural constraints of attention. Our approach provides a principled lens on how architectural constraints shape the implementation of optimal prediction, revealing why transformers develop specific intermediate geometric structures.
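To make the unconstrained baseline concrete: the optimal predictor of a hidden Markov model tracks a belief distribution over hidden states, updated by Bayes' rule after each observed token. The sketch below shows this standard recursive belief update for a hypothetical two-state HMM with symbol-labeled transition matrices; the matrices are illustrative numbers, not parameters from the paper, and the transformer's constrained, parallelized version discussed in the abstract is not reproduced here.

```python
import numpy as np

def belief_update(belief, T_x):
    """One step of Bayesian belief updating over HMM hidden states.

    belief : current distribution over hidden states, shape (n,)
    T_x    : symbol-labeled matrix, T_x[i, j] = P(next state j, emit x | state i)

    Returns the posterior over hidden states after observing symbol x.
    """
    unnormalized = belief @ T_x          # joint weight of each next state with symbol x
    return unnormalized / unnormalized.sum()  # renormalize (Bayes' rule)

# Hypothetical 2-state, 2-symbol HMM; rows of T[0] + T[1] sum to 1.
T = {
    0: np.array([[0.45, 0.05],
                 [0.05, 0.25]]),
    1: np.array([[0.05, 0.45],
                 [0.25, 0.45]]),
}

# Start from a uniform prior and update on an observed token sequence.
belief = np.array([0.5, 0.5])
for symbol in [0, 1, 1]:
    belief = belief_update(belief, T[symbol])

print(belief)  # a valid probability distribution over the two hidden states
```

The sequence of beliefs visited this way traces out a trajectory in the probability simplex; the paper's geometric structures live in how transformer activations represent (a constrained version of) these belief states.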
Publisher
Cornell University Library, arXiv.org