I'm very excited about our vision for "mech interp" of CoT:
Study reasoning steps and their connections - analogous to activations
Don't just read it: study attn, causally intervene, and, crucially, resampling - study the distn over CoTs, not just this one
There's lots to do! https://t.co/HCYOWBXoak
— Neel Nanda (@NeelNanda5) June 26, 2025
You might want to read through the whole thread. And here's the paper at the center of it:
Paul C. Bogdan, Uzay Macar, Neel Nanda, and Arthur Conmy, "Thought Anchors: Which LLM Reasoning Steps Matter?" arXiv:2506.19143 [cs.LG], https://doi.org/10.48550/arXiv.2506.19143
Abstract: Reasoning large language models have recently achieved state-of-the-art performance in many fields. However, their long-form chain-of-thought reasoning creates interpretability challenges as each generated token depends on all previous ones, making the computation harder to decompose. We argue that analyzing reasoning traces at the sentence level is a promising approach to understanding reasoning processes. We present three complementary attribution methods: (1) a black-box method measuring each sentence's counterfactual importance by comparing final answers across 100 rollouts conditioned on the model generating that sentence or one with a different meaning; (2) a white-box method of aggregating attention patterns between pairs of sentences, which identified "broadcasting" sentences that receive disproportionate attention from all future sentences via "receiver" attention heads; (3) a causal attribution method measuring logical connections between sentences by suppressing attention toward one sentence and measuring the effect on each future sentence's tokens. Each method provides evidence for the existence of thought anchors, reasoning steps that have outsized importance and that disproportionately influence the subsequent reasoning process. These thought anchors are typically planning or backtracking sentences. We provide an open-source tool for visualizing the outputs of our methods, and present a case study showing converging patterns across methods that map how a model performs multi-step reasoning. The consistency across methods demonstrates the potential of sentence-level analysis for a deeper understanding of reasoning models.
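To make method (1) concrete, here is a minimal sketch of the resampling idea in Python. Everything below is a hypothetical stand-in rather than the authors' code: `generate_rollout`, `extract_answer`, and `similar` are assumed helpers, and the score is a simplified proxy for the paper's comparison of full answer distributions across roughly 100 rollouts.

```python
# Minimal sketch of the black-box method (1): estimate a sentence's
# counterfactual importance by resampling rollouts from the same prefix.
# All helpers passed in here (generate_rollout, extract_answer, similar)
# are hypothetical stand-ins, not the authors' code.
from typing import Callable, List


def counterfactual_importance(
    prefix_sentences: List[str],               # CoT sentences before the one under study
    target_sentence: str,                      # the sentence whose importance we measure
    generate_rollout: Callable[[str], str],    # samples a completion from the prefix
    extract_answer: Callable[[str], str],      # pulls the final answer from a rollout
    similar: Callable[[str, str], bool],       # semantic-similarity check between sentences
    n_rollouts: int = 100,
) -> float:
    """Compare final answers between rollouts that regenerate (something like)
    the target sentence and rollouts that replace it with a different thought."""
    prefix = " ".join(prefix_sentences)
    kept, replaced = [], []
    for _ in range(n_rollouts):
        rollout = generate_rollout(prefix)
        first_sentence = rollout.split(". ")[0]  # crude sentence split, fine for a sketch
        answer = extract_answer(rollout)
        if similar(first_sentence, target_sentence):
            kept.append(answer)       # the model produced the same thought anyway
        else:
            replaced.append(answer)   # the rollout took a genuinely different step
    if not kept or not replaced:
        return 0.0
    # Proxy score: how much does the rate of the modal "kept" answer drop when
    # the sentence changes? The paper compares full answer distributions.
    modal = max(set(kept), key=kept.count)
    return kept.count(modal) / len(kept) - replaced.count(modal) / len(replaced)
```

The key design point, echoed in the tweet above, is that importance is defined over the distribution of chains of thought: a sentence matters to the extent that rollouts which replace it with a semantically different step tend to end at different answers.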
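Method (2) reduces to bookkeeping over the attention matrix. The sketch below is again an illustration with assumed inputs rather than the paper's implementation: it averages token-level attention within sentence spans, then scores each sentence by how much attention it receives from everything that comes after it, the signature of a "broadcasting" sentence.

```python
# Sketch of the white-box method (2): collapse token-level attention into a
# sentence-by-sentence matrix, then score how much attention each sentence
# receives from all later sentences. The inputs are assumptions:
# `attn` is one head's (num_tokens x num_tokens) attention matrix, and
# `sent_spans` maps each sentence to its token index range.
import numpy as np


def sentence_attention_matrix(attn: np.ndarray, sent_spans: list[tuple[int, int]]) -> np.ndarray:
    """Mean attention from sentence i's tokens to sentence j's tokens."""
    S = len(sent_spans)
    M = np.zeros((S, S))
    for i, (si, ei) in enumerate(sent_spans):
        for j, (sj, ej) in enumerate(sent_spans):
            M[i, j] = attn[si:ei, sj:ej].mean()
    return M


def receiver_scores(M: np.ndarray) -> np.ndarray:
    """For each sentence j, the mean attention it receives from later sentences.
    High scorers are candidates for 'broadcasting' sentences."""
    S = M.shape[0]
    return np.array([M[j + 1:, j].mean() if j + 1 < S else 0.0 for j in range(S)])
```

As I read the abstract, this aggregation is done per attention head, which is how the authors can single out "receiver" heads that consistently route attention toward a few sentences.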