Go to Twitter for the rest of the tweet stream.

We prompt GPT-{2|J} with subject-relation queries (“Beats Music is owned by”) from CounterFact (@mengk20 @davidbau) and intervene on attention edges (similar to @hmohebbi75) to analyze how information is aggregated across layers and positions to predict the attribute (“Apple”)
— Mor Geva (@megamor2) May 1, 2023

Here's the abstract of the article:
Transformer-based language models (LMs) are known to capture factual knowledge in their parameters. While previous work looked into where factual associations are stored, little is known about how they are retrieved internally during inference. We investigate this question through the lens of information flow. Given a subject-relation query, we study how the model aggregates information about the subject and relation to predict the correct attribute. With interventions on attention edges, we first identify two critical points where information propagates to the prediction: one from the relation positions followed by another from the subject positions. Next, by analyzing the information at these points, we unveil a three-step internal mechanism for attribute extraction. First, the representation at the last-subject position goes through an enrichment process, driven by the early MLP sublayers, to encode many subject-related attributes. Second, information from the relation propagates to the prediction. Third, the prediction representation "queries" the enriched subject to extract the attribute. Perhaps surprisingly, this extraction is typically done via attention heads, which often encode subject-attribute mappings in their parameters. Overall, our findings introduce a comprehensive view of how factual associations are stored and extracted internally in LMs, facilitating future research on knowledge localization and editing.
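The intervention on attention edges amounts to "knocking out" the flow of information from one position to another: the pre-softmax attention score for a chosen (query position, key position) pair is forced to negative infinity, so that edge carries zero weight after the softmax. Below is a minimal plain-PyTorch sketch of this idea, not the authors' implementation; the function and argument names (`attention_with_knockout`, `block_edges`) are illustrative, and a real experiment would apply the same masking inside GPT-2/GPT-J attention layers.

import torch

def attention_with_knockout(q, k, v, block_edges=None):
    """Causal scaled dot-product attention with selected edges cut.

    q, k, v: (seq_len, d) tensors; block_edges: list of (query_pos, key_pos)
    pairs whose attention weight is forced to zero by setting the score to -inf.
    """
    d = q.size(-1)
    scores = q @ k.T / d ** 0.5                           # (seq_len, seq_len)
    # Causal mask: each position attends only to itself and earlier positions.
    causal = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(causal, float("-inf"))
    if block_edges:
        for q_pos, k_pos in block_edges:
            scores[q_pos, k_pos] = float("-inf")          # knock out this edge
    weights = scores.softmax(dim=-1)
    return weights @ v, weights

# Toy usage: cut the edges from the last position to the "subject" positions
# (here, positions 0-1 of a 5-token sequence) and see how much the
# last-position output changes relative to the unblocked run.
torch.manual_seed(0)
q = k = v = torch.randn(5, 16)
out_full, _ = attention_with_knockout(q, k, v)
out_cut, _ = attention_with_knockout(q, k, v, block_edges=[(4, 0), (4, 1)])
print((out_full[4] - out_cut[4]).norm())

Measuring how the model's prediction degrades when such edges are blocked at different layers is what locates the two critical propagation points described in the abstract.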