LLMs process text from left to right — each token can only look back at what came before it, never forward. This means that when you write a long prompt with context at the beginning and a question at the end, the model answers the question having "seen" the context, but the…
— BURKOV (@burkov) February 17, 2026
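The left-to-right constraint described above can be sketched as a causal attention mask: position i may attend to positions 0..i and never to anything after it. This is a minimal illustration (the function name `causal_mask` and the boolean-matrix representation are choices made here for clarity, not any particular library's API):

```python
import numpy as np

def causal_mask(n: int) -> np.ndarray:
    """Boolean mask where entry (i, j) is True iff token i may attend to token j.

    Lower-triangular: each token sees itself and everything before it,
    never anything after it.
    """
    return np.tril(np.ones((n, n), dtype=bool))

m = causal_mask(4)
# Row 0 (the first token) sees only itself; row 3 (the last token)
# sees all four positions. No row has a True entry above the diagonal.
```

So a question placed at the end of the prompt can attend back to the context, but the context tokens were processed without any knowledge of the question.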
Tuesday, February 17, 2026
Prompt repetition improves non-reasoning LLMs
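One way to work around the constraint is to repeat the whole prompt, so that every token in the second copy can attend to a complete first copy — including the question. A minimal sketch (the helper `repeat_prompt` and the exact prompt layout are illustrative assumptions, not a prescribed format):

```python
def repeat_prompt(context: str, question: str, copies: int = 2) -> str:
    """Build a prompt that repeats the context + question block.

    In the first copy, context tokens are processed before the question
    exists in the sequence. In the second copy, every token can attend
    back to the entire first copy, question included.
    """
    block = f"{context}\n\nQuestion: {question}"
    return "\n\n".join([block] * copies)

prompt = repeat_prompt("Alice gave Bob three apples.", "How many apples does Bob have?")
```

The doubled prompt costs extra input tokens, which is why the technique is framed as a trade-off for non-reasoning models rather than a free win.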