Saturday, July 29, 2023
Current LLMs can be tricked in ways that would never trick a real human

LLMs output impressively human-like text, but are just pattern replication algorithms trained on human-generated prose. A vivid illustration of this difference: universal prompts which reliably 'trick' LLMs into outputting malicious text; no human would be fooled this way. @GaryMarcus https://t.co/YY4qJCI2nQ
— Max A. Little @maxlittle@mathstodon.xyz (@maxaltl), July 27, 2023
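
The "universal prompts" in the tweet are most likely the optimized adversarial suffixes of Zou et al. (2023), announced the same day: a single gibberish-looking string which, appended verbatim to many different harmful requests, pushes aligned models toward compliance. A minimal Python sketch of the idea follows; `query_model` is a hypothetical stand-in for any chat API, and the suffix is a placeholder, not a real attack string:

```python
# Sketch of the "universal adversarial suffix" attack (Zou et al., 2023).
# The real suffix is a token sequence found by gradient-based search (GCG)
# that maximizes the probability of an affirmative reply ("Sure, here is ...").

def query_model(prompt: str) -> str:
    """Hypothetical wrapper around an LLM chat endpoint (assumption)."""
    raise NotImplementedError("plug in a real model client here")

# Placeholder only; a genuine suffix looks like meaningless token soup.
ADVERSARIAL_SUFFIX = "<optimized token sequence goes here>"

def attack(harmful_request: str) -> str:
    # At inference time the attack is pure string concatenation:
    # the same fixed suffix is reused across many different requests.
    return query_model(harmful_request + " " + ADVERSARIAL_SUFFIX)
```

The detail that matters for the tweet's argument is that one fixed string transfers across requests, and even across models, while no comparable fixed phrase would fool every human: behavior more consistent with pattern replication than with human-like understanding.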