LLMs output impressively human-like text, but they are just pattern-replication algorithms trained on human-generated prose. A vivid illustration of this difference: universal prompts that reliably 'trick' LLMs into outputting malicious text; no human would be fooled this way. @GaryMarcus https://t.co/YY4qJCI2nQ
— Max A. Little @maxlittle@mathstodon.xyz (@maxaltl) July 27, 2023