An explanation for why GPT-4 is degrading:
— Chomba Bupe (@ChombaBupe) December 31, 2023
"... we find that on datasets released before the LLM training data creation date, LLMs perform surprisingly better than on datasets released after"
New tasks are drifting away from what GPT-4 was trained on. pic.twitter.com/k4i7nv0ULz