This was an interesting, and delightfully short, read.
It seems that for a lot of tasks you can just repeat the input twice and get better performance out of an LLM. Granted, this is shown for pretty old models, but still!
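The trick is simple enough to sketch in a few lines. This is a minimal illustration, not code from the paper: the helper name `repeat_prompt` and the blank-line separator are my own assumptions about how one might duplicate the input before sending it to a model.

```python
def repeat_prompt(question: str, times: int = 2, sep: str = "\n\n") -> str:
    """Duplicate the input text so the model sees it `times` times.

    Hypothetical helper: the paper's exact formatting (separator,
    number of repeats) may differ.
    """
    return sep.join([question] * times)


# The doubled string would then be sent as the prompt to the LLM.
prompt = repeat_prompt("Which word comes earlier alphabetically: cat or car?")
print(prompt)
```

The resulting string simply contains the question twice, separated by a blank line; everything downstream (the model call itself) is unchanged.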