Summary: Will GPT models choke on their own exhaust?

More and more of the text found online will be written by large language models (LLMs).

Within a few generations of models training on model-generated text, the output degrades into garbage, as the learned Gaussian distributions lose their variance and may even collapse into delta functions.
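This collapse can be illustrated with a toy simulation (our sketch, not the paper's actual experiment): repeatedly fit a Gaussian to samples drawn from the previous generation's fit. Because the maximum-likelihood variance estimate is biased low, the expected variance shrinks by a factor of (n-1)/n each generation, so the distribution drifts toward a delta function.

```python
import random
import statistics

def next_generation(data, n):
    # "Train" a model: fit a Gaussian to the data by maximum likelihood,
    # then generate n synthetic samples from that fitted model.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # MLE estimate, biased low
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
n = 200
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "real" data
stds = [statistics.pstdev(data)]
for _ in range(300):
    data = next_generation(data, n)
    stds.append(statistics.pstdev(data))
print(f"empirical std, gen 0: {stds[0]:.3f}, gen 300: {stds[-1]:.3f}")

# The deterministic expectation: variance shrinks by (n-1)/n per generation,
# so after k generations E[var] = ((n-1)/n)**k -> 0, i.e. a delta function.
expected_var = 1.0
for _ in range(1000):
    expected_var *= (n - 1) / n
print(f"expected variance after 1000 generations (n={n}): {expected_var:.2e}")
```

The single-chain simulation is noisy, but the expectation calculation shows why the drift is one-way: every refit multiplies the expected variance by a factor strictly below 1, and nothing ever restores the lost tails.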

What will happen to GPT-{n} once LLMs contribute most of the language found online?

After we published this paper, we noticed that Ted Chiang had already commented on this effect in February, observing that ChatGPT is like a blurry JPEG of all the text on the Internet, and that copies of copies get worse.

But this text has already been used to train GPT-3(.5) and GPT-4, which have since popped up as writing assistants in our editing tools.

Source Article

Will GPT models choke on their own exhaust?

Read the complete article at: www.lightbluetouchpaper.org
