Posted by z3d in ArtInt

A new study suggests that enthusiasm for generative AI may fade as models increasingly train on their own output instead of on books and databases created by humans.

Large language models (LLMs) and other generative models underpinning products such as ChatGPT, Stable Diffusion and Midjourney are initially trained on human sources -- books, articles, and photographs created without the help of artificial intelligence. But as more people use AI to produce and publish content, that content will gradually pollute the internet, and AI models will begin to train on it.

In a paper posted to the arXiv preprint server, a team of boffins from Cambridge University and the University of Edinburgh found that using model-generated content in training causes irreversible defects in the resulting models, an effect the authors call "model collapse".

Full paper: https://arxiv.org/pdf/2305.17493v2.pdf
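To get a feel for the effect, here is a toy sketch (my own, not code from the paper): fit a simple Gaussian model to some data, then repeatedly re-fit it to samples drawn from the previous fit. Estimation errors compound each generation, so the learned parameters drift away from the original distribution -- a bare-bones analogue of the recursive training setup the paper studies.

```python
import numpy as np

# Toy illustration of recursive training on generated data.
# Generation 0 fits a Gaussian to "human" data; every later
# generation fits only to samples produced by the previous fit.
# Finite-sample estimation errors accumulate, so mu and sigma
# gradually wander away from the true values (0 and 1).
rng = np.random.default_rng(0)

real_data = rng.normal(loc=0.0, scale=1.0, size=200)  # human-generated data
mu, sigma = real_data.mean(), real_data.std()

for generation in range(1, 31):
    synthetic = rng.normal(loc=mu, scale=sigma, size=200)  # model output
    mu, sigma = synthetic.mean(), synthetic.std()          # next model trains on it
    print(f"gen {generation:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")
```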
