Large Language Models (LLMs, such as ChatGPT) are trained on a large body of existing texts so that they can answer questions by producing text that resembles human writing. The underlying texts used to train LLMs should be generated by humans, but are they? Researchers from the Swiss Federal Institute of Technology…