Scalable watermarking for identifying large language model outputs
Main

Large language models (LLMs) are widely adopted tools for synthetic text generation, finding applications in language-based assistants, code generation, writing support and various other domains. As LLMs advance in quality, coherence, coverage and expertise, it can become difficult to distinguish synthetically generated text from human-written text1,2,3. Given the widespread use of LLMs in education, software development and web content generation, identification and attribution of LLM text ...