News Score: Score the News, Sort the News, Rewrite the Headlines

Most leading chatbots routinely exaggerate science findings

ChatGPT is often asked for summaries, but how accurate are they really? It seems so convenient: when you are short of time, you ask ChatGPT or another chatbot to summarise a scientific paper to quickly get the gist of it. But in up to 73 per cent of cases, these large language models (LLMs) produce inaccurate conclusions, a new study by Uwe Peters (Utrecht University) and Benjamin Chin-Yee (Western University and University of Cambridge) finds. Almost 5,000 LLM-generated summaries analyse...

Read more at uu.nl
