Large language models that replace human participants can harmfully misportray and flatten identity groups
Abstract: Large language models (LLMs) are increasing in capability and popularity, propelling their application in new domains -- including as replacements for human participants in computational social science, user testing, annotation tasks, and more. In many settings, researchers seek to distribute their surveys to a sample of participants that is representative of the underlying human population of interest. This means that, to be a suitable replacement, LLMs...