Article Details
Retrieved on: 2025-01-03 20:07:23
Summary
The article discusses how large language models such as ChatGPT display social identity biases similar to those of humans, including in-group favoritism and out-group prejudice, and reports that curating the training data can mitigate these biases.
Article found on: www.futurity.org