SLS's Julian Nyarko on Why Large Language Models Like ChatGPT Treat Black and White-Sounding Names Differently

SLS's Julian Nyarko on Why Large Language Models Like ChatGPT Treat Black and White-Sounding Names Differently
4/9/2025
notes
would be nice to see these types of evaluations standardized and pulled into the leader boards
link
summary
Stanford Law School Professor Julian Nyarko discusses his research on how large language models (LLMs) perpetuate racial and cultural biases. His study uses audit designs to identify and measure bias in various scenarios, such as purchasing items and assessing capabilities. The findings show consistent disparities across different LLMs and prompt templates, with significant biases affecting the perceived value and outcomes associated with Black-sounding names. The article explores the technical challenges of addressing these biases and suggests mitigation strategies.