SLS's Julian Nyarko on Why Large Language Models Like ChatGPT Treat Black and White-Sounding Names Differently

SLS's Julian Nyarko on Why Large Language Models Like ChatGPT Treat Black and White-Sounding Names Differently

4/9/2025

notes

would be nice to see these types of evaluations standardized and pulled into the leader boards

link

https://law.stanford.edu/2024/03/19/slss-julian-nyarko-on-why-large-language-models-like-chatgpt-treat-black-and-white-sounding-names-differently/

summary

Stanford Law School Professor Julian Nyarko discusses his research on how large language models (LLMs) perpetuate racial and cultural biases. His study uses audit designs to identify and measure bias in various scenarios, such as purchasing items and assessing capabilities. The findings show consistent disparities across different LLMs and prompt templates, with significant biases affecting the perceived value and outcomes associated with Black-sounding names. The article explores the technical challenges of addressing these biases and suggests mitigation strategies.

tags

LLMs ꞏ ChatGPT ꞏ Algorithmic bias ꞏ Racial bias ꞏ Gender bias ꞏ Audit design ꞏ AI ethics ꞏ Stanford Law School ꞏ Julian Nyarko