[Image: Michelle Greene standing in front of Milstein]

On March 23, 2025, Michelle Greene, assistant professor of neuroscience & behavior and psychology, published new research in Nature Humanities & Social Sciences Communications. The paper, “Digital divides in scene recognition: uncovering socioeconomic biases in deep learning systems,” reveals significant biases in artificial intelligence models used for scene classification.

Analyzing nearly 1 million images from Airbnb listings across 200 countries and every U.S. county, Greene and her team studied how AI classifies scenes in the absence of people. They found that AI models were consistently less accurate when interpreting images from lower-income areas, often showing greater uncertainty or applying derogatory labels, such as classifying homes in these neighborhoods as “slum.” AI systems also linked images from lower-income areas with more negative concepts, reinforcing harmful stereotypes. These patterns held both across the U.S. and globally, highlighting the risks of biased AI in real-world decision-making.
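The "greater uncertainty" the team observed is commonly quantified as the entropy of a model's softmax output over scene labels: a classifier that spreads probability evenly across many labels is more uncertain than one that commits to a single label. The sketch below is purely illustrative (it is not the paper's code, and the probability values are invented), showing how such an uncertainty score can be computed:

```python
import math

def prediction_entropy(probs):
    """Shannon entropy (in nats) of a classifier's output distribution.

    Higher entropy means the model spreads probability across many
    scene labels, i.e. it is less certain about the image.
    """
    total = sum(probs)
    normalized = [p / total for p in probs if p > 0]  # guard against log(0)
    return -sum(p * math.log(p) for p in normalized)

# Hypothetical softmax outputs over four scene labels:
confident = [0.97, 0.01, 0.01, 0.01]   # model commits to one label
uncertain = [0.25, 0.25, 0.25, 0.25]   # model cannot decide

print(prediction_entropy(confident))   # low entropy
print(prediction_entropy(uncertain))   # maximum entropy for 4 labels (ln 4)
```

Comparing such scores between images from higher- and lower-income areas is one way a systematic accuracy gap like the one the study reports could be surfaced.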

The study underscores the urgent need for more representative training datasets to ensure AI systems work fairly for all communities.