Anthropology for AI
Abstract
This paper introduces several concepts from anthropology for AI, arguing that what we call the AI alignment problem is a social problem. AI's underlying assumptions rest on one culture's understanding of humans – as rational, bounded, utility-maximizing agents – and on approaches to value alignment that operate on computational behaviorism, assuming values can be inferred from observable behaviors and encoded into optimization functions. These approaches, however, overlook the cultural contexts that make behaviors meaningful. Concepts from anthropology can provide a critical eye and expose AI's hidden assumptions. I explain the following concepts and why they provide a corrective lens on AI today: origin story helps us understand how cultural narratives shape which futures seem possible and how power is distributed; cargo cult surfaces how mimicry and simulation may be mistaken for understanding; monoculture describes how homogenization can breed systemic collapse; thick description counters current approaches to "thin" alignment; and finally, liminality frames the present as an opportunity for transformation and for envisioning a new social order. Anthropology offers not technical fixes but a reframing: when developers claim to align AI with human values, we must ask whose values, defined by whom, serving what purposes.
DOI: 10.5671/ca.49.1.1
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.