In-group bias (AI Ethics & Fairness; Human-centered AI)
noun phrase
Definition: A bias in which people tend to favor, trust, and evaluate members of their own group more positively than members of other groups. In AI contexts, this can affect annotation, data collection, evaluation, and the interpretation of model outputs [Currarini and Mengel 2016].
Example in context: “Similarly, Giorgi et al. [12] reported that while human annotators show mild in-group bias and demographic variation, persona-based LLMs also display bias but align poorly with human annotation patterns…” [Voutsa et al. 2025]
Synonyms: in-group favoritism
Related terms: out-group bias; out-group homogeneity bias; group attribution bias
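As a minimal, hypothetical illustration (not drawn from the cited studies), in-group bias in annotation data can be quantified as the gap between the average rating annotators give to in-group items and the average rating they give to out-group items; the group labels, ratings, and function name below are invented for this sketch.

```python
# Illustrative sketch: measuring in-group bias in annotation data as the
# difference between mean in-group and mean out-group ratings.
from statistics import mean

def in_group_bias(annotations):
    """annotations: list of (annotator_group, target_group, rating) tuples."""
    in_group = [r for a, t, r in annotations if a == t]
    out_group = [r for a, t, r in annotations if a != t]
    # A positive gap suggests in-group favoritism in the ratings.
    return mean(in_group) - mean(out_group)

# Toy data: annotators rate items authored by in-group or out-group members.
toy = [
    ("A", "A", 4.5), ("A", "B", 3.5),
    ("B", "B", 4.0), ("B", "A", 3.0),
]
print(in_group_bias(toy))  # → 1.0 (in-group items rated higher on average)
```

A gap near zero would indicate no measurable favoritism on this metric; real analyses would also control for item difficulty and annotator-specific rating tendencies.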