A Unifying Human-Centered AI Fairness Framework

Reading time: 5 minutes

📝 Original Info

  • Title: A Unifying Human-Centered AI Fairness Framework
  • ArXiv ID: 2512.06944
  • Date: 2025-12-07
  • Authors: Munshi Mahbubur Rahman, Shimei Pan, James R. Foulds

📝 Abstract

The increasing use of Artificial Intelligence (AI) in critical societal domains has amplified concerns about fairness, particularly regarding unequal treatment across sensitive attributes such as race, gender, and socioeconomic status. While there has been substantial work on ensuring AI fairness, navigating trade-offs between competing notions of fairness as well as predictive accuracy remains challenging, creating barriers to the practical deployment of fair AI systems. To address this, we introduce a unifying human-centered fairness framework that systematically covers eight distinct fairness metrics, formed by combining individual and group fairness, infra-marginal and intersectional assumptions, and outcome-based and equality-of-opportunity (EOO) perspectives. This structure allows stakeholders to align fairness interventions with their values and contextual considerations. The framework uses a consistent and easy-to-understand formulation for all metrics to reduce the learning curve for non-experts. Rather than privileging a single fairness notion, the framework enables stakeholders to assign weights across multiple fairness objectives, reflecting their priorities and facilitating multi-stakeholder compromises. We apply this approach to four real-world datasets: the UCI Adult census dataset for income prediction, the COMPAS dataset for criminal recidivism, the German Credit dataset for credit risk assessment, and the MEPS dataset for healthcare utilization. We show that adjusting weights reveals nuanced trade-offs between different fairness metrics. Finally, through case studies in judicial decision-making and healthcare, we demonstrate how the framework can inform practical and value-sensitive deployment of fair AI systems.
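The eight metrics arise from crossing the three binary design axes named above. A minimal sketch of that combinatorial structure (the label strings are illustrative, not the paper's notation):

```python
from itertools import product

# Three binary axes from the abstract; crossing them yields the
# framework's eight fairness metrics (2 x 2 x 2 = 8).
granularity = ["individual", "group"]                      # level of protection
group_structure = ["infra-marginal", "intersectional"]     # how groups are modeled
perspective = ["outcome-based", "equality-of-opportunity"]

for combo in product(granularity, group_structure, perspective):
    print(" / ".join(combo))  # one line per metric variant
```

Each combination corresponds to one metric in the framework; stakeholders then assign weights across these eight objectives rather than committing to a single one.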

💡 Deep Analysis

Figure 1

📄 Full Content

A Unifying Human-Centered AI Fairness Framework

Munshi Mahbubur Rahman∗, Shimei Pan, James R. Foulds
Department of Information Systems, University of Maryland, Baltimore County, Baltimore, MD, USA
mrahman4@umbc.edu · shimei@umbc.edu · jfoulds@umbc.edu

December 9, 2025

Abstract

The increasing use of Artificial Intelligence (AI) in critical societal domains has amplified concerns about fairness, particularly regarding unequal treatment across sensitive attributes such as race, gender, and socioeconomic status. While there has been substantial work on ensuring AI fairness, navigating trade-offs between competing notions of fairness as well as predictive accuracy remains challenging, which is a barrier to the practical deployment of fair AI systems. To address this, we introduce a unifying human-centered fairness framework that systematically covers eight distinct fairness metrics, formed by combining individual vs. group fairness, infra-marginal vs. intersectional assumptions, and outcome-based vs. equality-of-opportunity (EOO) options, thereby allowing stakeholders to align fairness interventions with their value systems and contextual considerations. The framework uses a consistent and easy-to-understand formulation for all metrics to reduce the learning curve for non-expert stakeholders. Rather than privileging a single fairness notion, our framework enables stakeholders to assign weights across multiple fairness objectives, reflecting their priorities and values, and enabling multi-stakeholder compromises. We apply this approach to four real-world datasets—the UCI Adult census dataset for income prediction, the COMPAS dataset for criminal recidivism in the justice system, the German Credit dataset for credit risk assessment in financial services, and the MEPS dataset for healthcare utilization—and demonstrate how adjusting weights reveals nuanced trade-offs between different fairness metrics. Finally, through two stakeholder-grounded case studies in judicial decision-making and healthcare, we show how the framework can inform practical and value-sensitive deployment of fair AI systems.

∗Corresponding author.

1 Introduction

The widespread deployment of Artificial Intelligence (AI) systems in high-stakes domains—such as healthcare, criminal justice, and financial services—has raised significant concerns about fairness and bias Angwin et al. [2016], Obermeyer et al. [2019], Barocas et al. [2020]. Studies have shown that these systems can inherit, perpetuate, or even exacerbate historical and systemic biases present in their training data or decision-making pipelines, leading to disparate outcomes across demographic groups. For instance, risk assessment tools used in the criminal justice system have been shown to assign higher recidivism risk scores to Black defendants compared to White defendants with similar profiles Angwin et al. [2016]. In healthcare, widely used predictive models have demonstrated racial bias in prioritizing patients for care Obermeyer et al. [2019]. Although there is increasing research focused on strategies to mitigate bias in AI, these approaches have yet to become widely adopted in the practical implementation of AI systems in various industries, governmental bodies, and the public sector AI Now Institute [2023].
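Disparities like those reported for COMPAS are typically quantified with group-level gap metrics. As a point of reference for the outcome-based vs. EOO distinction used throughout the paper, here is a minimal sketch of two standard gaps (demographic parity and equal opportunity); the data and function names are hypothetical, and the paper's exact formulations may differ:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    # Outcome-based group disparity: max difference in positive
    # prediction rates across groups (illustrative definition).
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_pred, y_true, group):
    # EOO-based group disparity: max difference in true positive
    # rates across groups, among truly qualified individuals.
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Hypothetical toy data: binary predictions, labels, and a group id
# (which could encode an intersectional group such as race x gender).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_true = np.array([1, 0, 1, 0, 1, 1, 0, 1])
group  = np.array([0, 0, 1, 1, 0, 1, 1, 0])

print(demographic_parity_gap(y_pred, group))            # 0.5
print(equal_opportunity_gap(y_pred, y_true, group))     # ~0.67
```

The outcome-based gap compares raw positive-prediction rates, while the EOO gap conditions on the true label, which is why the two can disagree on the same set of predictions.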
One significant reason for this limited adoption is the intricate nature of fairness itself. Achieving fairness in AI systems requires reconciling conflicting technical definitions of fairness and the underlying societal values they represent. This challenge arises from inherent conflicts among fairness definitions and from the trade-off, perceived or real, between fairness and predictive performance Hardt et al. [2016], creating a dilemma for developers and organizations striving to design equitable systems. In this work, we address these challenges through a unifying fairness optimization framework that enables stakeholders to navigate and balance conflicting fairness criteria.

The complexity of achieving fairness lies in balancing multiple, often incompatible, fairness metrics and addressing the societal assumptions underlying them Berk et al. [2021]. Identifying fairness metrics that align with different societal values and assumptions is often the starting point for fairness interventions in AI systems Blodgett et al. [2020]. Broadly, AI fairness definitions can be categorized along two dimensions: the granular level of protection and the approach to addressing disparities. Individual fairness emphasizes providing comparable outcomes for individuals with similar qualifications, whereas group fairness emphasizes comparable outcomes across demographic groups defined by sensitive attributes.
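A minimal sketch of the kind of stakeholder-weighted objective such a unifying optimization framework suggests, assuming a simple linear scalarization of an accuracy loss and per-metric fairness gaps (the weights, names, and penalty form here are illustrative assumptions, not the paper's actual loss):

```python
import numpy as np

def combined_objective(acc_loss, fairness_gaps, weights):
    # Linear scalarization: accuracy loss plus weighted fairness gaps.
    # Stakeholders express their priorities by choosing the weights.
    assert len(fairness_gaps) == len(weights)
    return acc_loss + sum(w * g for w, g in zip(weights, fairness_gaps))

# Hypothetical values: one accuracy loss and eight fairness gaps,
# one per metric in the 2 x 2 x 2 taxonomy.
acc_loss = 0.21
gaps = np.random.default_rng(0).uniform(0.0, 0.3, size=8)

# Two hypothetical stakeholders weighting the same gaps differently.
judge_weights    = [1.0, 0.0, 0.5, 0.0, 1.0, 0.0, 0.5, 0.0]
advocate_weights = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]

print(combined_objective(acc_loss, gaps, judge_weights))
print(combined_objective(acc_loss, gaps, advocate_weights))
```

Varying these weights and re-optimizing the model is what exposes the nuanced trade-offs between fairness metrics that the paper examines across the four datasets.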

📸 Image Gallery

  • Dataset distributions: ADULT_distribution.png, COMPAS_all_distr.png, GERMAN_distribution.png, MEPS_distribution.png
  • Accuracy vs. fairness: accuracy_fairness_adult.png, accuracy_fairness_adult_eoo.png, accuracy_fairness_adult_outcome.png, accuracy_fairness_compas.png, accuracy_fairness_compas_eoo.png, accuracy_fairness_compas_outcome.png, accuracy_fairness_german_eoo.png, accuracy_fairness_german_outcome.png, accuracy_fairness_meps_eoo.png, accuracy_fairness_meps_outcome.png
  • Fairness trade-offs: adult_fairness_tradeoff_eoo.png, adult_fairness_tradeoff_outcome.png, adult_fairness_tradeoff_outcome_vs_eoo.png, compas_fairness_tradeoff_eoo.png, compas_fairness_tradeoff_outcome.png, compas_fairness_tradeoff_outcome_vs_eoo.png, german_fairness_tradeoff_eoo.png, german_fairness_tradeoff_outcome.png, german_fairness_tradeoff_outcome_vs_eoo.png, meps_fairness_tradeoff_eoo.png, meps_fairness_tradeoff_outcome.png, meps_fairness_tradeoff_outcome_vs_eoo.png

Reference

This content is AI-processed from open-access arXiv data.
