Algorithmic Bias
Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Where Bias Enters the Pipeline
1. Data Collection: unrepresentative or skewed sampling of the population
2. Labeling: annotator prejudice encoded into the ground truth
3. Feature Selection: proxies for protected attributes (e.g., ZIP code standing in for race)
4. Objective Function: optimizing an aggregate metric that ignores subgroup harms
5. Deployment: feedback loops that reinforce the model's own past decisions
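A minimal sketch of stage 1: even when two groups are identical in the population, a collection process that under-samples one of them skews what the model sees. Group names and the 5-to-1 sampling ratio below are hypothetical.

```python
# Hypothetical population: groups "a" and "b" are the same size and equally
# qualified, but the collection pipeline keeps only every 5th "b" record.
population = [("a", 1)] * 50 + [("a", 0)] * 50 + [("b", 1)] * 50 + [("b", 0)] * 50

sample = population[:100] + population[100::5]  # under-samples group "b"

share_b = sum(1 for g, _ in sample if g == "b") / len(sample)
print(f"group b is 50% of the population but {share_b:.0%} of the sample")
```

Everything downstream (labels, features, objective) inherits this skew.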
Demographic Parity
Positive outcome rates are equal across subgroups, regardless of true label.
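A toy computation of the demographic parity difference (group labels and predictions below are illustrative, not from any real system):

```python
def positive_rate(preds):
    """Fraction of instances predicted positive."""
    return sum(preds) / len(preds)

def demographic_parity_diff(preds_a, preds_b):
    """Absolute gap in positive-prediction rates between two groups;
    0.0 means exact demographic parity."""
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

# Group A gets 3/4 positive predictions, group B gets 1/4: a 0.5 parity gap.
print(demographic_parity_diff([1, 1, 1, 0], [1, 0, 0, 0]))  # 0.5
```

Note the definition deliberately ignores true labels, which is why it can conflict with accuracy-based criteria.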
Equalized Odds
True positive and false positive rates are equal across subgroups.
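A sketch of checking equalized odds on binary labels; (0, 0) gaps mean the criterion holds (toy data only):

```python
def tpr_fpr(y_true, y_pred):
    """True-positive and false-positive rates for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    positives = sum(y_true)
    negatives = len(y_true) - positives
    return tp / positives, fp / negatives

def equalized_odds_gaps(y_a, pred_a, y_b, pred_b):
    """(TPR gap, FPR gap) between two groups; (0, 0) means equalized odds."""
    tpr_a, fpr_a = tpr_fpr(y_a, pred_a)
    tpr_b, fpr_b = tpr_fpr(y_b, pred_b)
    return abs(tpr_a - tpr_b), abs(fpr_a - fpr_b)

# Group B is classified perfectly while group A is at chance level.
print(equalized_odds_gaps([1, 1, 0, 0], [1, 0, 1, 0],
                          [1, 1, 0, 0], [1, 1, 0, 0]))  # (0.5, 0.5)
```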
Calibration
Predicted probabilities reflect true likelihood equally for all groups.
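A coarse, single-bin sketch of per-group calibration (real audits bin the probabilities; the scores below are made up):

```python
def calibration_gap(y_true, probs):
    """Mean predicted probability minus observed positive rate;
    near 0 for a well-calibrated group (a coarse, single-bin check)."""
    return sum(probs) / len(probs) - sum(y_true) / len(y_true)

# Identical scores for both groups, but they only match reality for group A:
gap_a = calibration_gap([1, 1, 0, 0], [0.8, 0.8, 0.2, 0.2])  # ~0.0
gap_b = calibration_gap([1, 0, 0, 0], [0.8, 0.8, 0.2, 0.2])  # ~0.25, overconfident
```

The point: a model can be calibrated overall yet systematically over- or under-estimate risk for one group.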
Subgroup AUC
Model discriminative power evaluated separately per demographic group.
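A per-group AUC sketch using the rank-comparison definition of AUC (group labels and scores are illustrative):

```python
def auc(y_true, scores):
    """Probability a random positive outranks a random negative (ties count 0.5)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def subgroup_auc(y_true, scores, groups):
    """AUC computed separately for each demographic group."""
    out = {}
    for g in sorted(set(groups)):
        ys = [y for y, gr in zip(y_true, groups) if gr == g]
        ss = [s for s, gr in zip(scores, groups) if gr == g]
        out[g] = auc(ys, ss)
    return out

# The model ranks group "a" perfectly but group "b" only at chance.
print(subgroup_auc([1, 0, 1, 0],
                   [0.9, 0.1, 0.5, 0.5],
                   ["a", "a", "b", "b"]))  # {'a': 1.0, 'b': 0.5}
```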
Pre-Processing
Modify training data to remove underlying bias before training.
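One standard pre-processing technique is reweighing (Kamiran & Calders): weight each instance so that group membership and label become statistically independent. A minimal sketch:

```python
from collections import Counter

def reweigh(groups, labels):
    """Instance weights w(g, y) = P(g) * P(y) / P(g, y): under these weights,
    group and label are statistically independent (reweighing, in the style
    of Kamiran & Calders)."""
    n = len(labels)
    count_g = Counter(groups)
    count_y = Counter(labels)
    count_gy = Counter(zip(groups, labels))
    return [count_g[g] * count_y[y] / (n * count_gy[(g, y)])
            for g, y in zip(groups, labels)]

# Group "a" is over-represented among positives, so its positives are
# down-weighted and its sole negative is up-weighted.
print(reweigh(["a", "a", "a", "b"], [1, 1, 0, 0]))  # [0.75, 0.75, 1.5, 0.5]
```

Resampling to the same target distribution is the other common variant.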
In-Processing
Modify learning algorithm to penalize discriminatory outcomes.
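A common in-processing pattern is to add a fairness penalty to the training objective. The sketch below uses a demographic-parity penalty; `lam` is a hypothetical tuning knob, and the two-group setup is illustrative:

```python
def penalized_loss(task_loss, preds, groups, lam=1.0):
    """Task loss plus lam times the demographic-parity gap, so the optimizer
    trades predictive accuracy against fairness (lam is a tuning knob)."""
    def rate(g):
        members = [p for p, gr in zip(preds, groups) if gr == g]
        return sum(members) / len(members)
    group_ids = sorted(set(groups))
    gap = abs(rate(group_ids[0]) - rate(group_ids[1]))
    return task_loss + lam * gap

# All of group "a" and none of group "b" predicted positive: maximal penalty.
print(penalized_loss(0.30, [1, 1, 0, 0], ["a", "a", "b", "b"], lam=0.5))  # 0.8
```

In practice the penalty is computed on differentiable surrogates of the rates so gradients flow through it.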
Post-Processing
Adjust model predictions to enforce fairness constraints.
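The simplest post-processing adjustment is a per-group decision threshold chosen to satisfy the target constraint (here, equal positive rates; the scores and cutoffs are made up):

```python
def apply_group_thresholds(scores, groups, thresholds):
    """Binarize raw scores with a per-group cutoff; the cutoffs are chosen
    so the resulting decisions satisfy the fairness constraint."""
    return [1 if s >= thresholds[g] else 0 for s, g in zip(scores, groups)]

scores = [0.9, 0.6, 0.55, 0.3]
groups = ["a", "a", "b", "b"]

# A single 0.7 cutoff accepts 1/2 of group "a" and 0/2 of group "b" ...
print(apply_group_thresholds(scores, groups, {"a": 0.7, "b": 0.7}))  # [1, 0, 0, 0]
# ... lowering "b"'s cutoff to 0.5 equalizes the groups' positive rates.
print(apply_group_thresholds(scores, groups, {"a": 0.7, "b": 0.5}))  # [1, 0, 1, 0]
```

The same mechanism with cutoffs tuned on TPR/FPR instead of raw rates enforces equalized odds.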
Model Cards & Datasheets
Standardized documentation of intended use, training data demographics, and known limitations.
Algorithmic Audits
Independent, third-party evaluation of system performance across intersectional subgroups.
Human-in-the-Loop
Mandatory operator review for high-stakes decisions with override authority.