What is one limitation of using statistical parity as a fairness metric?
Statistical parity only requires that each group receive positive predictions at the same rate; it says nothing about whether those predictions are correct. A model can therefore satisfy statistical parity while making systematically inaccurate predictions that disadvantage certain groups.
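A minimal sketch of this failure mode, using hypothetical toy data (the group names and values are illustrative): both groups receive positive predictions at the same rate, so statistical parity is satisfied, yet the predictions for one group are entirely wrong.

```python
# Hypothetical labels and predictions for two groups (illustrative only).
# Group A: every prediction matches the true label.
# Group B: same positive-prediction rate, but every prediction is wrong.
group_a = {"y_true": [1, 1, 0, 0], "y_pred": [1, 1, 0, 0]}
group_b = {"y_true": [0, 0, 1, 1], "y_pred": [1, 1, 0, 0]}

def positive_rate(preds):
    """Fraction of instances receiving a positive prediction."""
    return sum(preds) / len(preds)

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Statistical parity holds: both groups get positives at rate 0.5.
assert positive_rate(group_a["y_pred"]) == positive_rate(group_b["y_pred"])

# Yet accuracy diverges completely: 1.0 for group A, 0.0 for group B.
print(accuracy(group_a["y_true"], group_a["y_pred"]))
print(accuracy(group_b["y_true"], group_b["y_pred"]))
```

The parity check passes while every member of group B is misclassified, which is exactly the gap the metric cannot see.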
Why can enforcing fairness reduce an AI model's accuracy?
Fairness constraints restrict the set of models the learner can choose from, making the optimization problem harder. The best model that satisfies the constraint may be worse than the unconstrained optimum, so satisfying fairness can come at a cost to performance on the original predictive task.
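A toy illustration of this tradeoff, with hypothetical scores and labels (all names and values are illustrative): a per-group threshold adjustment that enforces equal positive-prediction rates forces the model to misclassify an individual it previously got right.

```python
# Hypothetical model scores and true labels for two groups (illustrative).
scores_a = [0.9, 0.8, 0.2, 0.1]; labels_a = [1, 1, 0, 0]
scores_b = [0.9, 0.2, 0.1, 0.1]; labels_b = [1, 0, 0, 0]

def predict(scores, thresh):
    """Classify as positive when the score meets the threshold."""
    return [int(s >= thresh) for s in scores]

def rate(preds):
    return sum(preds) / len(preds)

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Unconstrained: threshold 0.5 for both groups is perfectly accurate,
# but positive rates differ (0.5 vs 0.25), violating statistical parity.
pred_a = predict(scores_a, 0.5)
pred_b = predict(scores_b, 0.5)

# Enforce parity by lowering group B's threshold until rates match.
pred_b_fair = predict(scores_b, 0.15)
# Rates are now equal at 0.5, but group B's accuracy drops from 1.0
# to 0.75: one true negative is flipped to a positive prediction.
```

Here the constrained solution is strictly less accurate on group B than the unconstrained one, which is the cost the answer above describes.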
How could social approaches help address unfairness beyond technical methods alone?
Social approaches such as stakeholder participation and anti-discrimination policies can tackle the root causes of unfairness in data collection and system deployment. Technical tools alone often fail to address these systemic biases.