What is one limitation of using statistical parity as a fairness metric?
Statistical parity only considers the rate of positive predictions each group receives, not whether those predictions are correct. A model could therefore satisfy statistical parity while still making systematically inaccurate predictions that disadvantage certain groups.
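A minimal sketch can make this concrete. The toy data below is invented for illustration (the groups, outcomes, and predictions are all assumptions): both groups receive positive predictions at the same rate, so statistical parity holds, yet the model is perfectly accurate for one group and completely wrong for the other.

```python
# Hypothetical toy data (assumed for illustration): two groups of four people.
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [1, 1, 0, 0, 1, 0, 1, 0]   # actual outcomes
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]   # model predictions

def positive_rate(g):
    """Fraction of group g that receives a positive prediction."""
    preds = [p for grp, p in zip(group, y_pred) if grp == g]
    return sum(preds) / len(preds)

def accuracy(g):
    """Fraction of group g whose prediction matches the true outcome."""
    pairs = [(t, p) for grp, t, p in zip(group, y_true, y_pred) if grp == g]
    return sum(t == p for t, p in pairs) / len(pairs)

# Statistical parity holds: both groups get positives at the same rate.
print(positive_rate("A"), positive_rate("B"))  # 0.5 0.5
# ...but every single prediction for group B is wrong.
print(accuracy("A"), accuracy("B"))  # 1.0 0.0
```

The parity check passes even though group B bears all of the model's errors, which is exactly the limitation described above.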
Why can enforcing fairness reduce an AI model's accuracy?
Fairness constraints restrict the set of models the training process is allowed to choose from. If the unconstrained accuracy-maximizing model violates the fairness criterion, enforcing the criterion forces the model away from that optimum, so some predictive accuracy is traded for fairness. This tension is especially pronounced when base rates differ across groups, since a single model then cannot simultaneously equalize outcomes and match each group's data perfectly.
How could social approaches help address unfairness beyond technical methods alone?
Technical methods can only adjust a model's behavior; they cannot fix the conditions that produce biased data or harmful deployments. Social approaches target those root causes: involving affected communities and diverse stakeholders in system design, establishing regulation, auditing, and accountability processes, and addressing the upstream inequities reflected in training data. Combined with technical methods, these approaches address unfairness at its source rather than only at the prediction stage.