A practical guide to the AllenNLP Fairness module.
allennlp.fairness aims to make fairness metrics, fairness training tools, and bias mitigation algorithms extremely easy to use and accessible to researchers and practitioners of all levels.
allennlp.fairness empowers everyone in NLP to combat algorithmic bias, that is, the "unjust, unfair, or prejudicial treatment of people related to race, income, sexual orientation, religion, gender, and other characteristics historically associated with discrimination and marginalization, when and where they manifest in algorithmic systems or algorithmically aided decision-making" (Chang et al. 2019). Ultimately, "people who are the most marginalized, people who’d benefit the most from such technology, are also the ones who are more likely to be systematically excluded from this technology" because of algorithmic bias (Chang et al. 2019).
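To give a quick sense of how lightweight the module is meant to be, here is a minimal sketch that scores a toy batch of predictions with the Independence fairness metric from allennlp.fairness. The tensors are made-up illustrative data, and the constructor arguments (the number of prediction classes and of protected-variable labels) reflect one recent version of the API, so treat the exact signatures as assumptions to verify against the allennlp.fairness documentation.

```python
import torch

from allennlp.fairness import Independence

# Toy predictions from a binary classifier and a binary protected attribute
# (e.g., a 0/1 demographic label). Purely illustrative, not real data.
predicted_labels = torch.tensor([0, 1, 1, 0, 1, 1])
protected_variable_labels = torch.tensor([0, 0, 0, 1, 1, 1])

# Independence asks how far the prediction distribution conditioned on each
# protected-variable label drifts from the overall prediction distribution;
# smaller divergences indicate predictions that are closer to statistically
# independent of the protected variable.
independence = Independence(num_classes=2, num_protected_variable_labels=2)

# Metrics follow AllenNLP's usual pattern: accumulate with __call__, then
# read out with get_metric().
independence(predicted_labels, protected_variable_labels)
print(independence.get_metric())  # divergence per protected-variable label
```

The same accumulate-then-read pattern applies to the other fairness metrics in the module, which is what makes them easy to drop into an existing AllenNLP training or evaluation loop.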