Project Description

In response to concerns about algorithmic bias against marginalized communities, counterfactual explanations have emerged as a common approach for providing actionable recourse to individuals who are negatively impacted by algorithmic decisions. Most prior work on actionable recourse assumes a two-stage pipeline in which machine learning (ML) models are trained first and are then treated as black boxes when generating counterfactual explanations. In collaboration with the Child Poverty Action Lab (CPAL), I have developed CounterNet, a highly generalizable neural-network-based learning framework that combines the training of the ML predictive model and the generation of corresponding recourse examples into a single end-to-end (i.e., from input to prediction to explanation) pipeline. To train CounterNet's architecture effectively, I developed novel loss function formulations and provably efficient backpropagation schedules. I am currently working with CPAL to provide actionable recourse to impoverished individuals who have been evicted from Dallas-based housing communities.
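
The minimal sketch below illustrates the end-to-end idea: a shared encoder feeds both a prediction head and a recourse-generation head, and each training step alternates between a prediction objective and counterfactual validity/proximity objectives. The layer sizes, specific loss terms, weights (lam_validity, lam_proximity), and two-stage update schedule are illustrative assumptions based on the description above, not CounterNet's exact formulation.

    # Illustrative sketch only: hyperparameters and loss terms are assumptions,
    # not the actual CounterNet implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CounterNetSketch(nn.Module):
        """Shared encoder with two heads: a predictor and a recourse generator."""
        def __init__(self, in_dim, hidden_dim=64):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
            self.predictor = nn.Linear(hidden_dim, 1)        # binary prediction head
            self.generator = nn.Linear(hidden_dim, in_dim)   # produces a perturbation of x

        def forward(self, x):
            z = self.encoder(x)
            y_hat = torch.sigmoid(self.predictor(z))         # model prediction
            x_cf = x + self.generator(z)                     # counterfactual (recourse) example
            return y_hat, x_cf

    def training_step(model, opt, x, y, lam_validity=1.0, lam_proximity=0.1):
        """One alternating update: prediction loss first, then recourse losses."""
        # Stage 1: update on the prediction objective.
        opt.zero_grad()
        y_hat, _ = model(x)
        pred_loss = F.binary_cross_entropy(y_hat, y)
        pred_loss.backward()
        opt.step()

        # Stage 2: update on counterfactual validity (the counterfactual should
        # flip the prediction) and proximity (stay close to the original input).
        opt.zero_grad()
        y_hat, x_cf = model(x)
        y_cf_hat, _ = model(x_cf)
        validity_loss = F.binary_cross_entropy(y_cf_hat, 1.0 - y_hat.detach())
        proximity_loss = F.mse_loss(x_cf, x)
        (lam_validity * validity_loss + lam_proximity * proximity_loss).backward()
        opt.step()
        return pred_loss.item(), validity_loss.item(), proximity_loss.item()

    # Example usage on synthetic data.
    model = CounterNetSketch(in_dim=10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 10)
    y = torch.randint(0, 2, (32, 1)).float()
    print(training_step(model, opt, x, y))

Splitting each step into a prediction update and a recourse update is one plausible way to keep the two objectives from interfering with each other; the actual backpropagation schedule used in CounterNet may differ.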

Publications