News from the Office of District Attorney George Gascón



June 12, 2019

Twitter: @GeorgeGascon

CONTACT: ALEX BASTIAN (415) 553-1931   |   MAX SZABO (415) 553-9089




SAN FRANCISCO — Today, District Attorney George Gascón announced the implementation of a new open-source bias mitigation tool that uses artificial intelligence to remove the potential for implicit bias from prosecutors’ charging decisions.  The new tool, which was developed at no cost to the San Francisco District Attorney’s Office (SFDA) by the Stanford Computational Policy Lab, scans police incident reports and automatically eliminates race information and other details that can serve as a proxy for race in order to ensure prosecutors’ charging decisions are not influenced by implicit biases.  


“Lady justice is depicted wearing a blindfold to signify impartiality of the law, but it is blindingly clear that the criminal justice system remains biased when it comes to race,” said District Attorney George Gascón. “This technology will reduce the threat that implicit bias poses to the purity of decisions which have serious ramifications for the accused, and that will help make our system of justice more fair and just.”


Sharad Goel, assistant professor at Stanford University, who is leading the lab’s effort to develop the bias mitigation tool, added: “The Stanford Computational Policy Lab is pleased that the District Attorney’s office is using the tool in its efforts to limit the potential for bias in charging decisions and to reduce unnecessary incarceration.”


Implicit biases are unconsciously held associations about a social group that can result in the attribution of particular qualities to all individuals from that group.  Implicit bias is the product of learned associations and social conditioning.  While researchers have previously concluded that SFDA does not exacerbate preexisting racial disparities in the criminal justice system, DA Gascón felt there was more SFDA could do to enhance fairness, prompting him to request the assistance of the Stanford Computational Policy Lab.


During the bias mitigation review, or phase 1, the artificial intelligence technology reviews the police incident report and automatically redacts information that indicates the race of the parties.  For example, it will redact the names of the officers, witnesses, and suspects, since names often identify the race of an individual.  Officer star numbers will also be removed so the officer cannot be identified.  The tool will additionally eliminate locations, such as specific neighborhoods or districts, since such information can also suggest the race of the individuals involved.  Hair and eye color will also be redacted.  The tool automatically replaces this race-suggestive information with generic labels such as “race,” “name,” and “neighborhood.”
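For illustration only, the redaction step described above can be sketched as simple pattern substitution. This is a minimal, hypothetical example: the names, star-number format, neighborhood list, and label text are invented, and the release does not describe how the Stanford tool actually identifies this information.

```python
import re

# Hypothetical patterns for race-suggestive details named in the release:
# officer names and star numbers, neighborhoods, and hair/eye color.
# The real tool's detection method and label set are not described here.
REDACTIONS = [
    (re.compile(r"\bOfficer [A-Z][a-z]+ \(#\d+\)"), "[OFFICER]"),
    (re.compile(r"\b(?:Bayview|Mission|Tenderloin)\b"), "[NEIGHBORHOOD]"),
    (re.compile(r"\b(?:black|brown|blond|red) hair\b", re.IGNORECASE), "[HAIR COLOR]"),
    (re.compile(r"\b(?:brown|blue|green|hazel) eyes\b", re.IGNORECASE), "[EYE COLOR]"),
]

def redact(report: str) -> str:
    """Return a race-blind version of an incident report narrative."""
    for pattern, label in REDACTIONS:
        report = pattern.sub(label, report)
    return report
```

A report line such as “Officer Smith (#1234) stopped a man with black hair in the Mission” would become “[OFFICER] stopped a man with [HAIR COLOR] in the [NEIGHBORHOOD],” leaving the factual narrative intact while removing details that could serve as a proxy for race.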


After completing the bias mitigation review in phase 1, prosecutors will record a preliminary charging decision.  During the full review, or phase 2, they will have access to the full unredacted incident report as well as other non-race-blind information, such as body camera footage.  While a police narrative often provides the information necessary to make a charging decision, the final decision may change from phase 1 to phase 2 after a review of the totality of the evidence.  If a prosecutor drops or adds charges, they will document what additional evidence led to the final charging decision.  SFDA will collect and review these metrics to identify the volume and types of cases in which charging decisions changed from phase 1 to phase 2, in order to refine the tool and take further steps to remove the potential for implicit bias from charging decisions.
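The two-phase record-keeping described above could take a form along these lines; the structure, field names, and case identifier below are assumptions for illustration, not SFDA's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class ChargingReview:
    """Hypothetical record of one case moving through the two-phase review."""
    case_id: str
    phase1_charges: list                                 # from the redacted report only
    phase2_charges: list = field(default_factory=list)   # after the full, unredacted review
    change_reason: str = ""                              # documented whenever charges change

    def charges_changed(self) -> bool:
        """True if the final decision differs from the race-blind decision."""
        return sorted(self.phase1_charges) != sorted(self.phase2_charges)
```

Aggregating `charges_changed()` across cases is the kind of metric the office says it will review to see where, and why, decisions shift once the unredacted evidence is visible.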


The Stanford Computational Policy Lab is making the tool available to other prosecutors’ offices free of charge.  SFDA’s general felonies teams are scheduled to fully implement the tool beginning July 1, 2019.