Mathematicians are hotly debating whether to withhold their support from “predictive policing,” the use of algorithms to forecast where crimes will occur and who might commit them. “Given the structural racism and brutality in US policing, we do not believe that mathematicians should be collaborating with police departments in this manner. It is simply too easy to create a ‘scientific’ veneer for racism,” says a letter submitted to Notices of the American Mathematical Society on June 15. More than 1,500 researchers have joined the boycott, according to Popular Mechanics.
But a Black mathematician at Rutgers University, Daniel Krashen, argues in the math journal’s October issue that disengagement is the wrong answer. Here’s an excerpt:
Police patrolling will not simply end. If mathematicians, scientists, and others don’t come together to help formulate algorithms about patrolling, we can do little to influence the potential bias that the police can (and likely will) bring. But if the algorithms used by the police are transparent, and placed in a forum of public scientific discussion, we can work together to find potential sources of bias and inequity, and address them. If the algorithms aren’t available, if the police obtain them through businesses that keep them confidential, this conversation can never happen, and this is when society will really suffer.
I don’t think this is the time for academics to walk away from the conversation with the police or with other institutions and companies, but rather now is the time to go deeper, to analyze how particular algorithms are used, to push for maximal transparency, and to try to identify and correct bias where we can.
Krashen writes that predictive policing is outside his fields of expertise (which happen to be noncommutative algebra and arithmetic geometry) but says he’s been familiarizing himself with the literature. “As scientists,” he writes, “we need to engage with the algorithms that do exist, to test them, to critique them, and to work to fix them when needed.”
Algorithms don’t have to be harmful. In some cities they’re being used to “predict when police will go rogue,” according to a July article in Bloomberg Businessweek by Joshua Brustein. Bloomberg Opinion columnist Cathy O’Neil wrote in June that algorithms could be used to figure out how much of someone’s crime risk is related to factors such as poor mental health, and to divert some money from police and prison budgets to directly address those underlying conditions.
“Weapons of math destruction,” as O’Neil called them in her book of that title, probably can’t be banished, but it should be possible to use them for good, not evil.