Last September, Manhattan Institute fellow Heather Mac Donald, a longtime foe of police reform, testified before a U.S. congressional committee that the reported “epidemic of racially biased police shootings of black men” is false.
In fact, “if there is a bias in police shootings, it is against white civilians,” she said, citing a recent study released by the prestigious Proceedings of the National Academy of Sciences (PNAS).
Mac Donald’s take on the study was generous. Its authors, University of Maryland psychology professor David Johnson and Michigan State University psychology professor Joseph Cesario, weren’t actually making a point about police bias at all. The study was about identifying the race of police officers involved in fatal shootings and showing whether or not it matched the race of their victims, not shedding light on motive, Johnson told CityLab. But after the study came out, other scientists in the field criticized its methodology, prompting Johnson and Cesario to concede a mistake in the way they characterized the study.
Despite the correction, the study has continued to fuel a long-running debate of great significance as localities grapple with how to reduce disproportionate rates of police violence against African Americans: Is police violence towards African Americans mostly explained by cops’ racial prejudice? The short answer is that it’s difficult to arrive at a scientific conclusion, because the data is lacking.
Princeton University politics professors Jonathan Mummolo and Dean Knox were among the academics who criticized the study and questioned the value of knowing the race of police officers involved in fatal shootings at all. In January, they published a letter in PNAS and an op-ed in The Washington Post stating that the Johnson-Cesario study “was based on a logical fallacy and erroneous statistical reasoning, and sheds no light on whether police violence is racially biased.”
To determine whether racist motivations are fueling police shootings, you would need to know the race of the people killed by police in a given department, and also the race of all the people who police shot, but didn’t kill. Perhaps the most difficult data point is that you would also need the race of all the people police came in contact with but took no action against at all. This cumulative data is called the police encounter rate, and the scientists who have been studying police violence say that it is the most critical yet most elusive data needed to register racial bias.
In explaining why the encounter rates matter in this discussion, Mummolo and Knox offer a thought experiment with an unrealistic but easy-to-follow fact pattern: Let’s say an all-African-American police force encountered 90 black civilians and 10 white civilians in a given week, and among those encounters, the officers shot and killed five African Americans and nine white civilians. Then, imagine a white police force encountered 90 white and 10 black civilians in a week, and also killed nine white people and five black people.
Both departments killed the same number of civilians of each race. However, the percentage of each race’s encounters that ended in a shooting is very different once the encounter rates are considered: The black police force shot 5.6 percent of the black civilians and 90 percent of the white civilians they encountered, while the white police force shot 50 percent of the black civilians and 10 percent of the white civilians they encountered.
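The arithmetic behind the thought experiment can be sketched in a few lines of Python. The counts are the hypothetical ones from the example above, not real data: raw shooting totals are identical across departments, but the per-encounter rates diverge sharply.

```python
# Hypothetical counts from the thought experiment (not real data).
departments = {
    "all-black force": {"encounters": {"black": 90, "white": 10},
                        "shootings":  {"black": 5,  "white": 9}},
    "all-white force": {"encounters": {"black": 10, "white": 90},
                        "shootings":  {"black": 5,  "white": 9}},
}

for name, d in departments.items():
    for race in ("black", "white"):
        # The per-encounter shooting rate is what raw totals alone can't show.
        rate = d["shootings"][race] / d["encounters"][race]
        print(f"{name}: shot {rate:.1%} of {race} civilians encountered")
```

Without the denominators (the encounter counts), the two departments look identical; with them, they look nothing alike. That denominator is exactly the data Mummolo and Knox say is missing.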
Viewed through the lens of the thought experiment, one can see why it’s inaccurate to say there is an anti-white bias or any other kind of bias in police shootings, as Mac Donald testified.
“I’m not happy with the way that [Mac Donald] characterized our study,” said Johnson. “She characterized it as if we gave information about bias on the behalf of officers. We’re not trying to make statements about the likelihood of being shot by police officers if you’re black, and we don’t have the data to do that.”
But he and Cesario made the mistake of writing in the study’s statement of significance that “White officers are not more likely to shoot minority civilians than non-White officers.” In a response to critics published last August, Johnson and Cesario wrote:
We should have written this sentence more carefully. … What we should have written was a sentence about what we did estimate: As the proportion of White officers in a [fatal officer-involved shooting] increased, a person fatally shot was not more likely to be of a racial minority. This was our mistake, and we appreciate the feedback on this point.
While Johnson says their study was not intended to infer racial bias, Mummolo is concerned that leaving the bias question unresolved has consequences, such as leading policy influencers like Mac Donald to make their own incorrect inferences.
“I don’t know what [Johnson’s study] teaches us,” says Mummolo. “It does not teach us that one [racial] group of officers is more or less likely to shoot, and we all seem to agree on that now. They say there’s this absence of a correlation, but that could mean any number of things. Without the other information and the data that are missing, there’s just no way to say what it means.”
Johnson agreed that having the encounter rates is important, but not for the purposes of his study, and he and Cesario are standing by the utility of the analysis, as seen in their reply to Mummolo’s PNAS letter. What the study tells us, if nothing else, said Johnson, is the racial demographics of the police officers involved in fatal shootings, which he says had not previously been compiled in any nationwide study of police violence.
“I want to stress how hard it was to get information about these police officers,” said Johnson. “It took over 1,800 hours requesting information from police, looking at legal cases and legal documents as well as media accounts. We didn’t know any of that before we started on this analysis.”
It’s debatable what simply knowing the race of the officers tells us. In the context of the Black Lives Matter movement, people are concerned with how to eliminate anti-black prejudices, if that’s what is driving cops to be more violent towards black people. And an anti-black bias can come from a cop of any race, including black. According to Phillip A. Goff, president and co-founder of the Center For Policing Equity, the data on officer characteristics are neither unprecedented nor necessary for understanding police violence.
“Nobody who had done responsible analyses of this would be surprised by that,” said Goff, “because as they admit in their paper, black officers are more likely to be patrolling in black neighborhoods. So of course they’re more likely to shoot black people because of proximity. If that is their only argument, then they are saying, ‘We have nothing novel to say.’”
What they all agree on is that there is too little data collected on police violence—the Calvary hill that almost all studies that attempt to address police brutality and racial bias get crucified on. Mummolo said that it is possible that there are ways for academics to get close to police encounter rates, such as by using traffic camera footage in some instances, or using responses from the Police Public Contact Survey. But these would still fall short of the data needed to draw solid conclusions about race and policing.
“The rigor around the science of racism and discrimination is less than it should be, on all sides, and it reduces science to conversations about ideological entrenchments rather than about novel discoveries about the way that the world is shaped,” said Goff. “That makes us all less well-positioned to improve the world as we find it. We should feel badly about that, and we should do better.”
Ride-hail apps like Uber, Via, and Lyft have made transportation more egalitarian in some ways, reducing the chances a taxi will bypass a person of color for the white customer just down the street, or connecting underserved neighborhoods to the surrounding community. But until self-driving vehicles take over, human bias is still a problem.
After studies found that people of color face longer wait times to be matched with a driver—sometimes 35 percent longer than white riders—most ride-hail platforms responded by limiting the information drivers receive about potential riders, and had done so by last year, the study says. For many of the services on these platforms, drivers can no longer see the name, profile picture, or drop-off location of customers before accepting a ride. The hope was that discrimination would decrease.
But, as a recent report shows, bias will find a way.
The report, “When Transparency Fails: Bias and Financial Incentives in Ridesharing Platforms” by Jorge Mejia of Indiana University and Chris Parker of American University, found that people from underrepresented minorities are more than twice as likely to have a ride canceled during non-peak hours as white riders. While it wasn’t as significant a difference, the study found that riders who signal support for the LGBT community are 1.46 times more likely to be canceled on during non-peak hours, and over half as likely during peak hours.
To get at this, the authors requested around 3,200 rides in Washington, D.C., from a central Metro stop and indicated a major airport as the final destination. After a ride was accepted, the driver was able to see a profile photo and name. The researchers used photos of real people taken from a database of faces used for research and AI training. They controlled for similar levels of perceived “attractiveness” by using two websites that rate a photo subject’s attractiveness—one using AI; the other, crowdsourcing. They also assigned the customers names that studies show people associate with a particular race: among them, Allison, Greg, Latoya, and Jamal. To indicate LGBT support, they used an optional rainbow filter that is widely used to indicate affinity for LGBT causes.
Mejia and Parker then observed driver behavior once the ride had been accepted and the rider’s photo and information revealed. They waited three minutes to allow the driver to cancel; if the driver seemed to intend to complete the ride, the researchers canceled the ride and the driver received a cancellation fee. While the authors concluded that the ride-hailing platforms’ decision on when to reveal rider information eliminated bias at the ride-request stage, it seems that it merely shifted the timing of when discrimination strikes.
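As a rough illustration of the kind of comparison an audit study like this produces, here is a minimal Python sketch that turns cancellation counts into a relative cancellation rate. The counts below are invented placeholders for illustration, not figures from the study:

```python
# Minimal sketch of computing a relative cancellation rate from audit counts.
# All counts here are hypothetical placeholders, not the study's data.
def cancellation_rate(cancels: int, rides: int) -> float:
    """Share of accepted rides that the driver later canceled."""
    return cancels / rides

minority_rate = cancellation_rate(cancels=60, rides=800)  # hypothetical
white_rate = cancellation_rate(cancels=30, rides=800)     # hypothetical

# A ratio above 1.0 means minority riders were canceled on more often.
ratio = minority_rate / white_rate
print(f"Minority riders were canceled on {ratio:.2f}x as often")
```

Because cancellations happen after the match, a comparison like this is invisible to the wait-time metrics earlier studies measured, which is why the authors argue the discrimination shifted rather than disappeared.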
Peak timing, however, did seem to moderate bias, particularly with people of color. “For underrepresented minorities, we see that the pricing mechanism wins,” Parker, an assistant professor of information technology and analytics, told CityLab. “If you put yourself in the shoes of someone making a biased decision, they might say, ‘For $15, I’m not going to take this person, but for $30, I’ll take this person.’ That’s where the financial mechanism of peak times with higher prices could be beneficial.”
But timing didn’t matter for LGBT supporters: Drivers were almost as likely to cancel on them whether during off-peak or peak timing, the study found. In this scenario, Parker said, it’s difficult to understand where the bias lies. “Is it really biased against LGBT people, or is it against people who feel so strongly about some social cause that they’re going to talk your ear off about it?” said Parker. “If I put the symbol for Greenpeace over my photo, would it have the same effect? As a driver, am I worried they’re going to talk my ear off about the environment? I don’t think that’s what this case is but we can’t rule it out as a scientific explanation.”
Parker thinks that the estimate of driver bias he and Mejia found for LGBT supporters actually might be lower than the national average. “D.C. has a relatively large number of LGBT communities and supporters,” he said. “If you were to go somewhere in the country with fewer, I would expect this effect to be even larger. Similar arguments can be made along the racial dimension.”
Of course, ride-hail companies insist that their drivers are not employees: They are independent contractors who use the platform to connect them to riders. So holding drivers accountable for their biases is difficult. “But either way, the platform is the one that holds the PR risk,” said Parker. “In general, the platform doesn’t want drivers that will provide a bad ride experience.”
Parker said that his and Mejia’s interest in the subject was sparked by a personal experience: Mejia, who identifies as Hispanic, started paying attention to driver bias a few years ago, when he noticed that his wife, who is white, generally had a much shorter wait time when trying to book a ride than he did. (This was before ride-hail platforms shifted the timing of information given to drivers.) One purpose of the study is to help platforms reflect on the type and timing of information they give to drivers in order to reduce experiences like Mejia’s. “We hope to start a conversation about the rider-driver relationship and emphasize the important role of platform governance,” the study reads.
But the results present a dilemma: If drivers continue to receive information only after they’ve accepted the ride, driver bias could lead to longer wait times in the long run. If a driver cancels after the initial acceptance, the rider’s wait to find a driver is even longer than if the driver never accepted the ride. So, has the experience for the rider improved?
The study suggests that company-gathered data on wait times and cancellation rates across demographics should be made available in order to effectively address the problem. “Most of the previous studies before the change [to a later customer reveal] looked at matching times and quoted wait times,” said Parker. “They weren’t even measuring cancellation times, so we don’t have a great comparison of before and after. We need to get into these companies and say, ‘Let us analyze your data; let us figure out what happened.’ Without getting data from before the change and after the change, we’ll have a hard time figuring out where we are now.”
Parker suggested that ride-hail companies think about how to make better matches. “One way to do it is to keep track of the drivers and move them down the priority list when they start exhibiting biases in any of the dimensions that we care about,” he said. “Another way to do it that’s not as punitive is to give some kind of a star or badge system that says ‘LGBT Friendly.’ Instead of punishing someone for being bad, reward someone for treating everyone the same, for acting in a way that we consider to be a pro-social way.”
Corporations need to take action in cases of persistent bias—and not just because it will affect their bottom lines, Parker says. “It’s important to try to make transportation easier and fairer.”