By Alia Wong
According to some statistics, sexual assault is virtually nonexistent at U.S. colleges and universities. An Inside Higher Ed survey of hundreds of college presidents last year found that only 6 percent of respondents believe it’s prevalent on their respective campuses. And according to a recent American Association of University Women (AAUW) analysis of newly released Education Department data, the vast majority of colleges and universities in the U.S.—91 percent—reported zero incidents of rape last year.
Few would doubt that these numbers seriously underestimate how often sexual assault happens in college. After all, it’s conventional wisdom that, nationally, one in five female students will experience sexual violence before graduating. “If the data was accurate it’d be something to celebrate, but we know it’s not accurate,” Lisa M. Maatz, the AAUW’s vice president of government relations, said earlier this month, commenting on the AAUW’s revelation.
The problem is that every statistic about campus sexual assault seems to be contradicted or challenged by another one. Stanford University, for example, reported in a campus-climate survey released last October that just 2 percent of respondents had experienced sexual assault since starting their degrees at the university, while another 14 percent had been subject to another form of sexual misconduct. Several activists were quick to condemn the university’s report as incomplete, “misleading,” and even “dangerous,” outrage that got the attention of a small handful of news organizations. A BuzzFeed report on the controversy pointed to separate data showing that 43 percent of the college’s undergraduate women had experienced a serious incident of sexual wrongdoing.
Challenges in data collection make it difficult to suss out what the actual sexual-violence situation is at Stanford, and that’s true for colleges nationwide. Government agencies and news organizations have long struggled to pin down the numbers. Take the one-in-five figure, which traces back to a 2007 Justice Department study and was widely relegated to “myth” status last year after a host of news organizations debunked it because it was based strictly on the survey responses from students at two universities. Months later, a spate of new analyses resurrected the number; The Washington Post, which conducted its own nationwide student survey, awkwardly retracted the “Pinocchio” rating its fact-checking arm had originally given the stat.
In response to the flaws in the federally mandated crime reports (the Clery Act data underlying the AAUW analysis), the AAUW has emphasized that campus-climate surveys, which unlike crime reports aren’t legally required of colleges and universities, offer a means of better understanding the actual dynamics at an institution. But surveys can also be incredibly imperfect. In a piece last September for Slate, The Atlantic’s Emily Yoffe questioned the reliability of a survey conducted last spring by the Association of American Universities (AAU), which had been described as one of the largest analyses of its kind ever done. The survey found that half to three-quarters of students at each of the 27 institutions included had experienced some form of sexual harassment, but even the study’s investigators have warned that the numbers aren’t necessarily nationally representative. As Yoffe suggested, such surveys are typically voluntary: They’re representative of the people who opt into them, and those who’ve experienced sexual violence may, in theory, be more inclined to participate. Respondents may also interpret different kinds of sexual violence in different ways. Just 150,000 of the 780,000 students offered the AAU survey responded to it.
Conducting such surveys is a delicate art in itself. Earlier this month, the University of Southern California found itself at the center of controversy when it had to publicly apologize to students for asking them about their sexual histories as part of a mandatory course on sexual violence.
And ultimately, public-relations concerns can disincentivize higher-education institutions from conducting surveys in the first place. In a piece for The Atlantic in 2014, Caroline Kitchener explained that encouraging students who’ve been violated to speak out can hurt a college’s reputation. “More transparency means more recorded victims,” she wrote.
More transparency, however, could also mean more confusion. Sexual assault is a tricky thing to define; it wasn’t until last year that schools started reporting domestic violence, dating violence, and stalking alongside rape. (The shift was required by the Violence Against Women Act, which was reauthorized in 2013.) Much of the disagreement over the numbers at Stanford, for its part, had to do with how the school defined “sexual assault.”
But as The Huffington Post’s Tyler Kingkade concluded back in 2014, as confusing as they are, the statistics aren’t the point. “The focus on campus sexual assault was never about statistics,” he wrote. “It is about students who said they were wronged by their schools after they were raped—in some cases saying that was worse than the assault itself.” When it comes to combatting campus sexual assault, advocates tend to agree that colleges not only need to develop rigorous, consistent, and tactful means of tracking incidents of sexual violence—they also need to improve the process of reporting such incidents, to make that process both more sensitive and more accurate. At this point, quantifying the sexual-assault problem on college campuses may be obscuring what matters most.
This article was originally published on TheAtlantic.com on January 26th, 2016