On August 15th, I drove four hours to Sunrise, Florida, to attend a meeting of the Marjory Stoneman Douglas High School Public Safety Commission. On the agenda that afternoon was a discussion of the Florida Schools Safety Portal (FSSP), originally introduced as a massive centralized database that would integrate student data from multiple state agencies, anonymous tip platform data, mental health data, and social media posts to identify students who may be on the path to committing mass school violence. I had just completed a project on this policy during a summer fellowship at the Aspen Tech Policy Hub, and I knew both the hope and the limits of this type of technology. I am happy to report that the current version of the safety portal has more limited functionality, working as a dashboard that lets authorized members of school-based threat assessment teams view data from systems they already have access to. Even so, the optimism I encountered in the original presentation makes me worry about the faith we are placing in technology to solve very human problems.
As I listened to the FSSP presentation and the discussion that followed, it was evident that the commission and state agencies, while well-intentioned, do not have the technological literacy to understand what it would take to build the FSSP as initially envisioned. More alarmingly, they do not understand its potentially catastrophic social implications. The “details” the presenters shared consisted of infographics suggesting that bringing enormous amounts of data into one place would quickly identify and stop students who may pose a threat to school safety. This is a hopeful but ultimately simplistic view of how technology works, and it speaks to policymakers’ misconceptions about using data to stop school shootings.
In the three minutes I had to speak, I shared as much as I could of the research I conducted for my fellowship. The facts do not offer a lot of hope for a big data solution to school violence.
First, little to no research has been conducted on using data and predictive analytics to stop school shootings. A data-driven approach is one of the least common categories of technology used for school safety. Policy should be informed by evidence-based research; when it is not, lawmakers are merely experimenting.
Second, it is nearly impossible to accurately predict school shootings because data on these events is so sparse. Any statistical model built on so few data points would be neither reliable nor valid, and the probability of mis-flagging an innocent student as a potential shooter would be high.
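To see why, consider the base rates. The sketch below uses entirely hypothetical numbers: the enrollment figure is only a rough order of magnitude, and the accuracy and threat counts are invented for illustration. Even so, it shows how a screening model generously assumed to be 99% accurate collapses when the event it predicts is vanishingly rare:

```python
# Base-rate arithmetic for a rare-event screening model.
# All numbers are hypothetical, chosen only to illustrate the math.

florida_k12_students = 2_800_000  # rough order of magnitude, not an official figure
true_threats = 10                 # hypothetical count of genuine threats

sensitivity = 0.99  # P(flagged | genuine threat)  -- generously assumed
specificity = 0.99  # P(not flagged | no threat)   -- generously assumed

non_threats = florida_k12_students - true_threats
true_positives = sensitivity * true_threats
false_positives = (1 - specificity) * non_threats

total_flagged = true_positives + false_positives
precision = true_positives / total_flagged

print(f"Students flagged: {total_flagged:,.0f}")                     # ~28,010
print(f"Flagged students who are genuine threats: {precision:.3%}")  # ~0.035%
```

Under these assumptions, more than 99.9% of flagged students would be false alarms, and any realistic drop in the model’s accuracy only makes that ratio worse.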
Third, the data sources feeding the FSSP are biased. They include the Florida Department of Education, the Department of Children and Families, the Department of Law Enforcement, the Department of Juvenile Justice, anonymous tip platform data, and social media monitoring data. Public data sets such as these are known to encode biases along lines of race, gender, and socioeconomic status, so guardrails would need to be put in place to ensure a fair system.
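What might such a guardrail look like? Here is a minimal sketch, using invented records and placeholder group labels, of the kind of ongoing disparity audit a system like this would need. The 0.8 threshold borrows the “80% rule” heuristic from disparate-impact analysis; it is one possible check, not an established FSSP requirement:

```python
# Minimal disparity-audit sketch with invented data.
from collections import defaultdict

# Hypothetical records: (demographic_group, was_flagged_by_system)
records = [
    ("group_a", True),  ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True),  ("group_b", True),  ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, flagged in records:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

rates = {group: flagged / total for group, (flagged, total) in counts.items()}
for group, rate in sorted(rates.items()):
    print(f"{group}: flagged {rate:.0%} of the time")

highest = max(rates, key=rates.get)
lowest = min(rates, key=rates.get)
if rates[lowest] / rates[highest] < 0.8:  # 80% rule heuristic
    print(f"Disparity: {highest} is flagged far more often than {lowest}")
```

Note that an audit like this only detects skewed flag rates; it says nothing about whether any individual flag is correct, so it is a floor for fairness, not a fix.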
Fourth, there is the issue of discriminatory effects that can arise when the individuals on threat assessment teams are not reflective of the student populations whom they serve. They may hold conscious or unconscious biases against certain groups of people, resulting in unfair outcomes. This is especially important for the FSSP because 55% of K-12 students in Florida are African-American and Latinx and could be disproportionately impacted.
Finally, in addition to technological and fairness issues, policymakers must take into account the data security, privacy, and civil rights of students. The safety commission has taken a first step here, with plans for law enforcement to complete FERPA training.
I am a passionate believer in the power of data, and have dedicated my career to it. But no matter how much we want to believe that technology may be harnessed to address mass school violence, a database will not stop the next school shooter. To date, there is no evidence that data allows us to accurately predict whether someone will cause violence.
There is also a lack of transparency around the decision-making models in the FSSP, a system whose outputs will affect students’ lives. Students (and their parents) have a right to a clear explanation of how it works. What is certain is that data sources, algorithms, and systems have repeatedly shown biases against certain individuals and groups, leading to incorrect conclusions and negative outcomes. Until issues of bias in data-driven school safety systems are resolved, policymakers should not require that they be used on students at our nation’s schools.
While I sympathize with those who believe in the power of data to predict human behavior, the science simply isn’t there. The FSSP is at best a placebo that will prove ineffective for stopping school mass violence. At worst — and unavoidably so — it will wrongly single out students who have done nothing wrong, possibly marking them for life.
Ora D. Tanner is currently a doctoral candidate at the University of South Florida (USF) and plans to graduate with her PhD in Instructional Technology in Spring 2020. She spent the summer in San Francisco as a fellow at the Aspen Tech Policy Hub, where she completed a technology policy project on data-driven school safety initiatives. She previously worked as a physicist in nuclear medicine, a science educator, and a graduate researcher on NSF-funded grant projects related to digital game-based learning and assessment in science education.