AI Surveillance in Schools: Why?
This past February 14th marked the one-year anniversary of one of the deadliest school shootings in the nation’s history. Despite being the subject of several 911 calls and two separate tips to the FBI, the 19-year-old shooter, armed with a rifle, left 17 dead at Marjory Stoneman Douglas High School in Parkland, Florida. Since this attack, and amid a growing number of incidents of mass violence, schools across the country have shown a willingness to adopt unconventional products and practices to improve security. The resulting growth in the public-education security market has led to an “arms race” in the security industry, and Big Tech is getting involved, bringing artificial-intelligence (“AI”) powered surveillance. But is bringing AI surveillance to school campuses crossing the line?
Schools across the country have taken varied approaches to providing security to students on campus. Some have resorted to additional defensive personnel: increasing police presence, operating a dedicated police station within the school, or hiring private security guards. Others have turned to reinforced or modified equipment, including electronically controlled locks and bulletproof doors and windows.
A few schools have turned to advanced technology. For instance, the school district in Lockport, New York, monitors students with a facial-recognition system. An elementary school in Artesia, New Mexico, installed gunshot-detection devices similar to equipment “used by the military to detect snipers and missiles.” Districts in Michigan, Massachusetts, and California have implemented artificial-intelligence software that scans students’ social media accounts for signs of potential threats, although this software is prone to producing false positives. These developments may have influenced the decision last month by Broward County Public Schools, the district that includes Parkland, Florida, to install Avigilon, an experimental AI-powered surveillance system.
Avigilon is an AI service acquired last year by the tech giant Motorola Solutions for one billion dollars. Unlike other AI programs, Avigilon does not use facial-recognition technology, relying instead on an “appearance search” that “would allow a school official to find or highlight people based on what they’re wearing.” Avigilon’s designers further stated that the program’s “security algorithms could spot risky behavior with superhuman speed and precision, potentially preventing another attack.” Although security is a top priority on school campuses, bringing AI surveillance to campuses goes too far.
It should be noted that neither facial recognition nor Avigilon’s technology has been proven to deter school violence, and school shootings that produce multiple victims remain extremely rare. Meanwhile, the use of AI on school grounds presents privacy issues, including the implicit lack of parental consent to the collection of a child’s data, questions about how student data is protected, and the possibility that collected information will be used for other purposes. Each year, millions of Americans are affected by data breaches. Most notably, in 2017 the Equifax breach compromised the data of 148 million people, and in 2018 it was revealed that Cambridge Analytica, a data firm, had gained access to 50 million Facebook profiles. But the most significant issue with AI surveillance is that it does nothing to address the root cause of school violence.
Christopher Yarnell, 21 February 2019