School surveillance will never protect children from shootings

If we are to believe the vendors of school surveillance systems, schools from kindergarten through high school will soon operate like some blend of Minority Report, Person of Interest, and RoboCop. A "military grade" system would ingest student data, pick up on the faintest hints of harmful intent, and dispatch officers before a would-be perpetrator can act. In the unlikely event that someone slipped past the predictive system, they would inevitably be stopped by next-generation weapon detectors and biometric sensors that read a person's gait and tone of voice, alerting authorities to imminent danger. The final layer might be the most technologically advanced of all: some kind of drone or robot dog that could disarm, distract, or disable a dangerous individual before any real harm is done. If we just invest in these systems, the thinking goes, our children will finally be safe.

Not only is this not our present; no matter how vast and complex our surveillance systems become, it will never be our future.

Over the past several years, a crop of companies has sprung up promising a variety of technological interventions to curtail or even eliminate the risk of school shootings. The proposed "solutions" range from tools that use machine learning and human monitoring to predict violent behavior, to artificial intelligence paired with cameras that determine an individual's intent from their body language, to microphones that identify the potential for violence based on tone of voice. Many of them use the ghosts of dead children to hawk their wares. The surveillance company AnyVision, for instance, uses images of the Parkland and Sandy Hook shootings in presentations touting its facial- and firearm-recognition technology. Immediately after the Uvalde shooting last month, Axon announced plans for a taser-equipped drone as a means of dealing with school shooters. (The company later paused the plan after members of its ethics board resigned.) The list goes on, and each company would have us believe that it alone holds the solution to this problem.

The failures here lie not only in the systems themselves (Uvalde, for one, appeared to have at least one of these "security measures" in place), but in the way people conceive of them. Much like policing itself, the failure of a surveillance or security system usually leads to calls for more extensive surveillance. When a danger is not predicted and prevented, companies frequently cite the need for more data to close the gaps in their systems, and governments and schools frequently buy in. In New York, despite the many failures of surveillance mechanisms to prevent (or even catch) the recent subway shooter, the mayor has decided to double down on the need for even more surveillance technology. Meanwhile, the city's schools are reportedly ignoring a moratorium on facial recognition technology. The New York Times reports that US schools spent $3.1 billion on security products and services in 2021 alone. And Congress's recent gun legislation includes an additional $300 million for increasing school security.

But at their core, many of these predictive systems promise a measure of certainty in situations where none is possible. Tech companies consistently pitch the notion of complete data, and therefore of perfect systems, as something just over the next ridge: an environment so totally surveilled that any and all antisocial behavior can be predicted and violence thereby prevented. But a comprehensive dataset of ongoing human behavior is like the horizon. It can be conceptualized, but never actually reached.

Companies currently rely on a variety of bizarre techniques to train these systems, including action movies like John Wick, which are hardly good indicators of real life. At some point, grisly as it sounds, these companies may train their systems on data from actual shootings. Yet even if footage of real incidents became available (and in the large quantities these systems require), the models would still fail to accurately predict the next tragedy based on previous ones. Uvalde was different from Sandy Hook, which was different from Columbine, which was different from Parkland.

Technologies that claim to divine intent and motivation, whatever their source, are always making statistical bets on the probability of a particular future based on incomplete and decontextualized data. The basic premise of any machine learning model is that there is a pattern to identify: in this case, some "normal" behavior that shooters exhibit at the scene of the crime. But finding such a pattern is unlikely, especially given the near-constant shifts in teenage lexicon and customs. Arguably more than many other segments of the population, young people change the way they speak, dress, write, and present themselves, often explicitly to evade the watchful eyes of adults. Developing a consistently accurate model of that behavior is nearly impossible.
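To see what that statistical bet looks like in practice, consider a rough back-of-the-envelope sketch. Every figure below is an illustrative assumption rather than a measured rate; the point is only the arithmetic of predicting vanishingly rare events, where even an implausibly accurate detector buries the genuine cases under false alarms.

```python
# A back-of-the-envelope look at the base-rate problem behind
# "threat prediction" systems. Every number here is an illustrative
# assumption, not a measured statistic.

students_monitored = 50_000_000  # assumed: roughly the US K-12 population
true_threats = 25                # assumed: genuine incidents in a given year
sensitivity = 0.99               # assumed: detector catches 99% of real threats
false_positive_rate = 0.001      # assumed: wrongly flags only 0.1% of students

true_alarms = true_threats * sensitivity
false_alarms = (students_monitored - true_threats) * false_positive_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"False alarms per year: {false_alarms:,.0f}")     # ~50,000
print(f"Chance a given alarm is real: {precision:.3%}")  # ~0.049%
```

Even under these generous assumptions, roughly 50,000 innocent students would be flagged for every couple dozen genuine threats; any given alarm would be real less than one time in two thousand.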