The New American | by Joe Wolverton II, J.D. | May 28, 2023
Here’s a chilling news item reported by the Ron Paul Institute for Peace and Prosperity:
Last week, the Dallas Independent School District was boasting about its new pilot project, undertaken along with the company Davista. The pilot project, the school district says, uses AI to extensively monitor each student and then sound the alarm if a student deviates from his “baseline” behavior.
There are so many disturbing and dystopian aspects to this development.
First, using AI (artificial intelligence) to monitor student behavior hands the recording of children's behavior to an incredibly complex and advanced algorithm (which is really all AI is at this point), removing the human element altogether. The “alarm” the surveillance system sounds is triggered not by a concerned parent, teacher, or administrator, but by a computer that targets, tracks, and predicts the future behavior of children under its never-blinking electronic eye.
Anyone who pays attention to technology knows that the rapidly expanding use and misuse of AI is a topic of intense debate, with some experts in the field warning that the technology is advancing too quickly and could someday, perhaps very soon, develop a means of decision-making beyond the control or coding of any human being.
Imagine a scenario in which the AI surveillance system the Dallas Independent School District is “boasting about” is compromised by a virus or Trojan horse, and every image it has recorded is uploaded to the dark web or some other equally ominous database. Would the apologies of the school district's administration be sufficient to satisfy the concerns, or squelch the rage, of parents whose children's images are now available to anyone with access to the internet? In a cost-benefit analysis of this project, have the costs been reasonably calculated, and calculated with an eye to protecting the innocence of the children being recorded?
Next, regardless of the soundness of the decision of the Dallas Independent School District’s leadership to install this surveillance system, parents bear significant responsibility for protecting their children’s innocence.
Has the school district contacted parents and informed them of the power of this technology and the potential hazards?
Is a way provided for parents to opt out of the constant monitoring if they decide that the risks outweigh the rewards of having their children’s every movement tracked and recorded by artificial intelligence?
Have parents (or faculty, staff, and administration, for that matter) been briefed on the precise calculations this system uses to detect “deviations” in a child's “baseline” behavior? On how that baseline will be established? On what happens when a child's behavior is flagged by the system's artificial intelligence? On whether parents will be afforded access to the images the system records, and on exactly who else will have access to that data? Have the names of the programmers and developers been made known to parents, so that they can judge the character of those who will be collecting and controlling images of their children?
Questions such as these must be asked and answered. By now, parents should know that the intents and goals of school superintendents and school boards and school administrators are very often completely contrary to their own.
Another critical concern is the “predictive” programming that is at the core of this artificial intelligence surveillance system.
The system in place in Dallas — and soon, undoubtedly, to be installed in classrooms all over the country — uses data analysis and algorithms to forecast and flag a child’s abnormal activity.
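The vendor's exact method is proprietary, but the general technique the pilot describes, flagging any measurement that strays too far from a student's historical average, can be sketched in a few lines. The Python below is purely illustrative; the function names, scores, and threshold are my own assumptions, not Davista's actual code:

```python
import statistics

# A purely hypothetical sketch of "baseline deviation" flagging.
# The real Dallas system is proprietary; these names, scores, and
# thresholds are illustrative assumptions, not Davista's method.

def build_baseline(history):
    """Summarize a student's past activity scores as mean and spread."""
    mean = statistics.fmean(history)
    spread = statistics.stdev(history) if len(history) > 1 else 1.0
    return mean, spread

def flag_deviation(score, baseline, threshold=3.0):
    """Flag the student when today's score sits more than `threshold`
    standard deviations away from the baseline mean."""
    mean, spread = baseline
    z = abs(score - mean) / (spread or 1.0)
    return z > threshold

# One student's "activity scores" for a week, then one unusual day.
baseline = build_baseline([0.42, 0.45, 0.40, 0.44, 0.43])
print(flag_deviation(0.95, baseline))  # True -- the alarm sounds
```

Notice what even this toy version lacks: any notion of context, motive, or explanation. The flag fires on arithmetic alone, which is precisely the concern raised below.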
And what if that “abnormal” activity causes the computer to notify authorities who are required by state law to contact Child Protective Services (CPS) whenever a child's behavior raises questions? CPS then shows up at the child's home and removes the child from his parents' care until the cause of the “abnormal” behavior flagged by the AI surveillance system can be determined. Does anyone reading this article think that scenario sounds far-fetched?
As I mentioned above, the use of AI to predict behavior is problematic per se, but the system purchased and installed in Dallas schools, like any similar system deployed at any other school, is proprietary. As such, its algorithmic decision-making will lack transparency: the technology's developers and programmers will not want the inner workings of these predictive systems revealed to competitors. That competitive secrecy will likely make it prohibitively difficult to assess the accuracy, fairness, and reliability of the system's predictions.
There is another problematic aspect of this predictive policing.
In the United States, a person cannot be charged and tried for a crime unless mens rea and actus reus are established as facts.
Mens rea and actus reus are critical elements of criminal law that have existed for centuries in American and English law. The terms are taken from the Latin maxim ‘Actus non facit reum nisi mens sit rea’ (an act does not make a person guilty unless his mind is also guilty).
Put simply, mens rea is the guilty mind (the intent to commit the act), and actus reus is the guilty act itself.
With the directive to flag a child when his behavior deviates from a baseline, then notify human authorities so they can intervene before the child commits some bad act, the AI surveillance system dismantles 800 years of jurisprudence: it invites punishment of a predicted guilty mind before any guilty act exists.
Finally, what about the constitutional impediments to this surveillance and to the predictive policing that makes this system so attractive to school districts?
I’m referring specifically to the Sixth Amendment, which reads, in relevant part:
In all criminal prosecutions, the accused shall enjoy the right … to be confronted with the witnesses against him….
Now, I recognize that this isn’t the typical application of the Sixth Amendment, but being novel doesn’t make something wrong.
I would argue that a reasonable person could foresee a situation in which a child is “observed” by the AI surveillance system doing something that would be considered criminal. The computer reports that child to whomever it is programmed to report to, and the police are called to investigate. If the child is then arrested, how could the child, or the child's parents or attorney, confront a computer? AI is already powerful enough that someone without any coding experience can duplicate voices and pictures so close to the originals that they fool family members!
How could the accuracy of the AI's data be verified? Would the child, through his parents or attorney, be afforded the right to confront everyone involved in the programming and development of the AI surveillance apparatus, or would a jury be left with a presumption of impartiality with respect to the computer?
Given the advances not only in artificial intelligence but in surveillance technology as well, many schools across the country will likely purchase these systems and install them in the name of safety. But who will be watching the watchers?