Imagine a future in which AIs automatically interpret—and enforce—laws.
All day, every day, you receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online. If you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.
Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and your noncompliance will be used as evidence in the prosecution that’s sure to follow.
This future may not be far off; automatic detection of lawbreaking is nothing new. Speed cameras and traffic-light cameras have been around for years. These systems automatically issue citations to the car’s owner based on the license plate. In such cases, the owner is effectively presumed guilty unless they prove otherwise, typically by identifying and notifying the actual driver.
In New York, AI systems equipped with facial recognition technology are being used by businesses to identify shoplifters. Similar AI-powered systems are being used by retailers in Australia and the United Kingdom to identify shoplifters and provide real-time tailored alerts to employees or security personnel. China is experimenting with even more powerful forms of automated legal enforcement and targeted surveillance.
Breathalyzers are another example of automatic detection. They estimate blood alcohol content by measuring the concentration of alcohol in the breath via an electrochemical reaction or infrared analysis (they’re basically computers with fuel cells or spectrometers attached). And they’re not without controversy: Courts across the country have found serious flaws and technical deficiencies in Breathalyzer devices and the software that powers them. Despite this, criminal defendants struggle to obtain access to the devices or their source code, which Breathalyzer companies and courts often refuse to grant. In the few cases where courts have ordered such disclosures, the orders have usually come only after costly legal battles spanning many years.
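To make the stakes of that secrecy concrete: at its core, such a device is a short calculation resting on a physiological assumption. Here is a minimal sketch in Python of the kind of conversion a fuel-cell device’s firmware performs. The calibration constant is invented for illustration, and the 2100:1 blood-to-breath partition ratio it bakes in is exactly the sort of contested assumption defendants want to examine.

```python
# Minimal sketch of the arithmetic inside a fuel-cell breathalyzer.
# Every constant below is invented for illustration, not taken from
# any real device or vendor.

CALIBRATION_SLOPE = 0.0004  # hypothetical: g/210 L of breath per microamp

def estimate_bac(sensor_microamps: float) -> float:
    """Convert a fuel-cell current reading into an estimated BAC.

    The fuel cell oxidizes ethanol in the breath sample, producing a
    current roughly proportional to the breath alcohol concentration.
    Under the standard (and contested) 2100:1 blood-to-breath partition
    ratio, grams of alcohol per 210 L of breath is reported as the
    numerically equal blood alcohol percentage (g per 100 mL of blood).
    """
    breath_alcohol = sensor_microamps * CALIBRATION_SLOPE  # g per 210 L
    return breath_alcohol  # reported as % BAC under the 2100:1 assumption

reading = 200.0  # hypothetical sensor current in microamps
print(f"Estimated BAC: {estimate_bac(reading):.3f}%")  # -> 0.080%
```

A defendant challenging a reading would want to probe both numbers: whether the calibration slope was set and maintained correctly, and whether the partition ratio is a fair assumption for their physiology. Without source code access, neither question can be answered.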
AI is about to make this issue much more complicated, and could drastically expand the types of laws that can be enforced in this manner. Some legal scholars predict that computationally personalized law and its automated enforcement are the future of law. These would be administered by what Anthony Casey and Anthony Niblett call “microdirectives,” which provide individualized instructions for legal compliance in a particular scenario.
Made possible by advances in surveillance, communications technologies, and big-data analytics, microdirectives will be a new and predominant form of law shaped largely by machines. They are “micro” because they are not impersonal general rules or standards, but tailored to one specific circumstance. And they are “directives” because they prescribe action or inaction required by law.
A Digital Millennium Copyright Act takedown notice is a present-day example of a microdirective. The DMCA’s enforcement is almost fully automated: copyright “bots” constantly scan the internet for infringing material and automatically send hundreds of millions of takedown notices daily to platforms and users. A DMCA takedown notice is tailored to the recipient’s specific legal circumstances. It also directs action based on the law: remove the targeted content or prove that it’s not infringing.
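To illustrate how mechanical this enforcement already is, a copyright bot reduces to a fingerprint-matching loop: fingerprint the content it crawls, compare it against a rights-holder’s catalog, and emit a notice on a match. The sketch below is a deliberately simplified, hypothetical version of that loop in Python; real bots use perceptual fingerprints that survive re-encoding, not the exact hashes used here.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class TakedownNotice:
    """A hypothetical, simplified takedown payload."""
    url: str
    work_id: str
    demand: str = "Remove the targeted content or file a counter-notice."

# A rights-holder's catalog mapping content fingerprints to work IDs.
# Real systems use perceptual/audio fingerprints, not exact SHA-256
# hashes, so they can match re-encoded or slightly altered copies.
CATALOG = {
    hashlib.sha256(b"example copyrighted work").hexdigest(): "WORK-001",
}

def scan(url: str, content: bytes) -> Optional[TakedownNotice]:
    """Fingerprint crawled content and match it against the catalog."""
    fingerprint = hashlib.sha256(content).hexdigest()
    work_id = CATALOG.get(fingerprint)
    return TakedownNotice(url=url, work_id=work_id) if work_id else None

notice = scan("https://example.com/upload/123", b"example copyrighted work")
if notice:
    print(f"Notice for {notice.work_id} at {notice.url}: {notice.demand}")
```

Nothing in this loop weighs fair use, context, or intent; the “legal finding” is a dictionary lookup, issued at whatever scale the crawler can sustain.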
It’s easy to see how the AI systems being deployed by retailers to identify shoplifters could be redesigned to employ microdirectives. In addition to alerting business owners, the systems could also send alerts to the identified persons themselves, with tailored legal directions or notices.
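To see how small that technical step would be, consider what such an alert might contain. The following sketch imagines a hypothetical microdirective payload; every field, value, and the statute cited are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Microdirective:
    """Hypothetical payload for an individualized legal directive."""
    recipient_id: str  # identity resolved by the facial recognition system
    scenario: str      # the detected situation
    directive: str     # the action the system claims the law requires
    legal_basis: str   # the statute the system claims to apply
    issued_at: datetime

def issue_directive(recipient_id: str) -> Microdirective:
    """Turn a shoplifting alert into a directive aimed at the person
    identified, rather than only at the store's staff."""
    return Microdirective(
        recipient_id=recipient_id,
        scenario="item concealed without payment, aisle 4",
        directive="Return the item to a shelf or pay at register 2.",
        legal_basis="N.Y. Penal Law § 155.25 (petit larceny)",  # illustrative
        issued_at=datetime.now(timezone.utc),
    )

print(issue_directive("person-8f3a"))
```

The entire legal judgment, from identification to the claimed statutory basis, is assembled and delivered before any human reviews it.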
A future where AIs interpret, apply, and enforce most laws at societal scale like this would vastly magnify problems of fairness, transparency, and freedom. Forget about software transparency: well-resourced AI firms, like Breathalyzer companies today, would no doubt ferociously guard their systems for competitive reasons. These systems would likely be so complex that even their designers could not explain how the AIs interpret and apply the law, something we’re already seeing with today’s deep learning neural network systems, which cannot explain their reasoning.
Even the law itself could become hopelessly vast and opaque. Legal microdirectives sent en masse for countless scenarios, each representing authoritative legal findings formulated by opaque computational processes, could create an expansive and increasingly complex body of law that would grow ad infinitum.
And this brings us to the heart of the issue: If you’re accused by a computer, are you entitled to review that computer’s inner workings and potentially challenge its accuracy in court? What does cross-examination look like when the prosecutor’s witness is a computer? How could you possibly access, analyze, and understand all the microdirectives relevant to your case in order to challenge the AI’s legal interpretation? How could courts hope to ensure equal application of the law? Like the man from the country in Franz Kafka’s parable “Before the Law,” retold in The Trial, you’d die waiting for access to the law, because the law is limitless and incomprehensible.
This system would present an unprecedented threat to freedom. Ubiquitous AI-powered surveillance in society will be necessary to enable such automated enforcement. On top of that, research—including empirical studies conducted by one of us (Penney)—has shown that personalized legal threats or commands that originate from sources of authority—state or corporate—can have powerful chilling effects on people’s willingness to speak or act freely. Imagine receiving very specific legal instructions from law enforcement about what to say or do in a situation: Would you feel you had a choice to act freely?
This vision of AI’s invasive and Byzantine law of the future chills to the bone. It would be unlike any legal system we’ve seen before in human history, and far more dangerous to our freedoms. Indeed, some legal scholars argue that this future would effectively be the death of law.
Yet it is not a future we must endure. Proposed bans on surveillance technology like facial recognition systems can be expanded to cover those enabling invasive automated legal enforcement. Laws can mandate interpretability and explainability for AI systems to ensure everyone can understand and explain how the systems operate. If a system is too complex, maybe it shouldn’t be deployed in legal contexts. Enforcement by personalized legal processes needs to be highly regulated to ensure oversight, and should be employed only where chilling effects are less likely, like in benign government administration or regulatory contexts where fundamental rights and freedoms are not at risk.
AI will inevitably change the course of law. It already has. But we don’t have to accept its most extreme and maximal instantiations, either today or tomorrow.
This essay was written with Jon Penney, and previously appeared on Slate.com.