Silicon innovation is colliding with jurisdictional steel
From DoNotPay's Supreme Court ambitions to the modern 'AI License Raj', regulatory frameworks are challenging the development of AI-driven services.
This article extends arguments I made in a recent Fox News op-ed. The Twitter thread I put together also explains the basic arguments. Retweets are always appreciated.
Joshua Browder’s offer was generous: $1 million to anyone willing to wear AirPods into the Supreme Court and argue a case by saying exactly what his program told them to. But almost as soon as Browder, the CEO of DoNotPay, extended the offer, he retracted it.
DoNotPay made a name for itself by using AI to help people contest parking tickets, get refunds for flights, and cancel free trials. But the company was making the leap into the real world with its legal services. It had gotten someone to agree to use the service in a municipal traffic court, and Browder was hoping to take the idea all the way to the top court.
The trial was originally scheduled for earlier this year, but just days before it was set to begin, DoNotPay pulled out. Threats from the State Bar of California that the company was practicing law without a license appear to have stopped the project.
As Bobby Allyn of NPR reported,
In a statement, State Bar of California Chief Trial Counsel George Cardona declined to comment on the probe into DoNotPay but said the organization has a duty to investigate possible instances of unauthorized practice of law.
"We regularly let potential violators know that they could face prosecution in civil or criminal court, which is entirely up to law enforcement," Cardona said in a statement.
AI disruption met the hard wall of the law.
From the 1950s to the early 1990s, the Indian economy was under stringent government control and regulation via a system colloquially known as the Licence Raj or the Permit Raj. Businesses in India had to get all sorts of licenses to operate, and they were never particularly easy to obtain.
In much the same vein, today's technological landscape faces its own set of bureaucratic hurdles, reminiscent of the Licence Raj era. Let’s call it the AI License Raj. This contemporary version is woven through our regulatory structures, courts, and laws, which persistently tie legal authority to licensed human beings. The AI License Raj quietly dictates the terms of entry, making it hard for AI to get adopted.
For example,
The U.S. District Court for the Eastern District of North Carolina upheld a North Carolina restriction on drone operators, claiming their maps and models amount to illegal “surveying.”
Physicians open themselves to liability concerns if they base their diagnosis on AI systems. As one report explained the current regime, “law incentivizes physicians to minimize the potential value of AI.”
If adopted, California’s AB-96 would give unions the ability to collectively bargain before any local government deploys automated transit vehicles.
Meanwhile, there is a massive fight over mandates requiring two crew members on freight trains. European countries have gone down to one person, and Australia is testing fully robotic systems, but the U.S. seems poised to make two a requirement.
Relatedly, a class-action lawsuit has been filed in US federal court against GitHub, Microsoft, and OpenAI, accusing them of violating open-source licenses with GitHub Copilot and OpenAI Codex.
In other words, the courts and the administrative state will act as a brake on AI adoption because AI systems will face difficulties in trying to get the proper license to operate.
Late last year, Sam Hammond warned of the flood that was coming with generative AI. To his credit, Hammond is right that “AI driven services [will] upend 20th century transaction cost structures and multiply the throughput demanded of our legacy institutions many thousands fold.”
But I think he takes it a bit too far in worrying that, “The courts, meanwhile, will be flooded with lawsuits because who needs to pay attorney fees when your phone can file an airtight motion for you?”
Only meatspace attorneys can file motions. An AI version would be immediately thrown out by the court. Indeed, this is exactly why DoNotPay pulled out of the experiment.
If they are to be deployed widely, AI systems will have to disrupt a massive system of licensure, industry by industry.
Still, Hammond is right that institutions will need to change. Indeed, my latest op-ed is all about this. It is about the possibility of using AI to clean up government.
ChatGPT needs to be turned on the government. A ChatGVT is needed.
A ChatGVT could take any number of forms, as I wrote,
It could provide straight answers about the newest tax plan, if a bill is stuck in committee, or the likelihood that a piece of legislation will pass. Or a ChatGVT could be turned on the regulatory code to understand its true cost to households and businesses.
Understanding how laws, litigation, hearings, regulatory codes, and administrative actions intermingle can elude even the most experienced experts. The newest generation of Large Language Models (LLMs) appears to be quite effective at working through text with a little bit of tuning.
Using AI to turn law into code will make the true impact of government understandable and accessible. Most know that the burden imposed by regulation is colossal, but the exact costs are hard to quantify. A ChatGVT could help sort out that problem.
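To make the idea concrete, here is a toy sketch of one text-based approach: counting restrictive terms in regulatory language as a rough proxy for burden. The term list and sample passage are my own illustrative assumptions, not an official methodology.

```python
import re
from collections import Counter

# Restriction terms sometimes used as a rough proxy for regulatory burden.
# This particular list is an illustrative assumption, not a standard.
RESTRICTION_TERMS = ["shall", "must", "may not", "prohibited", "required"]

def count_restrictions(text: str) -> Counter:
    """Count whole-word occurrences of each restriction term in a text."""
    lowered = text.lower()
    return Counter({
        term: len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
        for term in RESTRICTION_TERMS
    })

sample = (
    "Operators shall obtain a license. Unlicensed surveying is prohibited. "
    "Applicants must file annually and may not subcontract."
)
print(count_restrictions(sample))
```

A real system would go far beyond word counts, but even this crude tally shows how regulatory text can be turned into comparable numbers.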
Turning all of that text into computer code will also make reform easier because we will be able to subject it to software management practices, like refactoring. In refactoring, an existing body of computer code is simplified without changing its functional behavior.
A ChatGVT could be focused on refactoring the U.S. regulatory code.
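To see what refactoring means in miniature, here is a hypothetical before-and-after of my own devising: a small fee rule is simplified into data plus a single code path, and a check confirms the behavior is unchanged.

```python
# Before: duplicated, hard-to-audit logic.
def fee_before(vehicle: str, days_late: int) -> int:
    if vehicle == "car":
        if days_late > 30:
            return 100 + 25
        return 100
    elif vehicle == "truck":
        if days_late > 30:
            return 150 + 25
        return 150
    return 0

# After: the same rules, refactored into data plus one code path.
BASE_FEES = {"car": 100, "truck": 150}
LATE_PENALTY = 25

def fee_after(vehicle: str, days_late: int) -> int:
    base = BASE_FEES.get(vehicle, 0)
    if base == 0:
        return 0
    return base + (LATE_PENALTY if days_late > 30 else 0)

# Functional behavior is unchanged across every case we can check:
for v in ("car", "truck", "bike"):
    for d in (0, 45):
        assert fee_before(v, d) == fee_after(v, d)
```

Applied to regulation, the same move would collapse duplicated rules scattered across the code into one legible source of truth, without changing what the rules require.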
Precedent already exists. During the Trump Administration, the U.S. Department of Health and Human Services (HHS) undertook a program to root out outdated and ineffective laws using AI tools. As a result of this project, HHS cleared a bunch of regulations from the books.
AI tools could also help government agencies upgrade their technology.
COVID made abundantly clear the cost of running on old systems. Outdated government systems made it tougher to catch fraudsters as the claims rolled in. While numbers are hard to come by, estimates suggest that unemployment insurance programs paid out $60 billion in fraudulent claims in 2021 alone.
Part of this cost is due to a problem known as technical debt. Businesses accumulate technical debt when they overlook infrastructure issues that could cause future complications. Governments also accrue technical debt when program infrastructure isn’t updated over time.
AI tools could make it much cheaper to upgrade to newer, more secure programming languages to fight waste, fraud, and abuse. Telecommunications and financial services firms have long used computer-aided technologies to detect transactional fraud, money laundering, identity theft, and account takeovers. Governments should be adopting these tools.
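As a minimal sketch of the statistical screening such tools build on, here is a simple z-score outlier check over claim amounts. The data, threshold, and function are illustrative assumptions, not any agency's actual system.

```python
from statistics import mean, stdev

def flag_outliers(amounts: list[float], z_threshold: float = 2.0) -> list[int]:
    """Return indices of claims whose amount deviates sharply from the mean,
    using a simple z-score rule (a toy stand-in for production
    fraud-detection models)."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > z_threshold]

claims = [420.0, 380.0, 405.0, 415.0, 398.0, 9200.0, 402.0]
print(flag_outliers(claims))  # flags index 5, the $9,200 claim
```

Production systems layer on machine learning, network analysis, and identity checks, but the underlying idea is the same: surface the claims that don't look like the rest.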
In an interview some years ago, the novelist Richard Powers responded with two lines that have always stuck with me. While he was talking about virtual reality, it felt as though he was talking about public policy:
The reason why the public is seduced by virtual reality, why it has embraced this fantasy of the disembodied self, is the desire for the ahistorical, the disembodied will. There is something in us that loves the idea of placing ourselves into immortal space, where our wishes can be met without the drag and impediment of stuff.
Tech policy especially is driven by “the desire for the ahistorical, the disembodied will.” Everyone loves the fantasy “of placing ourselves into immortal space, where our wishes can be met without the drag and impediment of stuff.”
But the real world is full of impediments. It is full of stuff. When it comes to AI, this is where our attention should be focused.