Will AI Agents Make The Perfect Contract?
And even if machines could negotiate the fine print, would it even be legal?
ICYMI: Earlier this week I submitted comments to OSTP on their AI deregulatory docket. My X thread can be found here.
At the Roots of Progress Conference earlier this month, Tyler Cowen interviewed OpenAI CEO Sam Altman, who at one point wondered about the world to come with AI agents. In the not-too-distant future, he imagined, AI agents would be involved in every aspect of business and would even negotiate with other agents autonomously, only involving their human counterparts when specific guidance was needed. It was a passing comment, but one that captured the scale of institutional change that agents might bring, and it got me thinking about the legal and practical issues with AI contracting.
A couple of days after the talk, I saw that “Contractibility Design” by Roberto Corrao, Joel P. Flynn, and Karthik Sastry had been posted as a National Bureau of Economic Research working paper. Apparently, it has been floating around since 2023, but, truth be told, I probably wouldn’t have given it much attention if Altman’s comment hadn’t been brewing in the back of my mind.
The paper deals with a long-standing problem in law and economics known as incomplete contracts. The way to think about incomplete contracts is to start with their antonym, complete contracts. Complete contracts are ideal in that they specify both parties’ rights and duties for every possible future state of the world. In the strictest sense, no contract is ever complete, but as contracts spell out more contingencies more precisely, they move closer to completeness. It’s also important to note that an incomplete contract can still be enforced by courts, and incompleteness doesn’t mean a contract isn’t economically optimal.
Research on incomplete contracts can be traced to the criminally underrated economist Herbert Simon, who first modeled the employment relationship as an incomplete contract in his 1951 paper “A Formal Theory of the Employment Relationship.” But it took the work of Oliver Williamson and especially Oliver Hart, Sanford Grossman, and John Moore to flesh out the implications for ownership, control rights, and governance structures. In short, where contracts fail to specify everything in advance, ownership and governance structures fill in the gaps.
What struck me about Corrao, Flynn, and Sastry’s approach is that they treat incomplete contracts like an optimization problem. In their terminology, coarse contracts are contracts with low fidelity. Think of it like a low-resolution, pixelated video. As you move toward a complete contract, resolution increases. They then use this idea to help understand why “even billion-dollar commercial contracts can be perplexingly imprecise and filled with phrases like ‘best efforts,’ ‘reasonable care,’ and ‘good faith.’” One might expect that with so much money on the line, parties would pay for absolute precision. Yet in practice, vagueness persists, often by design.
Corrao, Flynn, and Sastry show that the fuzziness is not a flaw, but a feature. It’s the result of balancing two opposing costs: front-end costs and back-end costs. Front-end costs, or ex ante costs, involve everything that comes with drafting contracts, foreseeing contingencies, and writing precise language. On the other hand, back-end costs, or ex post costs, come from generating evidence and proving breach after the action occurs.
Their central finding is counterintuitive. Even tiny front-end costs can make coarse, incomplete contracts optimal, even for high-stakes deals. As they write, “Thus front-end costs always lead to incomplete contracts: there are finitely many contingencies, but within each contingency many actions are legally permissible.” Surprisingly, back-end costs, no matter how large, do not cause contract coarseness. In fact, it’s the opposite: Back-end costs lead to complete contracts.
Like so much in economics, the difference comes down to marginal costs and marginal benefits.
Imagine you already have a contract with 10 levels of performance: acceptable, good, very good, and so on. Adding an 11th level by splitting “good” into “good” and “slightly better than good” requires writing rules distinguishing the new level from all the nearby levels above and below it. As you go down this route, the number of distinctions grows rapidly; mathematically, it’s proportional to n². Meanwhile, the benefit is tiny, because you’re only fine-tuning decisions for the narrow slice of cases that fall between “good” and “very good.”
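To make the asymmetry concrete, here’s a quick back-of-the-envelope sketch in Python. To be clear, this is my own toy illustration, not the model in the paper: I’m assuming each pairwise distinction costs a small fixed amount to draft and that the benefit of extra granularity has diminishing returns.

```python
# Toy illustration (my assumptions, not the paper's model): front-end drafting
# cost grows ~n^2 with the number of performance levels, while the benefit of
# finer granularity has diminishing returns.

def net_value(n_levels, cost_per_distinction=0.002, total_benefit=1.0):
    """Net value of a contract with n_levels performance tiers."""
    # Drafting cost: roughly one rule per pair of levels, so ~n^2 growth.
    drafting_cost = cost_per_distinction * n_levels * (n_levels - 1) / 2
    # Benefit of granularity: each extra level fine-tunes a thinner slice of cases.
    benefit = total_benefit * (1 - 1 / n_levels)
    return benefit - drafting_cost

if __name__ == "__main__":
    values = {n: net_value(n) for n in range(1, 51)}
    best = max(values, key=values.get)
    print(f"Optimal number of levels: {best}")
    for n in (2, 5, 10, 20, 50):
        print(f"n={n:>2}  net value = {values[n]:.3f}")
```

With a per-distinction cost of just 0.2 percent of the total benefit, net value peaks at around eight levels and then falls off a cliff, which is the flavor of the result: even tiny front-end costs keep contracts coarse.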
Back-end costs change the calculus entirely. Here, the principal pays only to verify the actions that actually occur, not every hypothetical contingency. Because the design of the contract shapes which actions are likely to arise, the costs and benefits remain roughly aligned: greater precision pays off when it helps resolve real disputes. Put differently, front-end costs punish precision more than the extra precision is worth, while back-end costs rise only in proportion to the value that precision delivers.
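Here’s the same toy setup with back-end costs instead. Again, these are made-up numbers and a deliberately simplified assumption on my part: verifying the one action that actually occurred costs the same regardless of how finely the contract is drawn.

```python
# Toy contrast (again, my assumptions): back-end verification cost is paid once
# per realized dispute, not once per pairwise distinction drafted up front.

def net_value_backend(n_levels, verify_cost=0.05, dispute_prob=0.2, total_benefit=1.0):
    """Net value when the only cost is verifying the action that actually occurred."""
    benefit = total_benefit * (1 - 1 / n_levels)          # same diminishing-returns benefit
    expected_back_end_cost = dispute_prob * verify_cost   # doesn't scale with n_levels
    return benefit - expected_back_end_cost

if __name__ == "__main__":
    for n in (2, 5, 10, 20, 50):
        print(f"n={n:>2}  net value = {net_value_backend(n):.3f}")
```

Because the verification bill doesn’t balloon as the contract gets finer, net value keeps rising with the number of levels, which is the intuition behind why back-end costs alone push toward complete contracts.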
Reading Corrao, Flynn, and Sastry’s paper right after hearing Altman speak got me thinking: what happens when we throw AI agents into the mix?
In theory, large language models could drive down front-end costs. AI systems can already draft contracts, generate contingencies, and reconcile precedents at negligible marginal cost. They could also explore counterfactuals and write clauses for edge cases that humans would never think to include. If Corrao, Flynn, and Sastry are right, this technological shift should push us toward more complete contracts, not because humans suddenly become better lawyers, but because AI collapses the cost curve of precision.
Nevertheless, there are reasons to believe this might not happen. Even with low drafting costs, there may be computational limits to specifying all contingencies. More important, contracts are enforceable only when you can generate evidence showing the agent violated the terms, and data logs, model weights, or natural-language explanations aren’t likely to satisfy legal standards of proof. Moreover, many AI systems behave probabilistically rather than deterministically, blurring the line between intentional and accidental breach.
Still, all of this assumes that AI agents are legally allowed to write contracts, which is a big if. Every US state has Unauthorized Practice of Law (UPL) statutes that prohibit non-lawyers from giving legal advice, drafting legal documents like contracts, and representing others in legal matters. Telling your AI agent to negotiate and draft a contract with a supplier is a textbook UPL violation: the AI is drafting a legal document and acting on your behalf without being a licensed attorney.
Having two AI agents work out deal terms together is a little more complicated, because parties can draft their own contracts without having a lawyer around. Still, I suspect such a contract would get thrown out in court: the AI is making legal judgments about contract terms, and courts generally interpret UPL broadly.
Indeed, a string of cases brought against LegalZoom over the last two decades suggests that legal AI agents will face court scrutiny and probably won’t win. Caroline Shipman’s law review article “Unauthorized Practice of Law Claims Against LegalZoom—Who Do These Lawsuits Protect, and is the Rule Outdated?” offers a great overview of these cases, which are too complicated to review here. But courts have ruled against LegalZoom for answering specific questions about contract terms, and I can’t imagine autonomous contract negotiation skirting UPL.
In other words, the very legal doctrines designed to protect consumers from unqualified advice may slow the evolution of more efficient, precise contracting. If AI could truly minimize front-end costs, it might finally fulfill the old dream of near-complete contracting. But for now, that world remains, in Altman’s words, “not too distant.”
Until next time,
🚀 Will



As with most gatekept professions, AI will be used legally, but only by the credentialed and under their signature.
Similar to how Intuit and others make a "pro" version of their tax software to be used by CPAs, there will be custom AI systems that won't balk at providing legal advice, as long as someone who signs their name "Esquire" is using it.
Already we're seeing cases where judges and bars are censuring lawyers for getting sloppy with AI and not checking the AI's work.