How much might AI legislation cost in the U.S.?
A review of the cost from three AI rules + a sensitivity analysis from LLMs
ICYMI: Utah is considering a bill that would force social media companies to share social graph data. My recent op-ed in the Deseret News, and its accompanying tweet thread, explains this bill and why social media interoperability isn't the best path forward. I also had an op-ed in City Journal explaining how supersonic jets could transform air travel.
Policymakers are rushing to regulate artificial intelligence (AI), but the economic impact of these regulations remains largely unexplored. While the European Union and the United Kingdom have produced cost estimates, recent developments in the United States offer important new benchmarks. Amendments to the California Consumer Privacy Act (CCPA) and regulations implementing President Biden’s Executive Order on AI offer crucial insights into what businesses might expect to pay for compliance. The financial burden could be substantial, running into billions of dollars across the economy. Especially as states push to adopt AI bills, understanding these costs is essential for crafting regulations that balance innovation, safety, and economic viability.
Still, these compliance cost estimates are notoriously unreliable. As an alternative approach, I tested whether large language models (LLMs) could provide more realistic estimates by simulating compliance scenarios in the final section of this post. I prompted ChatGPT, Claude, and Grok to act as compliance officers at companies subject to new CCPA provisions and a Bureau of Industry and Security (BIS) rule, asking each to estimate hours needed for first-year implementation and ongoing compliance.
The big takeaways:
For California's risk assessment regulation, Claude and Grok project that 400-580 hours will be needed for the first year of compliance (vs. the official 120 hours) and 150-240 hours annually thereafter (vs. the official 18-36 hours). ChatGPT estimates 90-250 hours initially and 40-150 hours for each additional year.
For the automated decision-making provision of the CCPA, Claude and Grok project 450-730 hours for first-year compliance, far exceeding the official 360-hour estimate. While ChatGPT suggests lower initial costs (80-300 hours), all three LLMs predict significantly higher ongoing annual costs than official projections.
For the BIS reporting rule, Claude projects 1,140 hours for first-year compliance (3x the official estimate of 333 hours), while ChatGPT estimates 280 hours and Grok projects 380 hours. All three LLMs agree ongoing compliance will require substantial resources.
In short, the LLMs consistently predicted much higher compliance costs than the official sources, suggesting a systematic underestimation.
The following analysis unfolds in three parts: The first section examines the compliance costs of California's new regulations. The second section analyzes the BIS's proposed AI reporting requirements. The final section tests whether large language models can provide more realistic cost estimates than traditional regulatory impact assessments.
While these AI-generated estimates aren’t perfect, they suggest that LLMs might eventually serve as virtual simulation environments that capture more diverse business experiences than traditional estimation methods. I’m hopeful that they could bring much-needed empirical grounding to regulatory cost assessment.
The costs of CCPA’s new provisions
In September 2024, Governor Gavin Newsom of California signed into law three bills that amended the California Consumer Privacy Act of 2018 (CCPA). Among other changes, California now requires companies governed by the CCPA to prepare a risk assessment (RA) and to give consumers the ability to opt out of automated decision-making technology (ADMT). As part of the rulemaking, California commissioned a standardized regulatory impact assessment (SRIA), prepared by economists at Berkeley Economic Advising and Research and California State University, to estimate the cost of the new regulations. The following analysis examines these official government estimates, which were created through traditional economic modeling, not LLMs. (The LLM-based cost estimation approach is explored in section 3 of this post.) Since the RA and ADMT provisions have been included in other AI bills, including Colorado’s AI bill and a proposal in Connecticut, the assessment offers a lower bound for what AI regulation might cost in other settings.
The risk assessment regulations require businesses to conduct risk assessments when their processing of personal information presents a significant risk to consumers’ privacy. This includes businesses that sell or share personal information (PI), process sensitive PI, or use ADMT for significant decisions. As part of these new requirements, businesses will need to explain the purpose of processing and the categories of information processed. They will also need to stop processing if the risks outweigh the benefits. To prove compliance, businesses will be required to submit certifications and abridged versions of their risk assessments to the state government annually.
The ADMT rules are also involved. Businesses will need to inform consumers about their use of ADMT with a Pre-use Notice and then give consumers a chance to opt out when ADMT is used for significant decisions. These include decisions affecting access to financial services, housing, education, and employment; extensive profiling; or training uses of ADMT.
The SRIA's calculations for the cost of the two provisions were based on what fraction of all CCPA-covered businesses would also be subject to these specific requirements, yielding three scenarios:
Low scenario: 25% of all CCPA-covered businesses would be subject to these provisions (13,082 out of 52,326 businesses)
Medium scenario: 50% of all CCPA-covered businesses would be subject to these provisions (26,163 out of 52,326 businesses)
High scenario: 100% of all CCPA-covered businesses would be subject to these provisions (all 52,326 CCPA-covered businesses)
After talking with people in the industry, the SRIA’s economists estimated that each firm would need 120 hours to complete the organizational and regulatory requirements for the RA in the first year. To determine an appropriate wage rate, they analyzed the types of professionals likely to be involved in implementing the regulations, arriving at a blended hourly rate of $56.40, based on “an hourly wage of $25.91 for office and administrative support occupations, $42.67 for compliance officers, and $100.61 for lawyers.”
Total first-year costs were calculated by multiplying the per-business cost by the number of businesses in each scenario, for a low range of $89 million, a medium range of $177 million, and a high range of $354 million.
For ongoing costs in subsequent years, the average of these three one-year costs, which was roughly $207 million, was then used to calculate a low range and a high range. As was explained in the SRIA, “We estimate that subsequent RAs—including both new RAs for new PI-processing, as well as reviews and updates to existing PI-processing—will represent 15–30% of total year one compliance costs, with the higher compliance cost threshold occurring in earlier years before gradually falling.”
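This arithmetic is simple enough to reproduce. A minimal sketch in Python, using the hour, wage, and business-count figures quoted above:

```python
# Risk assessment (RA) compliance cost arithmetic from the SRIA figures above.
HOURS_FIRST_YEAR = 120            # per-firm hours needed in the first year
BLENDED_WAGE = 56.40              # $/hour blend of admin, compliance, and legal labor
SCENARIOS = {"low": 13_082, "medium": 26_163, "high": 52_326}  # covered firms

per_firm_cost = HOURS_FIRST_YEAR * BLENDED_WAGE   # $6,768 per firm

# First-year totals per scenario: ~$89M (low), ~$177M (medium), ~$354M (high)
first_year = {name: per_firm_cost * n for name, n in SCENARIOS.items()}

# Ongoing years: 15-30% of the average first-year cost (~$207M),
# yielding roughly $31M-$62M per year.
avg_first_year = sum(first_year.values()) / len(first_year)
ongoing_low = 0.15 * avg_first_year
ongoing_high = 0.30 * avg_first_year
```

Running these numbers reproduces the SRIA's reported totals almost exactly, which is a useful sanity check before questioning the assumptions that feed them.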
The table below captures the first-year estimates and ongoing costs.
The SRIA is a little more opaque about the paperwork cost for ADMT. The assessment notes that the “compliance estimates come from our own understanding of the regulatory delta and how it relates to the other hourly estimates we derived from industry experts on the RA and CSA regulations.” This part of the assessment assumes that programmers will need between 240 hours and 360 hours to complete the changes. Assuming the software developer hourly rate of $91.14, this corresponds to $21,874 to $32,810 in first-year direct compliance costs per business. Much like the RA regulations, the study estimates “that subsequent years will represent 15% – 30% of total year one compliance costs, with the higher compliance cost threshold occurring in earlier years before gradually falling.”
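The per-business ADMT figures follow directly from the stated hours and rate. A quick sketch (the per-business ongoing figures are my own derivation from the SRIA's 15-30% assumption, not numbers the study reports):

```python
# ADMT per-business compliance cost from the SRIA figures above.
DEV_RATE = 91.14                    # software-developer hourly rate, $/hour
HOURS_LOW, HOURS_HIGH = 240, 360    # estimated programmer hours, first year

cost_low = HOURS_LOW * DEV_RATE     # $21,873.60, i.e. ~$21,874 per business
cost_high = HOURS_HIGH * DEV_RATE   # $32,810.40, i.e. ~$32,810 per business

# Subsequent years are assumed to run 15-30% of first-year costs,
# implying roughly $3,281 to $9,843 per business per year.
ongoing_low = 0.15 * cost_low
ongoing_high = 0.30 * cost_high
```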
Putting it all together,
“First-year total costs are comprised of approximately $369M in costs associated with updates to CCPA regulations, $2.0B in costs associated with CSA, $207M in costs associated with RA, and $835M in costs associated with ADMT. While CCPA updates do not have estimated ongoing costs, there are ongoing annual costs associated with each of other elements including CSA (estimated range of $308M-$615M per year), RA (estimated range of $31M–$62M per year), and ADMT (estimated range of $125-$250M per year).”
To be honest, I’m not a huge fan of how this study averaged the low, medium, and high scenarios for business compliance. While it is common to game out high, medium, and low scenarios in these analyses and then average them, it is critical to estimate a best-guess upper bound alongside best-guess middle and lower bounds. If a rule is meant to apply to every business, then our top-line estimate should reflect that assumption.
To be clear, this doesn’t mean that we shouldn’t vary the compliance costs by firm. In fact, the estimate for ADMT does exactly this, varying the number of hours needed to complete the requirement between 240 hours and 360 hours. Rather, it means we need to take this 360-hour estimate and compute its absolute top end cost through all calculations.
I would go further and vary the wage rate as well. Every SRIA estimate assumes the work will be done in-house, but many small businesses will need to contract it out to specialized firms. These costs won’t be a fraction of an employee’s time but an ongoing yearly contract.
Contractrates.fyi, a site that crowdsources contract data, puts the cost of cybersecurity consultants at $148 per hour, almost triple the SRIA’s rate for the risk assessment work. At that rate, a business would spend $17,760 on first-year compliance, not $6,768. Similarly, the market rate for frontend engineers comes to $395 per hour, over four times the official rate for programmers. Cost estimates for full compliance by all businesses at market rates are tabulated below. They still include low and high scenarios, which assume that ongoing yearly costs are either 15% or 30% of the average first-year cost.
While the official study does use these numbers as inputs to a macroeconomic model, it doesn’t estimate the total cost of the rules in current dollars. Similar to discounted cash flow (DCF) analysis, we can think of a regulation as a stream of future costs it is expected to generate over a 10-year period. By summing those costs and discounting them back to present value using a discount rate that reflects their long-term time horizon, we can estimate the current market value of the regulation. The table below calculates those regulatory costs using the federal two percent discount rate for three scenarios.
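The discounting step works like a standard DCF: treat the first-year cost as an upfront outlay and each subsequent year's ongoing cost as a future cash flow discounted at 2%. A minimal sketch; the example inputs are hypothetical, since the post's tables aggregate scenarios in ways this simple function doesn't reproduce:

```python
def present_value(first_year_cost: float, ongoing_cost: float,
                  years: int = 10, rate: float = 0.02) -> float:
    """Discounted cost of a rule: the first-year cost up front, then
    `ongoing_cost` in each of the remaining years, discounted at `rate`."""
    pv = first_year_cost
    for t in range(1, years):            # years 2..10, discounted t periods
        pv += ongoing_cost / (1 + rate) ** t
    return pv

# Hypothetical example: $100M first-year cost, $20M ongoing per year,
# comes to roughly $263M in current dollars over ten years.
pv = present_value(100e6, 20e6)
```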
Using the SRIA’s numbers, the ten-year discounted cost of the risk assessment provision is currently valued between $4 billion and $7 billion, while the cost of the ADMT rule falls between $18 billion and $28 billion. But those numbers jump if we assume that every business needs to comply. In that case, the RA is valued between $8 billion and $12 billion, and the ADMT could mean $31 billion to $49 billion in current costs. In the most extreme case, the market-rate cost of RA compliance could reach $32 billion, while the ADMT rules might be currently valued at $210 billion.
Truth be told, my preferred cost estimate is the middle tranche, the one that models full compliance using the official wage rates rather than market rates. This approach acknowledges that all covered businesses will eventually need to comply, without making the extreme assumption that all will need to hire outside consultants at premium rates.
Admittedly, those are eye-popping numbers, but I think they reflect what’s being proposed. If regulators truly want to mandate a comprehensive AI governance regime across the entire economy, we should expect substantial costs. None of this is to say the costs might not be justified. But policymakers need to grapple with a realistic estimate of the economic burden, especially given the uncertain nature of AI risks and the potential for compliance costs to disproportionately impact smaller innovators.
Proposed Department of Commerce regulations
Besides the newest CCPA provisions, proposed rules from the Bureau of Industry and Security (BIS) offer another look at how much AI legislation might cost. The proposed rules, which were part of Biden’s Executive Order on AI, would require covered U.S. persons with advanced AI models or clusters exceeding the technical thresholds to notify the BIS on a quarterly basis. It would also require covered persons to respond to follow-up inquiries by BIS within 30 days.
As noted in the official filing, “BIS estimates the specific survey required by this proposed rule will have an estimated burden of 5,000 hours per year aggregated across all new respondents,” which the agency estimated was “between zero and 15 companies” at the time of writing. Assuming 15 companies shoulder those 5,000 hours, each company would bear 333 hours of compliance, similar to what California estimates for risk assessments.
However, in docket filings for the rulemaking, the US Chamber, TechNet, and the Computer and Communications Industry Association all said that the 5,000 hours significantly underestimated the actual burden of quarterly reporting. Peko Wan, CEO at Pundi X, a company that enables retail stores to accept crypto payment, also thinks the number is too low. She told the trade publication PYMNTS that companies could be burdened between $570,000 and $815,500 annually to comply with the new mandates.
Estimates of paperwork burden hours are known to have problems. In 2018, the GAO reviewed 200 estimates and found that 76 of them did not convert burden hours into dollar costs, which is also the case for the BIS rule. Importantly, the report “found that one [information collection request] underestimated burden by as much as $270 million, and another overestimated burden time by more than 12 million hours.” Internal review processes failed to catch significant errors.
Agencies often lack standardized, data-driven methods for estimating paperwork burden and instead rely on the personal judgment of program analysts. As a result, burden estimates are “not based on objective, rigorous, or internally consistent methodologies” in many cases. Officials themselves admit skepticism about their own numbers, and in practice, there is little process to verify or validate an estimate once a collection is underway.
LLM estimate of costs
LLMs might offer a pathway towards better estimates. I’ve been especially interested in a recent paper from Park et al. (2024) that “simulates the attitudes and behaviors of 1,052 real individuals—applying large language models to qualitative interviews about their lives, then measuring how well these agents replicate the attitudes and behaviors of the individuals that they represent.”
This approach could be adapted to simulate how individuals and businesses interact with regulatory paperwork, capturing a more diverse range of experiences than traditional estimation methods. By modeling various user personas that differ by industry, LLMs could generate more nuanced time and cost estimates that reflect real-world variability. While not a complete solution, this method could complement existing approaches by introducing more consistency and empirical grounding to what has largely remained a subjective process.
As a first test of the idea, I prompted ChatGPT, Claude, and Grok to act as a compliance officer at a company regulated by the two CCPA provisions and the BIS rule, then asked each LLM to estimate the number of hours needed to comply in the first year and in every year after that. The full chats are linked at the end of this post. The three sets of estimates, covering the RA provisions, the ADMT provisions, and the BIS rule, are captured below, along with the official estimates.
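My tests were manual chats, but the persona setup is easy to script for larger runs. A rough sketch of the prompt construction; the wording here is illustrative, not the exact prompt I used:

```python
def build_compliance_prompt(rule: str, company_profile: str) -> str:
    """Assemble a persona prompt asking an LLM to estimate compliance hours.

    Illustrative wording only; the actual prompts used in this post are
    linked at the end of the article.
    """
    return (
        f"You are a compliance officer at {company_profile}. "
        f"Your company is subject to the following new rule: {rule}. "
        "Estimate the number of staff hours needed to comply in the first "
        "year, and the hours needed in each subsequent year. Break the "
        "estimate down by task and explain your reasoning."
    )

# Example: the RA scenario. The resulting string can be pasted into a chat
# interface or sent to ChatGPT, Claude, or Grok through their APIs.
prompt = build_compliance_prompt(
    rule="the CCPA risk assessment (RA) regulations",
    company_profile="a mid-sized California business that processes "
                    "consumer personal information",
)
```

Scripting the prompt this way would also make it straightforward to vary the company profile by industry and size, which is the "digital twin" extension discussed below.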
The data reveals striking disparities between official government estimates and those generated by large language models, suggesting that agencies may be systematically underestimating compliance burdens for new AI regulations.
Some things I noticed:
Estimates for California’s risk assessment (RA) regulation vary widely. Both Claude and Grok estimate that 400-580 hours will be needed for the first year and 150-240 hours for every year thereafter, compared to the SRIA’s appraisal of 120 hours for the first year and 18-36 hours for each additional year. ChatGPT’s first-year range (90-250 hours) straddles the SRIA’s estimate, while its 40-150 hours for each additional year is far higher than the official tally.
Similarly, for the automated decision-making technology (ADMT) provision, both Claude and Grok significantly exceed the government’s 360-hour upper limit for first-year costs, with ranges of 450-640 hours and 480-730 hours, respectively. ChatGPT, on the other hand, puts ADMT costs below official estimates, at 80-300 hours. Nevertheless, ChatGPT still places each additional year’s cost (40-150 hours) above the official numbers (36-108 hours).
Overall, California’s estimates appear to be particularly optimistic about ongoing compliance costs, while ChatGPT, Claude and Grok all predicted much higher ongoing costs.
For the BIS reporting requirements, Claude projects that 1,140 hours will be needed for first-year compliance, more than three times the official estimate of 333 hours. As with the CCPA provisions, ChatGPT predicts lower costs (280 hours) than the government’s projection, while Grok’s estimate (380 hours) aligns closely with the official figure. Importantly, all three LLMs consistently project that ongoing compliance will require substantial resources.
While all three LLMs acknowledge that company size, existing legal infrastructure, and rule complexity will significantly impact the hours required, they differ in how they break down the specific tasks. Grok provides the most granular analysis, detailing specific regulatory requirements from the relevant sections of the proposed rules. Claude offers more comprehensive subcategories of work, particularly in training requirements and technical evaluation aspects. ChatGPT takes a broader approach with fewer categories but provides practical context about why the hours are necessary.
To further the analysis, the LLMs’ cost estimates for RA and ADMT have been converted into discounted regulatory costs using the wage rates from the CCPA section above. The table below charts those costs.
Again, what’s noticeable is that cost projections vary widely. California’s own SRIA estimate for RA compliance puts the cost at $4.4-$7 billion over ten years, while Claude and Grok project costs roughly 10 times higher, at $47.3-$74.1 billion. Even the larger SRIA full-compliance scenario ($7.6-$12.0 billion) falls far short of the LLM projections. The gap is even more pronounced for the ADMT rules, where Grok’s upper estimate of $177.4 billion is more than six times California’s official upper bound of $28.3 billion. Notably, while ChatGPT provides more conservative estimates than the other LLMs, it still projects significantly higher total costs because of higher ongoing costs.
All of these estimates are heroic; there’s no doubt about that. But the LLMs provide a gut check, and consistent among them all, it seems that official projections may be overly optimistic. While LLM-generated estimates cannot be considered definitive, they offer a valuable alternative perspective.
Especially for policy analysts, there’s promise with these tools. Instead of relying on analysts’ best guesses, large language models could simulate diverse scenarios that cost estimates are meant to capture, from the small business owner encountering a complex tax form for the first time to the compliance professional at a large corporation managing routine filings. By creating realistic digital twins that reflect varying levels of expertise, available resources, and familiarity with government processes, we could finally generate paperwork burden estimates that capture the true range of human experiences. These AI simulations wouldn’t replace traditional methods entirely, but they could inject much-needed rigor and consistency into a process that has remained stubbornly subjective for decades.
All of the data can be found here. The complete LLM conversations are available in the following links: ChatGPT CCPA compliance chat; Claude CCPA compliance chat; Grok CCPA compliance chat; ChatGPT BIS compliance chat; Claude BIS compliance chat; and Grok BIS compliance chat.