Critics are accusing the U.S. and state governments of overreach for writing compute-speed limits into regulations and laws, which they say will stifle innovation. The U.S. government has multiple regulations designed to prevent the shipment of AI chips and supercomputing technology to China; the restrictions apply when the performance of those chips and systems exceeds certain thresholds.
Critics specifically argue that pegging the rules to fixed computing-speed limits will stifle AI innovation.
These critics also argue that the government is too slow to keep up with chip innovation. The computing power needed to run the latest AI models is growing at a breakneck pace, and regulations and bureaucracy cannot keep up.
A newer proposed regulation, in summary, covers U.S. citizens and permanent residents whose investments involve supercomputing and AI in China. It sets the supercomputer threshold at a compute capacity of 100 petaflops and the AI training threshold at 10^26 FLOPs, the total number of operations used to train a model.
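For a sense of what that training threshold means in practice, here is a minimal sketch, assuming the widely cited rule of thumb that training compute is roughly 6 × parameters × training tokens. The rule itself is written in terms of total computational operations, not this formula, and the model sizes and token counts below are hypothetical.

```python
# Rough sketch: relating model scale to the 10^26 FLOPs training threshold.
# Uses the common approximation  training_flops ≈ 6 * parameters * tokens.
# All model sizes and token counts below are hypothetical examples.

THRESHOLD_FLOPS = 1e26  # AI training threshold cited in the proposed rule

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total FLOPs to train a dense transformer (6ND rule of thumb)."""
    return 6 * parameters * tokens

hypothetical_models = {
    "7B params, 2T tokens": (7e9, 2e12),
    "70B params, 15T tokens": (70e9, 15e12),
    "500B params, 35T tokens": (500e9, 35e12),
}

for name, (params, tokens) in hypothetical_models.items():
    flops = estimated_training_flops(params, tokens)
    status = "over" if flops >= THRESHOLD_FLOPS else "under"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} the 1e26 threshold)")
```

By that rough math, only the very largest frontier-scale training runs would cross the 10^26 line, while the 10^24 level discussed below is within reach of many current models.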
“The numbers may not hold in the future as the compute benchmark continues to move,” said the Center for a New American Security in a public comment responding to the proposed regulation.
About nine AI models meet the 10^24 threshold, five of which were released in 2024. China already has AI models approaching the 10^26 threshold, which means U.S. regulators are writing computing-speed figures into regulations that are already out of date.
“The 10^25 threshold would capture one company, ByteDance, that developed MegaScale, whose FLOP count is speculative,” CNAS said.
The 10^24 threshold has already been met by eight Chinese AI companies that developed nine models, including ByteDance, Alibaba, 01.AI, DeepSeek, XVERSE Technology, Shenzhen Yuanxiang Technology, Baidu, and Tigerobo.
The proposed rule, called “Provisions Pertaining to U.S. Investments in Certain National Security Technologies and Products in Countries of Concern,” requires U.S. persons with interests in Chinese entities’ AI systems to notify the U.S. government about transactions that may run afoul of existing regulations. The rule is what sets the performance thresholds described above.
The U.S. aims to prevent China from building hardware to advance its AI strategy. The U.S. claims advanced chips help China build AI systems and models that could pose a threat to national security.
“Such models can also enable next-generation military capabilities through improving the speed and accuracy of military decision-making and intelligence capabilities,” the U.S. Department of Treasury said in the proposed regulation.
According to a recent report in The Wall Street Journal, China has acquired banned Nvidia GPUs through an underground network of intermediary countries and suppliers.
Venture capital firm Andreessen Horowitz, commonly known as a16z, was blunt in its assessment of what it views as U.S. overreach on computing speeds.
“A16Z has a number of serious concerns with the approach taken by Treasury here … particularly, the focus on computing power used to train the relevant AI model, which we believe is a misguided approach,” the company said.
a16z has invested in 100 AI startups, and its concern is that the focus on computing power will only stifle innovation. AI is advancing so rapidly that no fixed amount of computing power stays sufficient for long.
“Many companies do not track in any detail the precise amount of computing power they use in training AI models,” the company said in its comment. “Carefully tracking the total computing power used is simply not a metric used by most AI companies.”
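To illustrate why tracking is messier than it sounds, here is a minimal sketch, with entirely hypothetical figures, of the after-the-fact estimate a company might attempt: total FLOPs reconstructed from cluster size, wall-clock time, peak chip throughput, and realized utilization, where the last two terms are themselves estimates.

```python
# Minimal sketch (hypothetical figures): back-calculating training compute from
# hardware usage instead of from model size.
#   total_flops ≈ num_accelerators * seconds * peak_flops_per_chip * utilization

def hardware_side_flops(num_accelerators: int, days: float,
                        peak_flops_per_chip: float, utilization: float) -> float:
    """Estimate total training FLOPs from cluster size and wall-clock time."""
    seconds = days * 24 * 3600
    return num_accelerators * seconds * peak_flops_per_chip * utilization

# Hypothetical run: 2,048 accelerators at ~1e15 peak FLOP/s each,
# 60 days of training, 40% realized utilization.
estimate = hardware_side_flops(num_accelerators=2048, days=60,
                               peak_flops_per_chip=1e15, utilization=0.40)
print(f"~{estimate:.1e} FLOPs")  # ~4.2e24 FLOPs under these assumptions
```

Small changes to the utilization or peak-rate assumptions swing the result significantly, which illustrates why a figure few companies log precisely makes an awkward regulatory trigger.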
Calculating training compute is one problem; tracking transactions is another. AI startup founders, especially those of Chinese heritage, will not be able to trace every transaction that touches China. Combined with the inability to track the computing power behind their models, founders would be navigating a minefield.
“The result of this is likely to be a significant chilling effect since U.S. person investors would be rightly reluctant to make investments where they could not know if the transaction was notifiable or prohibited,” a16z said.
The computing-speed limits scattered across regulations will confuse U.S. persons, the Information Technology Industry Council (ITI) said in a comment on the advance notice of proposed rulemaking (ANPRM).
ITI recommended keeping the performance threshold at 10^26 FLOPs of training compute but eliminating the dozens of other metrics in the regulations, such as network and memory bandwidth requirements.
ITI criticized the regulation itself, noting that “advanced packaging techniques are not covered by U.S. export controls and should therefore not be captured in the draft rule, as this creates an inconsistency within U.S. government regulation without a clear national security justification.”
Others criticized specifics of the regulation that fall outside the scope of computing power. The main complaint was the cost of tracking and reporting covered transactions with China, which could run into millions of dollars.
Others were concerned that the regulation was so vague that it would be easy for individuals to get caught up in it.
Companies are also concerned about California’s SB 1047, which would impose safety mechanisms, compliance requirements, and audits on large language models. The goal is to ensure AI models aren’t misused and to hold the developers of those models accountable for the threats they pose.
The bill also calls for the creation of a state government-run cloud service called CalCompute that “advances the development and deployment of artificial intelligence that is safe, ethical and equitable.”
The taxpayer-funded cluster would be made available to researchers, scientists, and nonprofit agencies, among others, to advance AI. It would be a cloud service operated through the University of California system. Stanford University recently complained that it had only a limited number of GPUs on which to conduct research.
The CalCompute initiative has the technology sector scratching its head on several fronts: How secure will CalCompute be? Will it become obsolete almost immediately, with new chip technologies and GPUs arriving every year?
The bill has undergone many amendments, including one that replaces criminal convictions with civil penalties.
OpenAI has opposed the bill, saying regulation isn’t good for AI development. Anthropic has taken a more diplomatic approach, telling Axios that some elements could cost the U.S. its AI leadership.
In a 2022 agency audit, the state auditor found that California’s Department of Technology (CDT) was poorly run, inefficient, and prone to overspending, a record that raises doubts about the state operating a major cloud service.
The CDT did a poor job of maintaining infrastructure and lacked “a roadmap for prioritizing IT-related needs—such as modernizing critical systems,” the report read.
The department did not ensure “that the State’s IT systems are adequately protected from cyberattacks that can compromise individuals’ identities,” the auditor said.