When it comes to new technology, it’s been said that government initially stays uninvolved – then gets too involved. The White House’s guidelines for federal agencies on AI regulations, issued this week as part of the American AI Initiative announced last February, suggest a government still in the uninvolved stage.
If companies pouring billions into AI look to the guidelines for insight into federal regulatory guardrails, they’ll be disappointed. Some will view that as a good thing – industries in emerging sectors prefer a light, laissez-faire touch from Washington. On the other hand, a lack of definition can make for mischief. Consider the hypothetical of a plaintiff declaring that an AI provider is in violation of, say, the ninth (of 10) White House AI guidelines, which reads in part:
Safety and Security
Agencies should promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process.
No one would object to that. At the same time, it’s guidance that can be easily, unintentionally and routinely violated, offering up a field day for the litigious. But for now, the Administration – with an eye toward the national security and economic competitiveness implications of global AI leadership – has given the industry and regulators directives low on restrictions. Later, as AI matures, future Administrations, Congresses and the courts may adopt a more focused and active role.
Speaking to the media this week, U.S. Chief Technology Officer Michael Kratsios said the Administration requires agencies issuing regulations affecting AI developed by private-sector companies to account for each of the White House’s 10 guidelines. The overriding principle, Kratsios said, as quoted in Federal News Network, is to “maintain and strengthen the U.S. position of leadership” on AI.
“The U.S. AI regulatory principles provide official guidance and reduce uncertainty for innovators about how the federal government is approaching the regulation of artificial intelligence technologies,” Kratsios said, according to Federal News Network. “By providing this regulatory clarity, our intent is to remove impediments to private-sector AI innovation and growth. Removing obstacles to the development of AI means delivering the promise of this technology for all Americans, from advancements in health care, transportation, communication — innovations we haven’t even thought of yet.”
Deputy U.S. CTO Lynne Parker said the guidelines are designed to be flexible.
“While there are ongoing policy discussions about the use of AI by the government, this action in particular though addresses the use of AI in the private sector,” Parker said. “It’s also important to note that these principles are intentionally high-level. Federal agencies will implement the guidance in accordance with their sector-specific needs. We purposefully want to avoid top-down, one-size-fits-all blanket regulation, as AI-powered technologies reach across vastly different industries.”
In abbreviated form, here are the guidelines (the full draft version, published here, will be finalized after a 60-day public comment period):
Public Trust in AI: AI is expected to have a positive impact across sectors of social and economic life, including employment, transportation, education, finance, healthcare, personal security, and manufacturing. At the same time, AI applications could pose risks to privacy, individual rights, autonomy, and civil liberties that must be carefully assessed and appropriately addressed… The appropriate regulatory or non-regulatory response to privacy and other risks must necessarily depend on the nature of the risk presented and the appropriate mitigations.
Public Participation: Public participation, especially in those instances where AI uses information about individuals, will improve agency accountability and regulatory outcomes, as well as increase public trust and confidence. Agencies should provide ample opportunities for the public to provide information and participate in all stages of the rulemaking process….
Scientific Integrity and Information Quality: The government’s regulatory and non-regulatory approaches to AI applications should leverage scientific and technical information and processes. Agencies should hold information, whether produced by the government or acquired by the government from third parties, that is likely to have a clear and substantial influence on important public policy or private sector decisions (including those made by consumers) to a high standard of quality, transparency, and compliance.
Risk Assessment and Management: Regulatory and non-regulatory approaches to AI should be based on a consistent application of risk assessment and risk management across various agencies and various technologies. It is not necessary to mitigate every foreseeable risk; in fact, a foundational principle of regulatory policy is that all activities involve tradeoffs. Instead, a risk-based approach should be used to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits.
Benefits and Costs: When developing regulatory and non-regulatory approaches, agencies will often consider the application and deployment of AI into already-regulated industries… Agencies should, when consistent with law, carefully consider the full societal costs, benefits, and distributional effects before considering regulations related to the development and deployment of AI applications. Such consideration will include the potential benefits and costs of employing AI, when compared to the systems AI has been designed to complement or replace, whether implementing AI will change the type of errors created by the system, as well as comparison to the degree of risk tolerated in other existing ones.
Flexibility: When developing regulatory and non-regulatory approaches, agencies should pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications. Rigid, design-based regulations that attempt to prescribe the technical specifications of AI applications will in most cases be impractical and ineffective, given the anticipated pace with which AI will evolve and the resulting need for agencies to react to new information and evidence.
Fairness and Non-discrimination: Agencies should consider in a transparent manner the impacts that AI applications may have on discrimination. AI applications have the potential of reducing present-day discrimination caused by human subjectivity. At the same time, applications can, in some instances, introduce real-world bias that produces discriminatory outcomes or decisions that undermine public trust and confidence in AI. When considering regulations or non-regulatory approaches related to AI applications, agencies should consider, in accordance with law, issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application….
Disclosure and Transparency: In addition to improving the rulemaking process, transparency and disclosure can increase public trust and confidence in AI applications. At times, such disclosures may include identifying when AI is in use, for instance, if appropriate for addressing questions about how the application impacts human end users… Agencies should carefully consider the sufficiency of existing or evolving legal, policy, and regulatory environments before contemplating additional measures for disclosure and transparency.
Safety and Security: Agencies should promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process. Agencies should pay particular attention to the controls in place to ensure the confidentiality, integrity, and availability of the information processed, stored, and transmitted by AI systems.
Interagency Coordination: A coherent and whole-of-government approach to AI oversight requires interagency coordination. Agencies should coordinate with each other to share experiences and to ensure consistency and predictability of AI-related policies that advance American innovation and growth in AI….