Episode 11: “State of the AI Union: Artificial Intelligence at the US Federal Level”
Episode description:
In this episode I review current developments (as of May 2023) in artificial intelligence at the United States federal government level.
Presidential Executive Order 13859 (2019): Maintaining American Leadership in AI
Office of Management and Budget (2020): Guidance for Regulation of Artificial Intelligence Applications
Office of the Director of National Intelligence (2020): Artificial Intelligence Ethics Framework for the Intelligence Community
Presidential Executive Order 13960 (2020): Promoting the Use of Trustworthy AI in the Federal Government
National AI Initiative Act, law passed 2021
Office of the Comptroller of the Currency (2021): Model Risk Management Handbook
U.S. Department of Health & Human Services (2021): Trustworthy AI (TAI) Playbook
Artificial Intelligence at the National Institutes of Health (NIH)
U.S. Equal Employment Opportunity Commission (EEOC) (2021): Artificial Intelligence and Algorithmic Fairness Initiative
White House Office of Science and Technology Policy (2022): Blueprint for an AI Bill of Rights
US Department of Defense (2022): Responsible Artificial Intelligence Strategy and Implementation Pathway
Algorithmic Accountability Act of 2022, proposed law
Library of Congress U.S. Copyright Office (2023): Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence
The National Institute of Standards and Technology (NIST): [1] [2]
National Artificial Intelligence Research Resource Task Force (2023): Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem
The US Patent and Trademark Office (USPTO): [1] [2]
Consumer Financial Protection Bureau (CFPB), Dept. of Justice’s Civil Rights Division, Equal Employment Opportunity Commission (EEOC), Federal Trade Commission (FTC) (2023): Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems
White House Statement (2023): FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety
Oliver Patel: https://www.linkedin.com/in/oliver-patel/
Episode script:
Hello everyone. My name is Scot Pansing and this is AI Quick Bits, a podcast that breaks down various artificial intelligence topics of my choosing into brief, snackable episodes. Today I’m going to quickly cover the state of federal-level AI developments in the United States as of May 2023. It’s not surprising that a lot of people assume little or nothing is being done or even thought about here, because most governments, including the United States, typically take a back seat to technological development – they usually let the market do its thing and decide to regulate later. While that’s not entirely the case with AI, as I’ll outline, there has been a lot more positioning than actual rules and regulation.
And truth be told, Europe is set to vote on a broad law that might have some real teeth – “The AI Act” – in June, just about a month away. This will kick off Season One of AI Legislation.
But this episode is only going to cover artificial intelligence at the United States federal level. I’m going to try to run through these items pretty quickly – I mean, the podcast is called AI Quick Bits – and surprisingly, there is a lot to get through. I’m leaving some stuff out, so consider these the “greatest hits.”
This episode would not have been possible without a post on LinkedIn from Oliver Patel which listed many of these items, and comments from others on the post with additional information. Okay here we go!
There are prior mentions of artificial intelligence from the federal government, but I’m starting with Presidential Executive Order 13859 from 2019: Maintaining American Leadership in AI. This document states principles and sets timelines for some initial standards setting, reports, and research. The main principles are:
The US must drive technological breakthroughs in AI across the federal government, industry, and academia.
The US must drive development of appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies.
The US must train current and future generations of American workers with the skills to develop and apply AI technologies.
The US must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application.
The US must promote an international environment that supports American AI research and innovation and opens markets for American AI industries.
Broad strokes. Lead. Drive standards. Train workers. Trust & Safety. Make money.
The following year, in 2020, the Office of Management and Budget issued Guidance for Regulation of Artificial Intelligence Applications. This is the first guidance on policy and regulatory approaches to AI applications developed and deployed outside of the federal government. It pretty much backs up the Executive Order, stating that these policies and regulations should not hamper innovation and should stick to the established principles.
Around this time the Department of Defense released its Ethical Principles for Artificial Intelligence, built on the U.S. military’s existing ethics framework, which is based on the U.S. Constitution, Title 10 of the U.S. Code, the Law of War, existing international treaties, and longstanding norms and values.
Soon after, the Office of the Director of National Intelligence issued the Artificial Intelligence Ethics Framework for the Intelligence Community. This guide on how they should procure, design, build, use, protect, consume, and manage AI and related data is based on principles of:
Respecting the Law and Acting with Integrity
Being Transparent and Accountable
Being Objective and Equitable
Promoting Human-Centered Development and Use
Being Secure and Resilient
Being Informed by Science and Technology
Sounds good to me, Intelligence Community! So not long after Executive Order 13859, the Department of Defense and the Intelligence Community release formal ethics documents. And in December of 2020 we get another Executive Order: Promoting the Use of Trustworthy AI in the Federal Government: restating principles and setting some timelines for more reports and research.
Then in January 2021, Congress passes the National AI Initiative Act, which basically puts into law that certain task forces and research entities must be formed, that certain reports and standards must be created, and so on.
Soon after that, in 2021, the Office of the Comptroller of the Currency publishes its Model Risk Management Handbook and the U.S. Department of Health & Human Services puts out its Trustworthy AI Playbook. Agencies are starting to take the prioritization of AI seriously and to develop materials that bring their organizations in line with the principles.
I’d also like to mention that the National Institutes of Health – or NIH – which makes a wealth of biomedical data available to research communities and aims to make these data findable, accessible, interoperable, and reusable – has posted an abundance of information about its AI initiatives on its website.
But wait, there’s more! Also in 2021, the U.S. Equal Employment Opportunity Commission – or EEOC – launched an Artificial Intelligence and Algorithmic Fairness Initiative. Its purpose is to ensure that the use of software – including AI, machine learning, and other emerging technologies – in hiring and other employment decisions complies with the federal civil rights laws that the EEOC enforces.
So, at this point some of the initial principles established around trust, privacy, and civil liberties are shining through, and then in October of 2022 the White House Office of Science and Technology Policy issues the Blueprint for an AI Bill of Rights which states:
You should be protected from unsafe or ineffective systems.
You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
To be clear, although these sound authoritative, and are coming out of the White House, they are not laws. This is guidance for civil liberties in the AI era.
Also in 2022, the US Department of Defense releases its second AI principles document, the Responsible Artificial Intelligence Strategy and Implementation Pathway. Then we get a proposed law in Congress, the Algorithmic Accountability Act of 2022, which would require companies to assess the impacts of the automated systems they use and sell, create additional transparency about when and how automated systems are used, and empower consumers to make informed choices about the automation of critical decisions. The thing is, this act is stuck in committee and doesn’t look like it will pass.
Now this next one, even though it is also not law, got a lot of people to pay attention: in early 2023 the Library of Congress U.S. Copyright Office casually slipped some guidance into the Federal Register called Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence. This was a bit of a mic drop: they reminded everyone that only material that is the product of human creativity is eligible for copyright registration under U.S. law. They said they are seeing a spike in registration applications involving AI-generated material, and even called out a specific example of a graphic novel containing imagery created entirely with Midjourney. They granted copyright for the story written by a human, but not for the images created with artificial intelligence.
I mentioned standards as one of the priorities and principles that the government established early on, and The National Institute of Standards and Technology (NIST) got straight to work – there’s a ton of information on their website – including the development of an Artificial Intelligence Risk Management Framework (AI RMF). And it’s around this time that the National Artificial Intelligence Research Resource (NAIRR) Task Force publishes something called Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem, basically outlining how a national cyberinfrastructure will be set up to democratize and accelerate AI research and development. Now we have a formalized framework on how these various AI-specific entities are going to communicate, cooperate, and innovate.
We’re almost done, hang in there! Similar to the Library of Congress U.S. Copyright Office, The US Patent and Trademark Office (USPTO) is openly discussing questions about what it means when AI gets involved with inventions and patents, and I’ve got links to these in the notes.
In April of 2023, in the wake of the generative AI explosion that began in the fall of 2022, we get a declaration from four federal enforcement agencies. A warning to the private sector: the absence of artificial intelligence specific legislation does not mean that existing laws won’t be exercised to their fullest extent over AI products and companies. This Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems came from the Consumer Financial Protection Bureau (CFPB), the Department of Justice’s Civil Rights Division, the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC).
And finally, just the other day (I’m recording this in May 2023), Vice President Harris and senior Administration officials met with the CEOs of Alphabet, Anthropic, Microsoft, and OpenAI. In the meeting the administration advocated for responsible innovation that serves the public good, while protecting our society, security, and economy. They further declared that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public. But the main thing to come out of this meeting? 140 million dollars in additional funding to launch seven new National Artificial Intelligence Research Institutes, bringing the total number of AI institutes to 25 across the United States.
So what does all of this mean? What’s the big picture here? Well, it means that so far the United States has no AI-specific laws on the books with any actual regulation. There are a lot of positioning documents and principles, many around safety and civil rights, which is great. And we absolutely need new standards, research bodies, and infrastructure for all of this to thrive. I hope we can now swiftly move to the next phase, because artificial intelligence is currently moving with incredible velocity and remarkable bursts of acceleration, and is already beginning to affect society. It’s time to put together effective legislation that clearly spells out what is okay and what is against the law.
As I mentioned near the top of this episode, Europe is poised to do this first with “The AI Act” this summer, and it looks like the United States is planning to let them take the first steps and iterate from there. And that’s fine. I just hope the politics of US election timing do not get in the way, and we are not waiting until 2024 and beyond to see movement here. We just don’t have that kind of time, and relying on existing laws to police artificial intelligence can only get us so far. Also, the status of the Algorithmic Accountability Act is not encouraging. I’d like to see Congress either find a way to pass it, or kill it and move on to something that can get passed.
I hope you’ve found this breakdown helpful. That’s all for now and thanks for listening!