
The Biden Administration’s New AI Guidelines

Written by Anne Valaitis | Apr 1, 2024

The White House is taking a step forward in managing the risks and benefits of artificial intelligence (AI) in government. On Thursday, March 28th, Vice President Kamala Harris announced that the Office of Management and Budget (OMB) is issuing its first government-wide policy on AI. It’s a key part of President Biden’s Executive Order on AI, which aims to make sure the government uses the technology in a safe, secure, and responsible way.


The new policy tells federal agencies what they need to do to address the risks of using AI. By December 1, 2024, agencies will have to put in place safeguards when using AI in ways that could impact Americans’ rights or safety. This includes assessing and testing AI systems, making sure they don’t discriminate against certain groups, and being transparent about how the government is using AI.


For example, when you’re at the airport, you’ll still be able to opt out of facial recognition without any delays or losing your place in line. If AI is used in healthcare to help with important decisions, a human will oversee the process to make sure the AI tools are accurate and fair. And if AI is used to detect fraud in government services, there will be human oversight and ways for people to seek help if the AI causes any harm.


The policy also tells agencies how to manage risks when they buy AI from outside companies. Later this year, OMB will act to ensure that agencies’ AI contracts protect the public’s rights and safety. The office is also asking for public input on how to make sure private companies working with the government follow best practices.


In addition to addressing risks, the policy aims to make the government’s use of AI more transparent. Agencies will have to publicly release information about how they’re using AI, including cases that impact rights or safety and how they’re addressing those risks. They’ll also have to release government-owned AI code, models, and data, as long as doing so doesn’t pose a risk to the public or to government operations.


Agencies are now encouraged to responsibly experiment with new AI technologies like generative AI, provided there are proper safeguards in place. Many agencies are already using AI chatbots to improve customer service and running other AI pilots.


To make sure agencies have the right people to build and use AI responsibly, the policy tells them to expand and train their AI workforce. The government has committed to hiring 100 AI professionals by Summer 2024 and is holding a career fair for AI roles on April 18. The President’s budget also includes funding to expand government-wide AI training programs.


Finally, the policy requires agencies to set up AI Governance Boards, led by top officials, to oversee the use of AI across their agencies. Some departments, like Defense, Veterans Affairs, Housing and Urban Development, and State, have already set up these boards, and every major agency has to do so by May 27, 2024.


Keypoint Intelligence Opinion

So, what does this all mean? While this new policy is a significant step in the right direction, there’s still more work to be done. The government will need to continue engaging with the public, industry, and experts to make sure its use of AI is transparent, accountable, and beneficial to all Americans. It will also need to keep up with the rapid pace of AI development and adapt its policies and practices accordingly.


By setting clear guidelines and expectations for federal agencies, the new OMB policy lays the groundwork for a future where AI is used to improve government services, protect public safety, and advance the greater good—all while safeguarding the rights and interests of the American people.


It will also be crucial for the government to work with industry experts, academics, and the public to build a comprehensive AI governance framework that works for everyone. The US has a chance to lead the way and set an example for the rest of the world on how to balance innovation with protecting the public interest and individual rights when it comes to artificial intelligence.


Browse through our Industry Reports Page (latest reports only). Log in to the InfoCenter to view research, reports, and studies on AI through our Artificial Intelligence and Workplace CompleteView Advisory Services. If you’re not a subscriber, contact us for more information.