Biden Issues Nation's First AI Executive Order. Here's What You Need to Know

The first-of-its-kind order calls on tech companies to develop tools to ensure their AI systems are safe and trustworthy and to protect against unintentional bias.

Photo: Kevin Dietsch (Getty Images)

The Biden Administration is moving forward with a first-of-its-kind artificial intelligence executive order aimed at creating new “consensus industry standards” for developing safe and trustworthy AI and setting up new guardrails to prevent potentially disastrous misuse. Today’s executive order marks a key inflection point in AI regulation, as lawmakers around the world grapple with how best to prevent harm from an emerging and unpredictable technology.

AI experts were divided on Monday. Many expressed cautious optimism over the Biden order’s direction, while others dinged the government’s approach for relying too heavily on the voluntary good graces of multi-billion dollar tech companies.

Biden’s order breaks down into eight categories:

  • New Standards for AI Safety and Security
  • Protecting Americans’ Privacy
  • Advancing Equity and Civil Rights
  • Standing Up for Consumers, Patients, and Students
  • Supporting Workers
  • Promoting Innovation and Competition
  • Advancing American Leadership Abroad
  • Ensuring Responsible and Effective Government Use of AI

On the standards front, the order asks major AI companies to share their safety test results with the government and develop new tools to ensure AI systems are safe and trustworthy. It also calls on AI developers to create a variety of new tools and standards to protect against all sorts of AI doomer catastrophe scenarios, from AI-generated bioweapons to AI-assisted fraud and cyber attacks. The heads of multiple relevant government agencies will be tasked with providing an annual report assessing the potential dangers AI poses to their specific areas of critical infrastructure. Those reports will include an assessment of ways AI could be deployed to make infrastructure more vulnerable to critical failures.

Moving forward, AI developers will be required to share safety test results with the federal government if it’s determined their tools could pose a national security risk. The Biden administration is invoking the Defense Production Act, originally passed in 1950, to enforce those requirements.

“Leveraging the Defense Production Act to regulate AI makes clear the significance of the national security risks contemplated and the urgency the Administration feels to act,” International Association of Privacy Professionals Vice President and Chief Knowledge Officer Caitlin Fennessy said in an emailed statement.

Federal government agencies will work alongside private industry here. The National Institute of Standards and Technology (NIST), for example, will be responsible for developing standards for “red teaming” AI models before their release to the public. The Department of Energy and Department of Homeland Security, meanwhile, will look into the potential threats to infrastructure and other critical systems.

Biden calls for new initiatives to identify AI-generated material

Moving forward, the order will require the Secretary of Commerce to develop guidance for labeling AI-generated content, a practice often referred to as watermarking. The Secretary will need to produce a report cataloging existing capabilities for detecting and labeling AI-generated media, as well as any tools capable of preventing generative AI from being used to create Child Sexual Abuse Material. AI-generated content depicting children in sex acts has begun to flood the internet due to a lack of clear laws or regulations.

The White House hopes this focus on watermarking and proper labeling will “make it easy for Americans to know that the communications they receive from their government are authentic,” according to a fact sheet released Tuesday. It could have a long way to go, though: current generative AI detectors have proved inconsistent, and OpenAI recently admitted its own AI text detector doesn’t really work.

Elsewhere, the order focuses on efforts to prevent AI from being used to discriminate against people. Similarly, the order calls for the development of new criminal justice standards and best practices to determine how AI is used in sentencing, parole, and pretrial release, as well as in surveillance and predictive policing. Notably, the text of the order does not call for outright bans on any of these use cases, a step some privacy advocates had hoped for.

“It is necessary to hold those developing and deploying AI accountable to standards that protect against unlawful discrimination and abuse, including in the justice system and the Federal Government,” the administration wrote in a copy of the executive order obtained by Gizmodo. “Only then can Americans trust AI to advance civil rights, equity, and justice for all.”

The administration went on to say the federal government will take action to ensure that AI companies’ collection and retention of data is “lawful, secure, and promotes privacy.” Here, once again, the Biden administration faces an uphill battle. Content creators across multiple mediums have already sued OpenAI and other AI makers over claims they improperly harvested copyrighted material to train their algorithms. Studies have also shown how AI models can be used to infer personal attributes about supposedly anonymous users.

The Biden order also emphasizes the importance of offering workers, many of whom may be impacted by AI advancements, a seat at the table. The order says AI models should not be developed in ways that undermine workers’ rights, worsen job quality, encourage undue surveillance, or lessen market competition.

On that front, the Biden administration is late to the party. ChatGPT-style large language models have already contributed to layoffs across a variety of white-collar industries in recent months, from sales to journalism. All signs point to that trend getting worse.

Biden’s order similarly spends considerable space on the need for the U.S. to attract more AI talent from overseas to maintain an edge in the global AI tech race. Part of that recruitment drive means easing up on immigration restrictions. The order calls for a streamlining of visa petitions and applications for non-U.S. citizens interested in working on AI projects in the U.S. It also instructs the Commerce Secretary to consider a program focused on identifying and attracting top AI talent around the world.

“The focus on AI governance professionals and training will ensure AI safety measures are developed with the deep understanding of the technology and use context necessary to enable innovation to continue at pace in a way we can trust,” Fennessy added.

In addition to AI’s uses, Biden’s order also tries to address the data used to train increasingly powerful models. The order specifically calls on government agencies to evaluate how they collect and use commercially available information, including data procured from data brokers.

The order builds off previous voluntary commitments from seven of the world’s leading AI firms around watermarking and testing requirements. Those “commitments” essentially amount to self-policing on the part of tech giants. This order, by contrast, carries the weight of the executive pen, though it’s unclear how far government agencies will go to punish firms deemed out of step with the new guidelines. Like all executive orders, Biden’s AI initiatives could be rendered irrelevant if he fails to win reelection and his successor decides to reverse course.

Still, White House Deputy Chief of Staff Bruce Reed told CNBC he believes these and other guidelines included in the order mark “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.”

Experts say the order is a step in the right direction, but more hard policy details are needed

AI experts in the field have so far responded to the order with cautious optimism. The Biden order may lack specific details, but it nonetheless signals a “whole of government” approach to address AI regulation. That’s particularly reassuring given the difficulty of passing meaningful AI legislation in a politically divided Congress.

“The Executive Order seems on track to represent a remarkable, whole-of-government effort to support the responsible development and governance of AI,” Center for Democracy and Technology CEO Alexandra Reeve Givens said in an emailed statement. “It’s notable to see the Administration focusing on both the emergent risks of sophisticated foundation models and the many ways in which AI systems are already impacting people’s rights—a crucial approach that responds to the many concerns raised by public interest experts and advocates.”

But not everyone walked away satisfied with the order. Privacy advocates criticized the lengthy order for refusing to take a stronger position one way or another on the efficacy of AI models that have already been used to surveil or discriminate against Americans. Some, like Mozilla President Mark Surman, also questioned the order’s lack of emphasis on open-source AI initiatives, which have been a major area of divergence between AI makers like Meta and Google.

“I do wish we had seen something on open source and open science in the Executive Order,” Surman said in an emailed statement. “Openness and transparency are key if we want the benefits of AI to reach the majority of humanity, rather than seeing them applied only to use cases where profit is the primary motivator.”

Others, like Surveillance Technology Oversight Project Executive Director Albert Fox Cahn, blasted the order, arguing it was a missed opportunity to take a hardline stance against invasive AI technologies like facial recognition and other biometric surveillance systems.

“Many forms of AI simply should not be allowed on the market,” Fox Cahn said in a statement. “And many of these proposals are simply regulatory theater, allowing abusive AI to stay on the market.”

Perhaps most critically, the sweeping executive order still lacks much in the way of teeth to actually hold AI firms accountable. For now, the federal government is hoping it can walk in lockstep with major AI firms, some of which appear eager to embody the “move fast and break things” tech ethos.

“The White House is continuing the mistake of over-relying on AI auditing techniques that can be easily gamed by companies and agencies,” Fox Cahn added.

Update 4:45 PM EST: Added details from the executive order and statements from AI experts.