
Issue One applauds Executive Order on AI, urges Congress to pass responsible safeguards


Media Contact

Cory Combs

Director of Media Relations

In response to this week’s Executive Order announced by the Biden-Harris Administration to address the risks of artificial intelligence (AI), Issue One released the following statement from Council for Responsible Social Media Director Alix Fraser:

“Used responsibly, AI has the power to expand human potential, revolutionize industries across the globe, and even enhance democratic governance. AI also has the potential to irreversibly weaken our democracy and undermine U.S. national security. These threats underscore the urgent need for responsible safeguards to prevent the misuse of AI.

“This Executive Order is a positive step in the ongoing effort to adopt guardrails that will address the serious risks posed by AI, and we applaud the Biden Administration for taking necessary action to create government and industry standards for AI safety, security, and transparency. We commend the Administration for taking vital measures to establish reporting requirements for high-risk AI models, guidance for evaluating potentially harmful AI capabilities, and standards to detect, authenticate, and label synthetic content and combat AI-enabled fraud.

“While we welcome this important development, AI requires a rapid policy response at all levels of the federal government. We strongly urge leaders in Congress to work together in a bipartisan way to pass meaningful safeguards to prevent bad actors and foreign adversaries from using AI to spread deceptive and harmful content that could disrupt our elections, undermine our democratic processes, and further inflame partisan divides. This is an existential moment for our democracy, and we must act now.”


Background:

The White House Executive Order on AI, announced on October 30, 2023, builds on themes from the previously released Blueprint for an AI Bill of Rights and focuses on safety, security, and trust in AI. The Executive Order requires more transparency from companies developing AI and introduces safety standards and guidance across a wide range of areas, including AI-generated content.

The Executive Order:

  • Invokes the Defense Production Act to require developers of high-risk AI systems that may pose a threat to national security to notify the federal government when training these models and to share the results of red-team safety tests.
  • Grants the Department of Commerce the authority to develop guidance on authenticating and watermarking AI-generated content, but does not mandate these standards for the tech industry or federal agencies.
  • Calls on Congress to pass bipartisan data privacy legislation and directs federal agencies to advance privacy-preserving research and to evaluate and provide guidance on how agencies collect commercial consumer data and how it is used.
  • Mandates clear guidance on the use of AI in areas such as housing, federal benefits, and federal contracting to prevent discrimination and bias. It also mandates best practices for AI use in the criminal justice, education, and health care systems.
  • Encourages the hiring of AI professionals and promotes AI research through the National AI Research Resource (NAIRR) pilot. It also expands opportunities for skilled immigrants and nonimmigrants to work in the U.S.
  • Issues guidance for federal agencies’ use of AI and expands international partnerships to accelerate the development of AI standards and ensure safe and trustworthy AI.
  • Protects Americans from AI-enabled fraud by establishing standards and guidance on authenticating, detecting, and labeling synthetic content.