Issue One’s Council for Responsible Social Media partners with Cornell University and the U.S. Institute of Peace to host a “Peace Games” exercise with Members of Congress

The tabletop exercise brings together Congress and leading experts to respond to potential national security threats posed by social media and AI

Media Contact

Cory Combs

Director of Media Relations

Today, members of Congress from both parties came together to consider solutions to pressing national security challenges in a fast-moving and evolving information environment. The members analyzed a simulated geopolitical crisis involving heightened tensions between major military powers, as well as the potential ramifications of social media and artificial intelligence for U.S. national security.

Cornell University and the U.S. Institute of Peace, in partnership with Issue One’s Council for Responsible Social Media (CRSM), designed the “Peace Games” exercise to give lawmakers the opportunity to work with one another in a bipartisan way to develop strategic responses to serious geopolitical challenges that could impact U.S. interests. The most recent game focused on a potential conflict between global military powers and how the information environment, including social media, could increase the possibility of conflict.

“It is essential that we prepare for national security threats involving the online spread of false or misleading information and abuses of advanced AI technology,” said former Lieutenant Governor of Massachusetts and CRSM Co-chair Kerry Healey. “We must find ways to reach across the aisle and have tangible, bipartisan solutions for these complex scenarios. Social media poses a major threat to our national security, and the introduction of new AI technologies will only make these threats more severe. That is why these types of bipartisan activities are so important.”

“There has already been a weaponization of deepfake technology that has impacted leaders around the world,” said former Congressman and CRSM member Steve Israel. “We saw an example in the early days of the Ukraine invasion, with a deepfake video of President Zelenskyy saying, ‘We surrender, put down your weapons’ that was picked up by social media platforms. And that isn’t the only example. As these technologies continue to mature, our national security strategies need to evolve to match the threats.”

With advances in artificial intelligence, a proliferation of deepfakes on social media presents a serious challenge to U.S. national security. As demonstrated in the Peace Game, false and misleading information can often spread faster than the truth, creating serious consequences, including a heightened risk of major conflict. Foreign adversaries and other bad actors can manipulate social media to widely disseminate propaganda or other harmful content on popular platforms, a dynamic that could spiral out of control, escalating the chances of global conflict and, in the long term, threatening democracy around the world.

“We have seen the power of deepfake images proliferating across social media. With a constantly shifting global order, it is vital that our leaders understand the national security risks of these technologies and the increased tensions they may cause,” said CRSM Director Alix Fraser. “Social media algorithms focus on maximizing our attention, ultimately pushing users toward more extreme content. Deepfakes and other advanced technologies only increase the risks of conflict and extremism on these platforms — we must have thoughtful strategies and policies to prevent the inherent national security harms of artificial intelligence.”

The Council for Responsible Social Media, a project of Issue One, launched in 2022 with a mission to address the harms that social media poses to American kids, communities, and national security. Learn more about the Council.