Big Tech’s Broken Promises
A project of Issue One, the Big Tech’s Broken Promises tracker catalogs the history of public proclamations and policy changes announced by the largest technology companies that purported to protect users, prioritize vulnerable communities, or safeguard the broader information ecosystem in which democracies operate. Each of these changes was announced publicly, only to be later retracted, significantly altered, or marginalized, or never to come to fruition at all. Many are half-truths or deflections that hide a different reality. And while these broken promises are not a reflection on the employees of these platforms — who work hard to build safe and healthy systems — they are illustrative of the broader incentives driving these corporations.
We hope this tracker will inform lawmakers, advocates, researchers, and platform users as they seek to apply new oversight measures to these companies. Thank you to the organizations whose diligent, thoughtful work contributed to this tracker, including but not limited to Accountable Tech, the Anti-Defamation League, Center for Countering Digital Hate, Institute for Strategic Dialogue, and Tech Transparency Project.
Check out the tracker’s data dictionary for explanations of categories, dates, and types of announcements.
Tracker last updated in November 2024. Questions? Contact us at techreform@issueone.org.
Meta
Facebook uses a product called XCheck, which is supposed to apply additional oversight to accounts with large followings, such as politicians.
However, internal research found that XCheck had evolved into a “white list” that exempted these types of accounts from company policy, meaning that 'VIP' accounts were able to harass, incite violence, and violate company policy without consequence. This company analysis went so far as to call the program a “breach of...
Product Launch
Trust and Safety
Meta
A blog post on Meta’s efforts to protect the 2020 elections emphasizes the company’s “responsibility to stop abuse and election interference on our platform.” Since 2018, Facebook’s election policies have prohibited “threats of violence relating to voting, voter registration or the outcome of an election.”
However, an unpublished document from the House Select Committee on the January 6 Attack found that in the weeks after the election, hundreds of Facebook groups coordinated efforts to “stop the steal,” often calling for violence within the groups.
Public Announcement
Elections, Radicalization/Extremism
Meta
According to its community standards, Meta removes “ideologies that promote hate, such as Nazism and white supremacy.”
Not only were multiple Nazi and white supremacist accounts active on Facebook and Instagram, but they were also successfully soliciting donations through those accounts.
Terms and Conditions
Radicalization/Extremism
Meta
When someone enters their job, interests, or location on their Facebook profile, Facebook automatically creates a linked page if one does not already exist. A whistleblower petition flagged that not only did Facebook fail to ban designated terrorist groups, it auto-generated hundreds of pages for groups such as...
Terms and Conditions
Radicalization/Extremism
Meta
Despite this, multiple scams targeted migrants through Facebook and WhatsApp, according to the Tech Transparency Project. Multiple accounts across the U.S. and Canada purporting to be immigration experts promoted a visa scam that took users to sites collecting personal information.
Terms and Conditions
Trust and Safety
Meta
Facebook’s policy on human exploitation disallows content that “offers or assists in the smuggling of humans.”
A Tech Transparency Project investigation found active buy-sell groups on Facebook where coyotes offered a menu of services, including helping people cross the border. TTP found WhatsApp was also a frequent tool used by human smugglers to communicate with migrants.
Terms and Conditions
National Security
Meta
Facebook launched a new feature, Group Experts, which allows administrators of Facebook Groups to designate members as having authority or credibility on a topic.
Facebook’s ‘Group Expert’ designation has been granted to multiple anti-vaxxers who spread disinformation about the COVID-19 pandemic.
Product Launch
Trust and Safety
Meta
Meta’s policy on dangerous individuals and organizations does not allow hate organizations or organizations that may intend to coerce civilians or the government. According to internal documents obtained by The Intercept, organizations like the Three Percenters are banned on the platform.
An investigative piece in WIRED found that militia extremists such as the Three Percenters have been organizing through Facebook groups, using it to recruit members and coordinate anti-government activities across the country. The article counted more than 200 extremist groups, some with thousands of members.
Terms and Conditions
Radicalization/Extremism
Meta
Meta promised to “[label] state-controlled media on their Page and on [their] Ad Library” as part of their efforts to protect the 2020 elections.
Meta has not been able to keep up with the scale of foreign influence operations. Research from the Center for Countering Digital Hate finds that “the vast majority (91%) of posts containing content from Russian state media are not covered by this policy and do not carry labels.”
Public Announcement
National Security
Meta
In 2016, Facebook acquired CrowdTangle to help researchers study the platform, including by providing crucial real-time content analysis. A Facebook blog post from 2020 underscored the importance of this tool: “Supporting independent research through data access, training, and resources is critical to understanding the spread of public content across social media...
In March 2024, Meta announced that it would be phasing out CrowdTangle (effective August 2024) and replacing it with a more restrictive and limited tool. This decision, which comes in a year in which nearly half the world's population will vote in at least 64 elections, can...
Product Launch
Elections, Transparency
Meta
Meta published a one-pager on their plans to help combat election and voter interference in the 2022 midterm elections. Among their listed measures were improving researcher access and increasing transparency about political advertising.
However, the AP outlined multiple company actions that undercut these goals. CrowdTangle, which allows third parties such as researchers and journalists to analyze and fact-check Facebook posts, has sometimes been inoperable.
Public Announcement
Elections, Transparency
Meta
In August 2020, Meta said they would take action against “accounts tied to offline anarchist groups that support violent acts, including US-based militia organizations and QAnon.”
Public Announcement
Hate Speech
Meta
Meta claims that it clearly labels all election-related and issue ads on Facebook and Instagram in the U.S., including by putting a "paid for by" disclosure from the advertiser at the top of the ad.
Facebook and Instagram permitted unlabeled advertisements from PragerU Kids, an arm of the right-wing nonprofit media organization PragerU, which is named after conservative talk radio host and co-founder Dennis Prager. The organization has made videos arguing against the $15 minimum wage, questioning climate change, and supporting increased gun ownership.
Terms and Conditions
Transparency
Meta
Between 2018 and 2020, Facebook published at least 15 blog posts highlighting its efforts to remove coordinated inauthentic behavior from Iranian state-backed actors.
In 2020, Facebook removed fake accounts spreading Iranian messaging that had been operating since 2011, bringing into question just how effective previous takedowns were.
Public Announcement
National Security
Meta
Facebook’s then-COO Sheryl Sandberg testified before the Senate Intelligence Committee that the company was “investing heavily in people and technology to keep our community safe and keep our service secure.”
Many of these investments were cut in 2023, when much of the tech sector downsized key integrity or content moderation teams. Reports of substantial layoffs impacted key departments, such as those that address misinformation and trust and safety.
Congressional Testimony
Trust and Safety
Meta
Meta CEO Mark Zuckerberg’s written testimony to the House Energy & Commerce Committee claims that Facebook “remove[s] language that incites or facilitates violence, and [bans] Groups that proclaim a hateful and violent mission from having a presence on our apps.”
Zuckerberg testified to this despite internal findings that suggested otherwise. Internal Facebook research from 2018 warned that the algorithm was designed to push “more and more divisive content in an effort to gain user attention & increase time on the platform.” A 2016 presentation from a Facebook employee found that...
Congressional Testimony
Hate Speech
Meta
Instagram head Adam Mosseri explained that terms in violation of Instagram’s community guidelines were “removed from Instagram entirely” and therefore not findable via the search engine.
Facebook's search feature automatically suggests and auto-fills terms. When the Anti-Defamation League (ADL) entered the names of recognized hate groups into the search bar, it found that hate-related terms were automatically suggested, including five supposedly banned by the platform. In total, the ADL found 40 accounts and 71 hashtags...
Public Announcement
Hate Speech
Meta
In the two years leading up to the 2020 election, Meta released more than 30 statements explaining the measures the platform was taking to mitigate misinformation, foreign interference, and hate speech relating to the election.
Despite these claims, a report by the online advocacy group Avaaz found that Facebook only ramped up its efforts to combat election-related false information in the few weeks leading up to the election. Avaaz estimates that Facebook could have prevented more than 10 billion views on the top 100 election-related...
Public Announcement
Elections
Meta
Meta’s hate speech policy disallows attacks or generalizations based upon a person’s protected characteristics.
A report from the Institute for Strategic Dialogue (ISD) on online gendered abuse in 2022 found that, of all the platforms studied, Facebook hosted some of the highest rates of misogynistic and abusive activity after the reversal of Roe v. Wade. 34% of the top posts on the topic...
Terms and Conditions
Hate Speech
Meta
Prior to 2022, Meta’s policy on political ads prohibited claims on Facebook or Instagram that the election was stolen or fraudulent.
Terms and Conditions
Elections, Trust and Safety
Meta
To address misinformation about climate change on its platform, Meta said it would attach labels to posts discussing climate change.
The Center for Countering Digital Hate identified a ‘Toxic Ten’ of climate disinformation spreaders who were responsible for 69% of users’ interactions with climate change denial content on the platform. Facebook failed to label 92% of this content.
Public Announcement
Trust and Safety
Meta
Per Meta’s policies, advertisements targeted to minors may not promote “products, services or content that are inappropriate, illegal, or unsafe, or that exploit, mislead, or exert undue pressure on the age groups targeted.”
Research from the Center for Countering Digital Hate found that ads promoting abortion reversals, a dangerous procedure, were attached to 83% of searches for abortion on the platform. CCDH estimates that minors saw these types of ads over 700,000 times.
Terms and Conditions
Trust and Safety
Meta
Buying or selling user privileges on Facebook, Instagram, or WhatsApp is explicitly prohibited in Meta’s spam policy.
The Tech Transparency Project found hundreds of Facebook Groups dedicated to buying and selling Facebook manager accounts. Many of these accounts had been approved to run ads on political and social issues, which may have attracted buyers looking to interfere in elections.
Terms and Conditions
Transparency, Trust and Safety
Meta
Per Instagram’s community guidelines, “it's never OK to encourage violence or attack anyone based on their…sex, gender, gender identity, sexual orientation,” including by using the word "groomer" to describe anyone from the LGBT community. Per Meta’s Advertising Standards, all ads on Instagram must adhere to the platform’s community guidelines...
Media Matters found that anti-LGBT ads misusing the term still ran, garnering almost 1 million impressions from 63 ads.
Terms and Conditions
Hate Speech
Meta
Analysis from Media Matters found that “Meta has earned at least $397,500 from 450 ads pushing anti-immigrant ‘invasion’ rhetoric since October 2023.” Many of the 450 also contained white nationalist rhetoric.
Terms and Conditions
Hate Speech
Meta
Meta has touted that “no tech company does more or invests more to protect elections online.”
Despite this claim, the European Union is investigating Meta for its failure to comply with the Digital Services Act, specifically its failures to mitigate risks to electoral processes on the platform.
Public Announcement
Elections
Meta
Meta banned the word ‘groomer’ when referring to the LGBT community, claiming this misuse of the word violated its hate speech policies.
Despite this policy, Instagram still chose to reinstate the account ‘Gays Against Groomers,’ an account that repeatedly attacked the LGBT community and alleged it was “normalizing pedophilia.” Media Matters reports that Gays Against Groomers repeatedly equated the community with groomers and described its members as mentally and morally deficient.
Public Announcement
Hate Speech
Meta
In a 2019 blog post, Meta explicitly banned white nationalism and separatism, including groups associated with these ideologies.
The Anti-Defamation League found that 69 of 130 known hate groups had a presence on Instagram. 51 of them were findable by search on Facebook, despite all of them being in violation of Meta’s policies.
Public Announcement
Hate Speech
Meta
After the 2019 Christchurch attack, in which an Islamophobic massacre was livestreamed on Facebook, Meta promised to "start connecting people who search for terms associated with white supremacy to resources focused on helping people leave behind hate groups."
The Anti-Defamation League investigated this policy by searching for 130 known hate groups on Facebook. They found that only 20 (15%) of the searches produced a warning label or redirected the search.
Public Announcement
Radicalization/Extremism
Meta
In 2021, Instagram launched the ability to add links to Stories, framing it as a feature for “businesses, creators and change-makers” to “inspire their communities.”
The feature often allows users to spread disinformation, and even profit from it. The journalism watchdog group Media Matters tracked instances of anti-vaxxer groups using the feature to organize in-person events, spread misinformation, and sell anti-vaccine merchandise.
Product Launch
Trust and Safety
Meta
In 2021, in response to racist abuse directed at Black footballers in the UK, Instagram claimed to have strengthened its policies to protect against “common antisemitic tropes and other types of hate speech.”
Instagram allowed posts declaring it “antisemitic month” to remain on the platform and initially found that they were not in violation of community guidelines when reported.
Public Announcement
Hate Speech
X (Twitter)
A blog post about Twitter’s efforts regarding the 2020 election promises “tweets meant to incite interference with the election process or with the implementation of election results, such as through violent action, will be subject to removal [including] all Congressional races and the Presidential Election.”
Leaked documents from the Jan. 6th committee reveal that Twitter whistleblowers voiced concerns about the platform being used to incite violence. While whistleblowers repeatedly requested that Twitter take action against “coded incitements to violence” on the platform, no substantial action was taken by management.
Public Announcement
Elections, Radicalization/Extremism
X (Twitter)
Twitter said it would take action against links containing content that promotes hateful conduct.
More than a year after this post, the journalism watchdog group Media Matters identified a network of more than 30 Instagram accounts that collaborate and coordinate overtly racist activity. These accounts promoted white supremacist ideology and often espoused the “great replacement” conspiracy theory that has inspired multiple racial attacks, including...
Public Announcement
Radicalization/Extremism
X (Twitter)
In a blog post on the 2020 general election, Twitter assured users that it does not allow “anyone to use Twitter to manipulate or interfere in elections or other civic processes.”
However, a report from the research organization RAND found a high prevalence of both troll and "super-connector" accounts engaging in disinformation about the 2020 election. This network of more than 300,000 suspicious accounts was mostly balanced between the political left and political right, suggesting that the accounts were created to stoke domestic...
Public Announcement
Elections, National Security
X (Twitter)
In March 2020, with the coronavirus spreading rapidly, Twitter claimed it would remove misleading claims about the virus.
Public Announcement
Trust and Safety
X (Twitter)
In response to congressional investigations into Russian interference in the 2016 election (Twitter sold $275,000 worth of ads to Russia's state-backed RT news agency), Twitter unveiled an "industry-leading transparency center" through which it offered "everyone visibility into who is advertising on Twitter, details behind those ads" and tools through which...
In 2021, Twitter quietly disabled its Ad Transparency Center, claiming it no longer “provides its original intended value.” This was a major blow for public interest researchers.
Product Launch
Transparency
X (Twitter)
In September 2023, X touted its “ongoing commitment to combat antisemitism” as part of the company's larger commitment to combat “hate, intolerance, and prejudice.”
The Tech Transparency Project found that white supremacists leveraged conversations on X about the Israel-Hamas conflict to spread antisemitic content such as the Great Replacement theory.
Public Announcement
Hate Speech
X (Twitter)
A Tech Transparency Project report found multiple X accounts affiliated with Hezbollah, a designated terrorist organization. Not only were these accounts on X but they were also verified.
Terms and Conditions
Radicalization/Extremism
X (Twitter)
X’s hateful conduct policy disallows attacking other users based on sexual orientation, gender, or gender identity. Its policy on abuse and harassment likewise prohibits such attacks on the platform.
An investigation by the Institute for Strategic Dialogue into online gendered abuse on X found that:
• Misogynistic or abusive tweets comprised 10% of the top 100 tweets (by retweets) discussing Liz Cheney and the Jan. 6 hearings.
• Tweets about WNBA player Brittney Griner often “misgendered and dehumanized” her...
Terms and Conditions
Hate Speech
X (Twitter)
In 2017, Twitter testified before the Senate Judiciary Committee that the platform was “actively engage[d] with civil society and journalistic organizations on the issue of misinformation.”
Under Musk’s ownership, the platform has repeatedly sued nonprofits that conduct misinformation research, claiming that they are meddling to discredit the platform and hurt advertising revenue. This includes a lawsuit against the Center for Countering Digital Hate that was costly and damaging, but ultimately dismissed. In the first paragraph of...
Congressional Testimony
Trust and Safety
X (Twitter)
Twitter’s hateful conduct policy prohibits targeting users based on protected characteristics such as ethnicity or religious affiliation. In a blog post, Twitter added that it had trained its content moderators on “cultural and historical contextualization of hateful conduct.”
Twitter has come under fire multiple times for failing to remove posts that are antisemitic and refer to the Holocaust as a "hoax." A report from the Institute for Strategic Dialogue found 19,000 pieces of content on Twitter denying the Holocaust, all created in a two-year span from June 2018 to July 2020.
Terms and Conditions
Hate Speech
X (Twitter)
Per X’s Hateful Conduct Policy, users “may not directly attack other people on the basis of race,...sexual orientation, gender, [or] gender identity."
The Center for Countering Digital Hate found that daily use of the n-word tripled on X after Musk took over, and the use of slurs against gay men and trans persons rose 58% and 62%, respectively.
Terms and Conditions
Hate Speech
X (Twitter)
In 2017, in response to congressional concern about Russian interference in the 2016 election, Twitter testified before the House Intelligence Committee that the company was making changes to stop foreign malign influence operations on its platform.
Later that year, a report prepared for the Office of Naval Research found that Russian agents used Twitter to inflame domestic divisions in the U.S. and spread divisive content. The accounts engaged in conversations around #BlackLivesMatter and the presidential candidates Trump and Clinton.
Congressional Testimony
National Security
X (Twitter)
X’s Help Center asserts the platform’s “responsibility to reduce the spread of potentially harmful misinformation.”
A 2023 TrustLab study of several European Union countries repeatedly singled out Twitter, finding that: (1) Twitter had the highest level of discoverability of mis/disinformation among the platforms studied, (2) mis/disinformation on Twitter received the most engagement on the site, and (3) Twitter had...
Public Announcement
Trust and Safety
X (Twitter)
There have been multiple instances of deepfakes, a type of synthetic media generated by AI, garnering extensive engagement on the platform before being addressed. Fake images of an explosion at the Pentagon went viral, coinciding with a brief dip in the stock market. This popularity was likely extrapolated...
Terms and Conditions
Trust and Safety
X (Twitter)
Musk assured users that his platform would not perpetuate support for fraudulent election claims.
However, “the 10 most widely shared tweets promoting a ‘rigged election’ narrative in the five days following Trump’s town hall…collectively amassed more than 43,000 retweets.”
Public Announcement
Elections
X (Twitter)
Analysis from the Institute for Strategic Dialogue (ISD) found that antisemitic content increased on Musk’s Twitter. ISD found more than 325K possibly antisemitic tweets circulated in an eight-month period after Musk acquired Twitter. Contrary to Musk’s claim, ISD found no meaningful decrease in engagement with these tweets...
Public Announcement
Hate Speech
X (Twitter)
X’s policy on hateful conduct prohibits spreading harmful stereotypes, inciting harassment, or encouraging discrimination of protected categories, such as religious affiliation.
It appears the platform's very owner is violating these policies. Since taking ownership, Musk has amplified antisemitic content, echoed the great replacement theory, and garnered the approval of multiple white nationalists.
Terms and Conditions
Hate Speech
X (Twitter)
In a blog post discussing rule-violating content, the company assured users that it would “continue to invest heavily in improving both the speed and comprehensiveness of our detections.”
Mass layoffs on the company’s election integrity teams days before the 2022 midterm elections raised concerns about the company’s ability to spot false narratives harming civic processes.
Public Announcement
Elections, Trust and Safety
YouTube
Per Google's ad policies, advertisers may not solicit viewers to pay for "official services that are directly available via a government or government delegated provider." Because U.S. voters can easily check their voter status on states' official websites free of charge, soliciting users to pay to check their voter status...
The Tech Transparency Project found a network of ads leading up to the 2022 midterm elections that misled users about crucial election information. Some ads solicited users to pay to check their voter status.
Terms and Conditions
Elections
YouTube
In 2017, YouTube faced significant public criticism for its failure to moderate harmful content, such as child abuse material. In a blog post, YouTube's CEO claimed the platform was “taking actions to protect advertisers and creators from inappropriate content” by “carefully considering which channels and videos are eligible for advertising.”
A year later, this type of content was still readily available on the platform. A CNN investigation found that YouTube ran ads from major companies in multiple industries, as well as government agencies, on videos promoting white supremacy, pedophilia, and propaganda.
Public Announcement
Radicalization/Extremism, Kids' Safety
YouTube
YouTube’s policies disallow “content that encourages dangerous or illegal activities that risk serious physical harm or death.”
Despite this, the Tech Transparency Project found 435 videos on YouTube promoting militia activity, some promoting violent tactics. Several videos were affiliated with the Three Percenters, a militia group connected to the January 6th attack. Other videos demonstrated militia training exercises with shooting drills.
Terms and Conditions
Radicalization/Extremism
YouTube
A brief from Google for the Supreme Court promised that “YouTube’s systems are designed to identify and remove prohibited content,” adding, “Since 2019, YouTube’s recommendation algorithms have not displayed borderline videos (like gory horror clips) that even come close to violating YouTube’s policies.”
A Tech Transparency Project investigation found that YouTube repeatedly recommended content depicting school shootings and serial killers to test accounts registered as under 18.
Congressional Testimony
Radicalization/Extremism, Kids' Safety
YouTube
Per their firearms policy, “content intended to sell firearms, instruct viewers on how to make firearms, ammunition, and certain accessories, or instruct viewers on how to install those accessories is not allowed on YouTube.”
A Tech Transparency Project investigation found its 14-year-old test account repeatedly exposed to videos about firearms after watching a series of gaming videos. These videos — many of which depicted school shooting scenes from movies or TV, instructions on assembling or aiming firearms, or content advertising firearms —...
Terms and Conditions
Radicalization/Extremism
YouTube
In 2019, YouTube highlighted its multipronged efforts to decrease users’ exposure to harmful content, including limiting recommendations of hateful and supremacist content.
The Mozilla Foundation’s crowdsourced analysis of YouTube’s algorithm found that 71% of all videos reported to researchers came from YouTube’s recommendation algorithm. Reported videos were 40% more likely to come from recommendations than from the search feature.
Public Announcement
Radicalization/Extremism
YouTube
YouTube’s harassment and cyberbullying policy disallows “content that contains prolonged insults or slurs based on someone's intrinsic attributes. These attributes include their protected group status [such as sex or gender, and] physical attributes.”
Analysts from the Institute for Strategic Dialogue found multiple concerning examples that suggest the prevalence of gendered abuse on the platform. Their findings include: (1) 19 channels (with a combined subscriber count of 390k) dedicated to posting Andrew Tate content, with hundreds of misogynistic comments posted; (2) 361 videos about...
Terms and Conditions
Hate Speech
YouTube
In 2019, YouTube’s hate speech and harassment policy was updated to “specifically [prohibit] videos alleging that a group is superior in order to justify discrimination, segregation or exclusion…[including] videos that promote or glorify Nazi ideology, which is inherently discriminatory."
When researchers from the Anti-Defamation League searched YouTube for 130 different hate groups and movements, they found that about a third had at least one channel on YouTube. The researchers found a total of 87 violative channels, some of which were more than ten years old.
Public Announcement
Radicalization/Extremism
YouTube
YouTube's search bar often auto-completes terms when users begin typing. Google clarified that its auto-predictions are not supposed to function for terms that violate its policy, including predictions "associated with the promotion, condoning or incitement of hatred against groups."
When researchers from the Anti-Defamation League searched YouTube for 130 different hate groups and movements, they found that the prediction feature suggested search terms for 36 of the 130 groups (see the sketch after this entry).
Terms and Conditions
Hate Speech
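Prediction suppression of this kind is commonly implemented as a blocklist consulted before suggestions are returned. The short Python sketch below is our illustration, not Google's system, and the blocked term in it is hypothetical; it shows one plausible reason for partial coverage like the ADL's 36-of-130 finding: a literal term-match blocklist only suppresses the exact strings someone thought to list.

```python
# Illustrative blocklist filter for search predictions (not Google's code).
# Suppression only works for strings the blocklist anticipates, which is one
# way searches for banned groups can still produce suggestions.
BLOCKED_TERMS = {"example hate group"}  # hypothetical blocklist entry

def filter_predictions(predictions: list[str]) -> list[str]:
    """Drop any suggested completion that contains a blocked term."""
    return [
        p for p in predictions
        if not any(term in p.lower() for term in BLOCKED_TERMS)
    ]

print(filter_predictions(["example hate group videos"]))  # []: exact match suppressed
print(filter_predictions(["examp1e hate group videos"]))  # variant spelling slips through
```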
YouTube
In a 2020 blog post about government-backed disinformation, Google outlined its ongoing efforts regarding coordinated influence operations on its platforms. The company underscored its commitment to "swiftly remove such content from our platforms and terminate these actors’ accounts."
In 2023, researchers identified a pro-China influence campaign on YouTube with thousands of videos. The videos, which amassed more than 100 million views and 700,000 subscribers, sometimes used generative A.I. to push narratives ridiculing the U.S. or praising China. The researchers at the Australian Strategic Policy...
Public Announcement
National Security
YouTube
A YouTube blog post from 2019 clarified that the platform “will remove content denying that well-documented violent events, like the Holocaust...took place.”
An Institute for Strategic Dialogue report found 9,500 pieces of content mentioning ‘holohoax’ were created on the platform between 2018 and 2020.
Public Announcement
Radicalization/Extremism
YouTube
In written testimony, YouTube’s Vice President of Global Affairs claimed the platform “made significant investments over the past few years in policies, technology, and teams that help provide kids and families with the best protections possible.”
Congressional Testimony
Trust and Safety
YouTube
Media Matters reported multiple monetized high-profile accounts violating these policies. These creators, who collectively have more than 23 million subscribers, all posted videos deadnaming or misgendering individuals, and those videos were eligible for monetization. The videos garnered over 15 million views. Ben Shapiro, who has the highest subscriber count in Media Matters'...
Terms and Conditions
Hate Speech
YouTube
YouTube amended its misinformation policy to address abortion, including banning “content that contradicts local health authorities or WHO guidance” on the safety of “chemical and surgical abortion.”
The Institute for Strategic Dialogue uncovered multiple videos spreading false information about abortion pill reversals, a widely debunked and at times dangerous procedure.
Terms and Conditions
Trust and Safety
YouTube
After the 2020 election, YouTube said it would remove content denying the election’s outcome.
Public Announcement
Elections, Trust and Safety
YouTube
Per Google’s ad policies, ads may not promote dangerous services or misrepresent the services offered.
The Center for Countering Digital Hate found that 83% of Google searches for abortions yielded ads for abortion reversals, a dangerous procedure. A quarter of the ads came from anti-choice organizations falsely advertising as crisis pregnancy centers.
Terms and Conditions
Trust and Safety
YouTube
In 2017, YouTube launched Super Chats, a feature that allows viewers to pay for their comments to be featured at the top of chats on live streams. This revenue is shared between the live stream host and YouTube.
In 2018, BuzzFeed reported on how Super Chats contributed to hateful speech flourishing in the comment sections of these live streams. In response to the reporting, YouTube said it would review its policies regarding the feature. However, research from the Institute for Strategic Dialogue reveals that Super Chats are still perpetuating...
Product Launch
Radicalization/Extremism, Elections
TikTok
TikTok’s election integrity policy emphasized its commitment to combating the spread of misinformation on the platform, which includes removing content if it “causes harm to individuals, our community or the larger public.” The policy provides examples of content it would remove, such as “false claims that seek to erode trust...
When a New York University study submitted disinformation ads to TikTok, the platform accepted 90% of them, many of which “contain[ed] the wrong election day, encouraging people to vote twice, dissuading people from voting, and undermining the electoral process.”
Terms and Conditions
Elections
TikTok
NewsGuard investigated search results for popular news topics, such as Russia and Ukraine, COVID-19 vaccines, and school shootings. Their searches surfaced misinformation 20 percent of the time.
Terms and Conditions
Trust and Safety
TikTok
In response to then-President Trump's 2020 executive order sanctioning TikTok, the company stated "that TikTok has never shared user data with the Chinese government, nor censored content at its request."
However, BuzzFeed’s reporting on leaked audio from internal meetings reveals that “eight different employees describe situations where U.S. employees had to turn to their colleagues in China.” The recordings also reveal that “engineers in China had access to U.S. data between September 2021 and January 2022, at the very least.”...
Congressional Testimony
National Security
TikTok
TikTok’s Community Guidelines do not allow users to "threaten or incite violence, or to promote violent extremism. We do not tolerate discrimination: content that contains hate speech or hateful behavior has no place on TikTok.”
Media Matters analyzed TikTok’s recommendation system and found that the For You page recommends extremist content such as QAnon, Patriot Party, and Three Percenters material. After these accounts were followed, TikTok would suggest other accounts with similar extremist ideologies, such as that of the Oath Keepers. Moreover, an Institute for Strategic...
Terms and Conditions
National Security, Radicalization/Extremism
TikTok
The Tech Transparency Project found that human smugglers often advertised their services on TikTok. Researchers found that entering "viajes USA" (USA trips) into the platform's search bar yielded dozens of accounts advertising related services.
Terms and Conditions
National Security, Trust and Safety
TikTok
In response to reports about incel culture on TikTok, a spokesperson said, "hate has no place on TikTok, and we do not tolerate any content or accounts that attack, incite violence against or otherwise dehumanise people on the basis of their gender. We work aggressively to combat hateful behavior by...
While TikTok bans searches for the term 'incel,' the Global Network on Extremism and Technology (GNET) has documented how the incel community has continued to develop a significant presence on the platform and has easily adapted to avoid content moderation. Today, users employ a variety of tactics to evade moderation, including...
Public Announcement
Radicalization/Extremism
TikTok
TikTok tweeted that the company “has never been used to 'target' any members of the U.S. government, activists, public figures or journalists.”
A 2024 Director of National Intelligence report found that "TikTok accounts run by a PRC propaganda arm reportedly targeted candidates from both political parties during the U.S. midterm election cycle in 2022.” Forbes reported that these accounts stoked partisan divides about candidates and called into question policy decisions made by...
Public Announcement
National Security
TikTok
TikTok claims to “label the accounts and videos of media entities that we know to be subject to editorial control or influence by state institutions.”
The Alliance for Securing Democracy (ASD) found that Russia has been using TikTok to "push its own narrative" and diminish Western support for Ukraine during the war. ASD’s research also identified 31 news accounts that were Russian-funded but not labeled. Research from Brookings also found that Russian state-affiliated TikTok accounts...
Terms and Conditions
National Security
TikTok
A TikTok spokeswoman promised the platform would “continue to respond to the war in Ukraine with increased safety and security resources to detect emerging threats and remove harmful misinformation.”
Despite this claim, misinformation about the war in Ukraine has continued to be abundantly available on the platform, perpetuated by 13,000 fake accounts with more than one million combined followers. Videos amplified pro-Russian narratives, falsely posed as news outlets, or alleged falsehoods about corrupt Ukrainian officials.
Public Announcement
National Security
TikTok
A NewsGuard investigation analyzed the For You Pages of nine minors on TikTok and found that all but one of the accounts were exposed to COVID-19 misinformation, with some videos implying that the vaccine kills people.
Terms and Conditions
Trust and Safety, Kids' Safety
TikTok
TikTok said it would start implementing banners on all videos containing COVID-19 vaccine content to discourage the spread of misinformation.
When the Institute for Strategic Dialogue analyzed more than 6,000 videos discussing the COVID-19 vaccine, they found that 58% of them lacked banners. Among videos containing the hashtag #NoToTheJab, this percentage increased to 76%. Among audio containing anti-vaccine misinformation, it increased to 83%.
Public Announcement
Trust and Safety
TikTok
Speaking about the platform's community guidelines, a TikTok spokesperson highlighted how they "specifically call out misogyny as a hateful ideology [and are] crystal clear that this content is not allowed on our platform.” The platform helped misogynistic influencer Andrew Tate gain a following with billions of views, until he was...
Despite Tate’s ban, and the ban of influencer Sneako for similar violent and misogynistic behavior, TikTok continues to host content glorifying both figures. Although their accounts are now banned from the app, hashtags referencing them have millions of views, and there are hundreds of fan accounts.
Public Announcement
Radicalization/Extremism
TikTok
Per TikTok’s community guidelines, “general conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as ‘the government’ or a ‘secret society,’” are not eligible for the platform’s For You page. TikTok also claims it will remove conspiracy theories...
Media Matters created TikTok accounts and interacted with tradwife content, which espouses traditional gender roles. Afterwards, the accounts' For You pages (TikTok's feed based entirely on algorithmic recommendations) contained multiple far-right conspiracy theories. Among these posts were false claims about the upcoming implementation of martial law and...
Terms and Conditions
Trust and Safety, Radicalization/Extremism, Hate Speech
TikTok
According to TikTok’s policies, organizations that promote violence on or off the platform are not allowed.
A Media Matters investigation found that two prominent militia groups — the Three Percenters and American Patriot Women — had an active presence on TikTok. Some of their content was searchable and available on the platform’s For You page (TikTok's personalized feed based entirely on algorithmic recommendations). Similarly, hashtags relating...
Terms and Conditions
Radicalization/Extremism
TikTok
TikTok's Head of Trust and Safety wrote that "our goal is to identify and remove violative content as swiftly as possible, and ...to help us achieve this, we deploy a combination of automated technology and skilled human moderators who can make contextual decisions on nuanced topics like misinformation, hate speech,...
A Guardian investigation found that moderators are often asked to review content that is not in their language, raising the question of how effective TikTok’s language moderation is. While there was previously a button for moderators to indicate that content was not in their language, this option was...
Public Announcement
Trust and Safety
Meta
Meta's commitments to uphold election integrity don't extend to all of its platforms. As reported by Politico, WhatsApp Channels (a feature that transforms WhatsApp's private messaging into a one-way broadcasting tool) are governed by community guidelines that offer limited clarity around election policies and do little to disallow election-related misinformation.
Terms and Conditions
Trust and Safety
Meta
A report from the Institute for Strategic Dialogue (ISD) identified the Patriots Run Project (PRP), a network of 26 domains, 10 websites, 15 Facebook pages, and 13 linked Facebook groups pushing anti-establishment politicians and falsehoods about election results and elected officials. Although the group claimed to be run by citizens...
Terms and Conditions
Trust and Safety
TikTok
A Guardian investigation found that moderators are subjected to extreme working conditions. Moderators often felt extremely overwhelmed in their positions and were expected to meet strict productivity standards, with software that tracked their activity and would lock their computers after just five minutes of idleness. According to moderators, their “speed...
Public Announcement
Trust and Safety
Meta
Meta disallows coordinated inauthentic behavior, in which actors use a mixture of authentic, fake, and duplicated accounts to deceive others about their identities and spread a message.
The Tech Transparency Project found multiple Instagram accounts purporting to be legitimate "pharmacies" that were in actuality connecting minors to counterfeit prescription pills.
Terms and Conditions
Kids' Safety
TikTok
Climate change misinformation that “undermines well-established scientific consensus” is not allowed on TikTok.
The BBC investigated the enforcement of this policy and found that TikTok failed to remove 95% of the content they flagged as containing climate misinformation. These posts collectively garnered almost 30 million views. Another report by Media Matters found that Spanish-language climate misinformation was largely unmoderated on the platform,...
Public Announcement
Radicalization/Extremism
Meta
Meta claims that its automated tools for detecting and removing harmful content are highly effective. The company’s 2023 Community Standards Enforcement report determined that Meta’s proactive detection technology removed 87.8% of bullying and harassment content, 99% of child exploitation content, and 95% of hate speech before users reported it. Meta...
In 2023, Meta whistleblower Arturo Béjar revealed that Meta’s reported figures apply only to the content that the company ultimately removes, which is very different from the totality of violative content. This is a major sleight of hand (see the sketch after this entry). Furthermore, to grade its own homework, Meta used a measurement called prevalence:...
Terms and Conditions
Terms and Conditions
Kids' Safety
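Béjar's distinction is easier to see with numbers. The sketch below works through the metrics with purely hypothetical figures (none of these counts come from Meta's reports): a "proactive rate" is a share of content the company removed, so it can be high even when most violative content is never caught, while "prevalence" compares violative views against all views, which is a different quantity again.

```python
# Illustrative sketch with hypothetical numbers; nothing here is Meta data.
# It shows why a high "proactive rate" says little about unremoved content.

removed_proactively = 878_000    # removed by automated detection (hypothetical)
removed_after_reports = 122_000  # removed after user reports (hypothetical)
never_removed = 4_000_000        # violative content never caught (hypothetical)

total_removed = removed_proactively + removed_after_reports
total_violative = total_removed + never_removed

# The reported "proactive rate" is computed only over removed content...
print(f"Proactive rate: {removed_proactively / total_removed:.1%}")  # 87.8%

# ...while the share of violative content removed at all can still be small.
print(f"Share ever removed: {total_removed / total_violative:.1%}")  # 20.0%

# "Prevalence" is a third metric: violative views as a share of all views.
violative_views, total_views = 300_000_000, 100_000_000_000
print(f"Prevalence: {violative_views / total_views:.2%}")            # 0.30%
```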
Meta
Despite Facebook’s promises, a flaw in Messenger Kids allowed thousands of children to be in group chats with users who hadn’t been approved by their parents. Facebook tried to quietly address the problem by closing the violating group chats and notifying individual parents. The problems with Messenger Kids were only made...
Public Announcement
Kids' Safety
Meta
In order to comply with the Children's Online Privacy Protection Act, Meta’s own Codes of Conduct prohibit users under the age of 13 from signing up for an Instagram or Facebook account. In his 2021 testimony before the Senate Commerce Committee, Instagram head Adam Mosseri reiterated, “If a child is...
According to the unsealed legal complaint brought by 33 state attorneys general against Meta, the company has received more than 1.1 million reports of users under the age of 13 on its Instagram platform since early 2019, yet it “disabled only a fraction” of those accounts. Instagram, in particular, actively...
Congressional Testimony
Kids' Safety
Meta
In a 2021 blog post, CEO Mark Zuckerberg rebuked claims that Meta was operating in secrecy by saying that the company had “established an industry-leading standard for transparency and reporting.”
Facebook did operate CrowdTangle, a leading data analytics and social monitoring tool that allowed academics, watchdog organizations, and journalists to identify harmful content on the platform, including CSAM. But in 2024, Facebook shut down CrowdTangle. It did so by quietly reassigning or removing team members, including the tool’s former CEO...
Public Announcement
Kids' Safety
Meta
In response to a 2023 report by the Guardian, a Meta spokesperson said, “The exploitation of children is a horrific crime – we don't allow it and we work aggressively to fight it on and off our platforms.”
In 2024, the Wall Street Journal reported that two internal teams raised concerns that Meta’s subscriber feature sold exclusive content from child influencers "to an audience that was overwhelmingly male and often overt about sexual interest." The Guardian’s 2023 investigation confirmed that Facebook and Instagram were still operating as major...
Public Announcement
Kids' Safety
Meta
A spokesperson for Meta, which owns Instagram, said that keeping young people safe was the company's top priority: "We use advanced technology and work closely with the police and CEOP [Child Exploitation and Online Protection] to aggressively fight this type of content and protect young people." A year later, Facebook’s...
In 2019, the National Society for the Prevention of Cruelty to Children found that Instagram was the #1 platform for child grooming in the UK; it identified more than 5,000 crimes of sexual communication with children and a 200% increase in how Instagram was used to abuse children, all in...
Public Announcement
Public Announcement
Kids' Safety
Meta
Meta explicitly prohibits material that sexually exploits or endangers children, including any transactions or content that involves trafficking, coercion, sexually explicit language, and non-consensual acts.
According to investigations by the Wall Street Journal and researchers at Stanford University and the University of Massachusetts Amherst, Instagram’s recommendation system and hashtags help promote a vast network of pedophiles and guide them to content sellers. A 2022 study by the National Center on Sexual Exploitation found that 22%...
Terms and Conditions
Kids' Safety
Meta
Meta has long purported to value the mental health of its young users, including and especially teenage girls. In a 2021 blog post, Zuckerberg wrote that in “serious areas like loneliness, anxiety, sadness, and eating issues -- more teenage girls who said they struggled with that issue also said Instagram...
Zuckerberg wrote this despite the fact that internal presentations from March 2020 found that Instagram caused negative body image perceptions for a third of girls, as reported by the Wall Street Journal. Internal researchers at Meta warned that Instagram’s monetization of “face and body,” the pressure to look a certain...
Public Announcement
Kids' Safety
Meta
Meta has clearly stated that it removes content that depicts or encourages suicide or self-injury, including graphic imagery and real-time depictions. This includes promising to place a sensitivity screen over content that doesn't violate its policies but may still be upsetting to some users.
Instagram’s internal research found that “13% of UK teenagers and 6% of US users” traced a desire to kill themselves back to Instagram. The BBC found that Instagram “removed almost 80% less graphic content about suicide and self-harm” during the height of the COVID-19 pandemic. Despite these findings in 2020...
Terms and Conditions
Kids' Safety
Meta
Amid criticisms of its platforms, Meta has rolled out some 30 parental controls that let parents manage whom their kids can talk to and how much time they spend on Facebook and Instagram.
Product Launch
Kids' Safety
Meta
Unredacted documents in New Mexico’s lawsuit against Meta show that a 2021 internal Meta estimate found that as many as 100,000 children received sexual harassment every day. This finding came as the company “dragged its feet” on implementing new safeguards for minors and showed a “historical reluctance” to keep children safe,...
Terms and Conditions
Kids' Safety
Meta
After 2019, internal Meta documents show that the company added steps to the reporting process to discourage users from filing reports. And while users could still flag things that upset them, Meta shifted resources away from reviewing them. Meta said the changes were meant to discourage frivolous reports and educate...
Public Announcement
Kids' Safety
Snapchat
In 2017, Snapchat launched Snap Map, a feature that allows users to share their current location and see where their friends are. Snap Map was supposed to help bridge the gap between social media and the real world by bringing users together in person.
When launched, Snap Map displayed users’ locations automatically. To stop Snap Map from revealing their location, users had to go into their settings to “ghost” themselves so their friends could not see where they were. The feature has exposed minors to severe (and predictable) harms, including stalking, predation, and sexual assault. In...
Product Launch
Kids' Safety
Snapchat
Snapchat claims it is committed to fighting the national fentanyl poisoning crisis. That means using “cutting-edge technology” to help proactively find and remove drug content and accounts, as well as working with law enforcement and other groups to raise awareness of drug issues, fentanyl, and counterfeit pills.
Over 60 family members of children who obtained illegal drugs through Snapchat are suing the company. In all but two cases, the child died after ingesting the drugs, many of which were laced with fentanyl. Some of Snapchat's features that set it apart from other apps — such as automatically...
Public Announcement
Kids' Safety
Snapchat
In 2013, Snapchat introduced the “Speed Filter,” which let users capture how fast they were moving and share it with friends. Snap says it is “deeply committed to the safety and well-being of our community, and our teams, products, policies, and partnerships apply safety by design principles to keep Snapchatters...
The filter was connected to several deadly car crashes, including a 2017 case where three young men — two 17-year-olds and a 20-year-old — died when their car crashed into a tree. "One Snap captured the boys' speed at 123 mph," according to court documents, as covered by the BBC. Even...
Product Launch
Kids' Safety
Snapchat
Snapchat strictly prohibits bullying and harassment of any kind and names these prohibitions explicitly in the company’s guidelines.
Despite its proclamations, Snapchat facilitated applications like Yolo and LMK that allowed users to hide their identities. These apps, which are integrated into the Snapchat messaging platform through Snap Kit (the company’s suite of tools for third-party developers), have greatly contributed to bullying. In 2020, cyberbullying facilitated by these...
Terms and Conditions
Kids' Safety
Snapchat
In order to comply with the Children's Online Privacy Protection Act, Snapchat explicitly prohibits users under the age of 13.
Research suggests that Snapchat is the most popular app for underage users. According to a study from Harvard’s T.H. Chan School of Public Health, Snapchat has nearly 3 million users under the age of 13. Underage users account for an estimated 13% of the platform’s usage. British regulators also found...
Terms and Conditions
Kids' Safety
Snapchat
Snap claims to be “deeply committed to the safety and wellbeing of its community,” including employing a number of wellbeing features to “educate and empower members of the Snapchat community to support friends who might be struggling with their own social and emotional wellbeing.”
Snapchat’s claims fly in the face of its very design. Popular filters automatically enlarge users’ eyes, lift their cheekbones, and lighten their skin. In-app additions like FaceTune allow users to perfect their features and share an unrealistic version of themselves with others. Snapchat and other image-based platforms make users desperate...
Public Announcement
Kids' Safety
Snapchat
Snap claims to “prohibit any activity that involves sexual exploitation or abuse of a minor, including sharing child sexual exploitation or abuse imagery, grooming, or sexual extortion (sextortion), or the sexualization of children.” The company says that it reports all identified instances of child sexual exploitation to authorities, including attempts...
The National Society for the Prevention of Cruelty to Children, a British child protection nonprofit, found that Snapchat is the site most used to share child abuse images, being used in 43% of cases where a social media site was flagged.
Terms and Conditions
Kids' Safety
TikTok
In theory, these accounts are designed to protect the privacy of users. But in reality, they have often served as hubs for CSAM. A 2022 Forbes investigation found that TikTok’s “private” accounts are serving as portals for CSAM and the trafficking of underage users. The content is posted in private...
Terms and Conditions
Kids' Safety
TikTok
TikTok’s own internal research shows that a third of its U.S. users are under 14. Researchers at Harvard’s T.H. Chan School of Public Health estimate that TikTok has more than 3 million users under the age of 13, and that these underage users account for 64% of the platform’s usage...
Terms and Conditions
Kids' Safety
TikTok
Ahead of a congressional hearing before the House Committee on Energy and Commerce where its CEO, Shou Zi Chew, was set to testify, TikTok announced a 60-minute watch limit for teen users. The limit will automatically alert users who are registered as under 18 once they've hit the one-hour mark... Read more
Source 1The time limit TikTok designed for teens is more for show — it doesn't prevent younger users from watching TikTok. In reality, teens spend around 1.5 hours a day on the app. At the same time, Douyin, the Chinese version of TikTok, has multiple measures that actually limit teens’ usage of the... Read more
Source 1Source 2
Source 3
Product Launch
Kids' Safety
TikTok
TikTok says it has a "zero tolerance" policy against child predators and grooming behaviors. That includes using automatic detection tools to prevent communications between minors and adults, and not allowing an account to receive or send direct messages if the user registers themselves as being under 16.
Source 1A 2020 investigation by the BBC found that TikTok allowed direct messages from older men to accounts of young female users who were clearly labeled as underage. The company also failed to remove the accounts of the men who continued to message the underage user, even after she told them... Read more
Source 1Source 2
Source 3
Public Announcement
Kids' Safety
TikTok
TikTok announced enhanced "Family Pairing" features, which would give parents greater control over their children's TikTok usage and Direct Messages.
Source 1Ireland’s Data Protection Commission found that this feature “failed to verify whether the user was actually the child user’s parent or guardian.” Instead, it allowed any adult to pair up with users under the age of 16, presenting obvious potential risks for children.
Source 1Product Launch
Kids' Safety
X (Twitter)
Elon Musk announced in 2022 that X had significantly increased its efforts to combat child sexual exploitation on its platform, calling the issue “Priority #1.”
Source 1Despite Musk’s claims that child safety was his number one priority for the platform, Australia’s eSafety Commissioner issued a report showing that, in the three months after Musk took ownership of the company, the platform’s automatic detection of child abuse material fell from 90% to 75%. While Musk claimed to... Read more
Source 1Source 2
Public Announcement
Kids' Safety
X (Twitter)
In a report for Australia’s eSafety Commissioner, Twitter claimed to proactively prevent CSAM through various tools. These tools supposedly detect CSAM imagery and videos in tweets and DMs and block URLs linking to known CSAM in both public tweets and direct messages.
Source 1Australia's eSafety Commissioner's report exposed various instances in which the company lacks essential tools to prevent CSAM. Twitter stated to the Commissioner that it "is not a service used by large numbers of young people," but recognizes "that we need policies to protect against this." The company also admitted... Read more
Source 1Terms and Conditions
Kids' Safety
X (Twitter)
Researchers at the Stanford Internet Observatory found that Twitter failed to take down dozens of images of child sex abuse. They identified 128 Twitter accounts selling child sex abuse material and 43 instances of known CSAM. Forbes found that illegal material remains alarmingly easy to find on Twitter, in multiple... Read more
Source 1Source 2
Source 3
Source 4
Terms and Conditions
Kids' Safety
X (Twitter)
X launched its new ID verification feature for X Premium subscribers, allowing paying users to confirm their identity through government-issued ID. This verification process purported to involve a matching system that uses both the user's license (or equivalent) and a selfie taken during the confirmation steps. The initiative aims to... Read more
Source 1Source 2
Despite the claims of "verification," users can obtain a verified account in the app by providing only a phone number and a bank account, without the need for ID confirmation. In addition, X makes claims about reducing impersonation and spam but doesn't offer verification tools to all users. X’s policy... Read more
Source 1Source 2
Source 3
Public Announcement
Kids' Safety
X (Twitter)
Twitter’s Application Programming Interface was once one of the internet’s leading research tools. A bedrock of the company’s transparency measures, free API access empowered critical research into topics such as democracy, child safety, public health, national security, mental health, crisis responses, and more. Free API access previously allowed researchers to... Read more
Source 1Source 2
Source 3
Source 4
Source 5
In May 2023, Twitter changed its policies and raised the cost of API access to $42,000 a month or more for an enterprise account (an illustrative sketch of what free access enabled follows this entry). For many researchers and academic institutions, this cost proved too high. According to the Coalition for Independent Technology Research, ending free access to the API jeopardized... Read more
Source 1Source 2
Source 3
Source 4
Product Launch
Kids' Safety
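For illustration, the sketch below shows the kind of lightweight data pull that free API access once made possible for researchers. It is a minimal example assuming the tweepy client library and a placeholder bearer token; the query and field choices are hypothetical and not drawn from the tracker's sources, and comparable access today sits behind X's paid API tiers.

import tweepy

# Hypothetical credential: under the old free and academic tiers, researchers
# could obtain a bearer token at no cost; current pricing gates this access.
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

# Pull recent public posts on an illustrative study topic, excluding retweets.
response = client.search_recent_tweets(
    query="voter registration -is:retweet lang:en",  # placeholder query
    tweet_fields=["created_at", "public_metrics"],
    max_results=100,
)

# Print a timestamp, an engagement count, and a text snippet for each match.
for tweet in response.data or []:
    print(tweet.created_at, tweet.public_metrics["retweet_count"], tweet.text[:80])

Queries of roughly this shape underpinned the democracy, public health, and child safety research described above; at enterprise pricing, a project would need substantial institutional funding simply to read public posts at scale.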
X (Twitter)
In December 2022, Musk removed Twitter’s suicide-prevention hotline feature. After the removal was reported, Musk denied that the feature had been taken down, but Twitter’s head of trust and safety, Ella Irwin, confirmed the removal while saying it was temporary. The feature was later restored. In December 2022, Business Insider... Read more
Source 1Source 2
Source 3
Terms and Conditions
Kids' Safety
Meta
Despite touting preventative measures to detect and prevent sextortion, in July 2024 Meta removed 63,000 accounts believed to be linked to Nigerian sextortion scammers. Meta also removed Facebook accounts, pages, and groups that discussed how to successfully blackmail victims. According to the FBI, "sextortion is one of the fastest growing... Read more
Source 1Terms and Conditions
Kids' Safety
X (Twitter)
In 2016, Twitter launched its Trust and Safety Council, an advisory group of around 100 independent safety advocates, academics, and researchers who would play a “foundational part” in ensuring safety and integrity on the platform, including addressing issues like hate speech, child exploitation, suicide, and self-harm. Upon taking control of... Read more
Source 1Source 2
Source 3
Musk later “changed his mind” about the formation of a new content moderation council. Former council members soon became the target of online attacks, in part spurred on by Musk’s criticism. Yoel Roth, the former head of Trust and Safety at Twitter, received online threats that Musk amplified, including the... Read more
Source 1Source 2
Public Announcement
Kids' Safety
Meta
In March 2024, Meta's Vice President of Global Affairs, Nick Clegg, emphasized the company's commitment to fighting the opioid crisis. On X, he pledged that Meta would "help disrupt the sale of synthetic drugs online" and "educate users about the risks." According to Meta's Community Standards, "high-risk drugs" that have... Read more
Source 1Source 2
The Tech Transparency Project (TTP) found that, on the same day as Clegg's promise, ads selling prescription opioids ran on Instagram, Facebook, Messenger, and Meta's Audience Network, which runs ads on partner apps. In total, TTP found 452 high-risk drug ads running on these platforms, while caveating that this... Read more
Source 1Public Announcement
Trust and Safety
X (Twitter)
In 2022, Twitter signed the EU's Code of Practice on Disinformation, a voluntary set of commitments made by platforms and fact-checkers to decrease the spread of misinformation on social media.
Source 1In May 2023, Twitter withdrew from the agreement, despite facing similar obligations under the EU's Digital Services Act, which went into effect in August 2023. Later in the year, studies showed that posts containing misinformation were more prevalent and discoverable on X... Read more
Source 1Terms and Conditions
Trust and Safety
X (Twitter)
Since April 2023, X's policies have prohibited synthetic media, with an exception for satirical posts that carry proper disclaimers. The platform also provides a mechanism called Community Notes, which allows users to add context to "potentially misleading posts" by adding labels or disclaimers to tweets. Only when Community Notes are approved... Read more
Source 1Source 2
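As a rough illustration of that gating step, a note only becomes publicly visible once it earns a sufficient helpfulness score from raters. The toy model below is a deliberate simplification with made-up names, vote counts, and thresholds; the actual open-source Community Notes algorithm scores rater-note pairs with matrix factorization rather than simple averaging.

# Toy model: a note is shown only after enough raters mark it helpful.
# The function name and both thresholds are illustrative only.
def note_is_shown(helpful_votes: int, total_votes: int,
                  min_votes: int = 5, threshold: float = 0.8) -> bool:
    if total_votes < min_votes:
        return False  # too few ratings to judge the note yet
    return helpful_votes / total_votes >= threshold

# Example: 9 of 10 raters found the note helpful, so it would be displayed.
print(note_is_shown(helpful_votes=9, total_votes=10))  # True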
Elon Musk reposted to his personal X account an AI-generated video that altered an existing Harris campaign video so that the Vice President appeared to call herself unqualified and the "ultimate diversity hire" while suggesting President Biden was senile. Musk posted the video to his 191 million followers... Read more
Source 1Terms and Conditions
Trust and Safety
X (Twitter)
In 2017, in response to congressional concern about Russian interference in the 2016 election, Twitter testified before the House Intelligence Committee that it was making changes to stop foreign malign influence operations on its platform.
Source 1A Department of Justice investigation found that employees of RT, a Russian state-controlled media outlet, paid $10 million to spread nearly 2,000 English-language videos on YouTube, TikTok, Instagram, and X. The videos, most of which support the goals of the Russian government, have gained 16 million views on YouTube.
Source 1Congressional Testimony
National Security
TikTok
TikTok promises to label accounts run by entities whose editorial output or decision-making is controlled or influenced by, or dependent on, a government. The company applies additional scrutiny to entities that may be heavily reliant on state funding, either directly or through advertisements.
Source 1A Department of Justice investigation found that employees of RT, a Russian state-controlled media outlet, paid $10 million to spread nearly 2,000 English-language videos on YouTube, TikTok, Instagram, and X. The videos, most of which support the goals of the Russian government, have gained 16 million views on YouTube.
Source 1Public Announcement
National Security
Meta
Instagram claims that it uses moderation technology to find further instances of inauthentic content and disinformation and apply labels to reduce their spread. The label will link out to the rating from the fact-checker and provide links to articles from credible sources that debunk the claim(s) made in the post.... Read more
Source 1A Department of Justice investigation found that employees of RT, a Russian state-controlled media outlet, paid $10 million to spread nearly 2,000 English-language videos on YouTube, TikTok, Instagram, and X. The videos, most of which support the goals of the Russian government, have gained 16 million views on YouTube.
Source 1Public Announcement
National Security
YouTube
YouTube promises that it works to identify coordinated influence operations on its platforms and swiftly remove such content. The company says it takes steps to prevent possible future attempts by the same actors, and routinely exchanges information and shares its findings with others in the industry.
Source 1A Department of Justice investigation found that employees of RT, a Russian state-controlled media outlet, paid $10 million to spread nearly 2,000 English-language videos on YouTube, TikTok, Instagram, and X. The videos, most of which support the goals of the Russian government, have gained 16 million views on YouTube.
Source 1Public Announcement
National Security
Meta
Meta promises that, in line with their commitment to authenticity, they don't allow people to misrepresent themselves on Facebook. This includes using fake accounts, artificially boosting the popularity of content, or engaging in behaviors designed to enable other violations under their Community Standards.
Source 1Doppelganger, a targeted Russian disinformation campaign, created “inauthentic pages” on Meta and X, where it distributed content in various European languages, including English, German, French, Italian, Polish, and Ukrainian, between June 4 and 28, 2024. The over 1,300 pro-Russian posts included content that criticized government support for Ukraine, exploited divisive... Read more
Source 1Terms and Conditions
National Security
X (Twitter)
X claims that it does not allow coordinated activity that attempts to artificially influence conversations through the use of multiple accounts, fake accounts, automation and/or scripting.
Source 1Doppelganger, a targeted Russian disinformation campaign, created “inauthentic pages” on Meta and X, where it distributed content in various European languages, including English, German, French, Italian, Polish, and Ukrainian, between June 4 and 28, 2024. The over 1,300 pro-Russian posts included content that criticized government support for Ukraine, exploited divisive... Read more
Source 1Terms and Conditions
National Security
Meta
Meta claims that it does not allow illegal, unacceptable, or objectionable content on its platforms. This may include advertisements containing content that promotes the sale of drugs, child sexual abuse material, or financial scams.
Source 1Cybersecurity for Democracy found that 64% of Telegram-linked ads on Facebook appeared to violate Meta’s advertising policies, including by selling drugs and potentially promoting CSAM. Only two of the 50 most recent ads reviewed had been removed by Meta at the time of the search.
Source 1Terms and Conditions
Trust and Safety
X (Twitter)
X's Grok AI has terms of services that require users to not use the chatbot for any purpose that "(ii) is fraudulent, false, deceptive, or defamatory, (iii) promotes hatred, violence, or harm against any individual or group, or (iv) otherwise may be harmful or objectionable (in our sole discretion) to... Read more
Source 1In September 2024, Al Jazeera was able to use X's Grok AI to generate lifelike images showing Texas Republican Senator Ted Cruz snorting cocaine, Vice President Kamala Harris brandishing a knife at a grocery store, and former President Donald Trump shaking hands with white nationalists on the White House lawn.
Source 1Terms and Conditions
Elections
Meta
Meta’s policy on dangerous individuals and organizations does not allow hate organizations or organizations that may intend to coerce civilians or the government.
Source 1Research from the Tech Transparency Project uncovered a network of 262 Facebook groups (both public and private) and 193 Facebook pages for militia and anti-government activists created since January 6, 2021. Nearly two dozen of those groups and pages have been created since May 2024, according to the... Read more
Source 1Terms and Conditions
Radicalization/Extremism
Meta
Meta claims that they are "focused on providing reliable election information while combating misinformation across languages."
Source 1Facebook amplified election-related disinformation in Durham County, North Carolina, in August 2024. Posts that seemed to come from an authoritative source falsely claimed that voters must request new ballots if a poll worker, or anyone else, writes on their form, because the ballot would supposedly be invalidated. The same incorrect message was... Read more
Source 1Source 2
Terms and Conditions
Elections
Meta
Meta claims that they "are focused on providing reliable election information while combating misinformation across languages."
Source 1Ahead of the 2024 US election, Meta began limiting the reach of “political” content on Instagram (loosely defined as "potentially related to things like laws, elections, or social topics"). This policy has yielded an average 65% drop in the reach of several prominent accounts that regularly posted credible political content,... Read more
Source 1Terms and Conditions
Elections
TikTok
TikTok claims they "do not allow ads featuring political content across any of our monetization features, including paid ads, creators being paid to make branded political content, and other promotional tools on the platform. We also impose prohibitions at the account level for advertisers we identify as politicians and political... Read more
Source 1An investigation by the international NGO Global Witness found that TikTok approved 50% of ads containing false information about the election, despite its policy explicitly banning all political ads. The organization tested TikTok's policies by submitting ads that were not labeled as political in nature but included verifiably false... Read more
Source 1Public Announcement
Elections
TikTok
TikTok claims that they "do not allow account behavior that may spam or mislead our community. This includes conducting covert influence operations, manipulating engagement signals to amplify the reach of certain content, and operating spam or impersonation accounts."
Source 1The Wall Street Journal found thousands of videos spreading false information from 91 coordinated accounts based in China, Nigeria, Iran, and Vietnam. Over one month, the accounts spammed TikTok users with more than 3,000 new videos, many spreading false information about former President Trump. The posts peaked at... Read more
Source 1Public Announcement
Elections
X (Twitter)
X claims that it restricts the reach of “verifiably false or misleading information about the circumstances surrounding a civic process intended to intimidate or dissuade people from participating in an election or other civic process.”
Source 1A network of accounts on X claimed to be foreign nationals who had illegally voted in the U.S. presidential election, according to research from the nonprofit Institute for Strategic Dialogue. The accounts reinforced baseless narratives about noncitizen voting that have been spreading rapidly in this election cycle, despite having... Read more
Source 1Source 2
Terms and Conditions
Elections
Meta
Meta's Instagram, Facebook, and Threads all have Terms and Conditions that prohibit voter interference, threats of violence, and election misinformation.
Source 1Ads launched by a pro-Trump organization called Progress 2028 spread disinformation about presidential candidate Kamala Harris. The ads were designed to look like they supported the Harris campaign but touted controversial policy stances she doesn't endorse, including ensuring undocumented immigrants can vote and receive Medicare benefits, instituting mandatory gun... Read more
Source 1Terms and Conditions
Transparency
X (Twitter)
X claims that users "may not use X’s services for the purpose of manipulating or interfering in elections or other civic processes, such as posting or sharing content that may suppress participation, mislead people about when, where, or how to participate in a civic process, or lead to offline violence... Read more
Source 1The BBC identified networks of dozens of accounts that re-share each other's content multiple times a day — including a mix of true, unfounded, false, and deepfaked material — to boost their reach and, therefore, revenue on the site. Among the misleading posts shared by this network were claims about... Read more
Source 1Terms and Conditions
Transparency
X (Twitter)
X claims that it restricts the reach of “verifiably false or misleading information about the circumstances surrounding a civic process intended to intimidate or dissuade people from participating in an election or other civic process.”
Source 1X's “explore” section uses Grok AI software to aggregate trending social media topics. The information is not fact-checked by humans, and in several recent examples it seemed to repeat false or unsubstantiated claims as if they were true. In one case, Grok parroted unfounded claims of wrongdoing in Maricopa County,... Read more
Source 1Terms and Conditions
Elections
Meta
Meta's Instagram, Facebook, and Threads all have Terms and Conditions that prohibit voter interference, threats of violence, and election misinformation.
Source 1The Institute for Strategic Dialogue (ISD) found that Facebook, X, YouTube, and TikTok consistently failed to enforce their community guidelines related to election integrity on livestreams. ISD studied 26 election-related livestreams on the platforms and found that 15 included instances of election and civic integrity policy violations. Eight videos featured... Read more
Source 1Terms and Conditions
Elections
YouTube
According to YouTube, prohibited content includes posts that encourage voter suppression, tell viewers they can vote through inaccurate methods like texting their vote to a particular number, give made-up voter eligibility requirements like claiming that a particular election is only open to voters over 50 years old, tell viewers... Read more
Source 1The Institute for Strategic Dialogue (ISD) found that Facebook, X, YouTube, and TikTok consistently failed to enforce their community guidelines related to election integrity on livestreams. ISD studied 26 election-related livestreams on the platforms and found that 15 included instances of election and civic integrity policy violations. Eight videos featured... Read more
Source 1Terms and Conditions
Elections
TikTok
TikTok’s guidelines on Civic and Election Integrity "prohibit misinformation that may 'disrupt the peaceful transfer of power or lead to off-platform violence.'”
Source 1The Institute for Strategic Dialogue (ISD) found that, when it came to livestreams, TikTok consistently failed to enforce their community guidelines. ISD found TikTok livestreams included false, misleading, or unfounded claims about election integrity. Among them were numerous comments and claims targeted at Vice President Harris, which stated that the... Read more
Source 1Terms and Conditions
Elections
X (Twitter)
Under its Violent Content policy, X does not allow content that includes violent threats, including damage to “infrastructure that is essential to daily, civic, religious, or business activities” and also “wishing, hoping or expressing desire for harm” against other groups. Likewise, the company states that it is "committed to combating... Read more
Source 1Source 2