
Big Tech’s Broken Promises


A project of Issue One, the Big Tech’s Broken Promises tracker catalogs the history of public proclamations and policy changes announced by the largest technology companies that purported to protect users, prioritize vulnerable communities, or safeguard the broader information ecosystem in which democracies operate. Each of these changes was announced publicly, only to be later retracted, significantly altered, marginalized, or never brought to fruition. Many are half-truths or deflections that hide a different reality. And while these broken promises are not a reflection on the employees of these platforms — who work hard to build safe and healthy systems — they are illustrative of the broader incentives driving these corporations.

We hope this tracker will inform lawmakers, advocates, researchers, and platform users as they seek to apply new oversight measures to these companies. Thank you to the organizations whose diligent, thoughtful work contributed to this tracker, including but not limited to Accountable Tech, the Anti-Defamation League, Center for Countering Digital Hate, Institute for Strategic Dialogue, and Tech Transparency Project.

Check out the tracker’s data dictionary for explanations of categories, dates, and types of announcements.

Questions? Contact us at techreform@issueone.org.

Meta
Facebook uses a product called XCheck, which is supposed to apply additional oversight to accounts with large followings, such as politicians.
Source 1
11.06.2020
However, internal research found that XCheck had evolved into a “white list” that exempted these types of accounts from company policy, meaning that ‘VIP’ accounts were able to harass, incite violence, and violate company policy without consequence. This company analysis went so far as to call the program a “breach of... Read more
Source 1
Source 2
09.13.2021
Product Launch
Trust and Safety
Meta
A blog post on Meta’s efforts to protect the 2020 elections emphasizes the company’s “responsibility to stop abuse and election interference on our platform.” Since 2018, Facebook’s election policies have prohibited “threats of violence relating to voting, voter registration or the outcome of an election.”
Source 1
11.01.2018
However, an unpublished document from the House Select Committee on the January 6 Attack found that in the weeks after the election, hundreds of Facebook groups coordinated efforts to “stop the steal,” often calling for violence within the groups.
Source 1
01.06.2021
Public Announcement
Elections, Radicalization/Extremism
Meta
According to their community standards, Meta removes “ideologies that promote hate, such as Nazism and white supremacy.”
Source 1
02.28.2020
Not only were multiple Nazi and white supremacist accounts active on Facebook and Instagram, but these accounts were also successfully soliciting donations.
Source 1
11.30.2023
Terms and Conditions
Radicalization/Extremism
Meta
Meta’s policies ban terrorist organizations, such as those designated as Foreign Terrorist Organizations (FTOs) by the U.S. government, from the platform. In a blog post, the company identified ISIS and Al-Qaeda as two groups posing the “broadest global threat.”
Source 1
Source 2
12.29.2018
When someone enters their job, interests, or location on their Facebook profile, Facebook automatically creates a linked page for them if there is not one already. A whistleblower petition flagged that not only did Facebook fail to ban designated terrorist groups, but it also auto-generated hundreds of pages for groups such as... Read more
Source 1
Source 2
04.29.2019
Terms and Conditions
Radicalization/Extremism
Meta
Meta has multiple community standards limiting spam, fraud and deception, and misrepresentations.
Source 1
Source 2
Source 3
10.31.2019
Despite this, multiple scams targeted migrants through Facebook and WhatsApp, according to the Tech Transparency Project. Multiple accounts across the U.S. and Canada purporting to be immigration experts promoted a visa scam that took users to sites collecting personal information.
Source 1
10.14.2022
Terms and Conditions
Trust and Safety
Meta
Facebook’s policy on human exploitation disallows content that “offers or assists in the smuggling of humans.”
Source 1
09.04.2019
A Tech Transparency Project investigation found active buy-sell groups on Facebook where coyotes offered a menu of services, including helping people cross the border. TTP found WhatsApp was also a frequent tool used by human smugglers to communicate with migrants.
Source 1
09.15.2022
Terms and Conditions
National Security
Meta
Facebook launched a new feature, Group Experts, which allows administrators of Facebook Groups to designate members with particular authority or credibility.
Source 1
04.08.2021
Facebook’s ‘Group Expert’ feature has been given to multiple anti-vaxxers who spread disinformation about the COVID-19 pandemic.
Source 1
08.17.2021
Product Launch
Trust and Safety
Meta
Meta’s policy on dangerous individuals and organizations does not allow hate organizations or organizations that may intend to coerce civilians or the government. According to internal documents published by The Intercept, organizations like the Three Percenters are banned on the platform.
Source 1
Source 2
12.18.2019
An investigative piece in WIRED found that militia extremists such as the Three Percenters have been organizing through Facebook groups, using it to recruit members and coordinate anti-government activities across the country. The article counted more than 200 extremist groups, some with thousands of members.
Source 1
05.02.2024
Terms and Conditions
Radicalization/Extremism
Meta
Meta promised to “[label] state-controlled media on their Page and on [their] Ad Library” as part of their efforts to protect the 2020 elections.
Source 1
09.05.2018
Meta has not been able to keep up with the scale of foreign influence operations. Research from the Center for Countering Digital Hate finds that “the vast majority (91%) of posts containing content from Russian state media are not covered by this policy and do not carry labels.”
Source 1
02.06.2022
Public Announcement
National Security
Meta
In 2016, Facebook acquired CrowdTangle to help researchers study the platform, including providing crucial real-time content analysis. A Facebook blog post from 2020 underscored the importance of this tool. “Supporting independent research through data access, training, and resources is critical to understanding the spread of public content across social media... Read more
Source 1
11.28.2023
In March 2024, Meta announced that it would be phasing out CrowdTangle (effective August 2024) and replacing it with a more restrictive and limited tool. This decision, which comes in a year in which nearly half the world's population will vote in at least 64 elections across the world, can... Read more
Source 1
Source 2
Source 3
Source 4
03.14.2024
Product Launch
Elections, Transparency
Meta
Meta published a one-pager on their plans to help combat election and voter interference in the 2022 midterm elections. Among their listed measures were improving researcher access and increasing transparency about political advertising.
Source 1
08.05.2022
However, the AP outlined multiple developments that undercut these stated goals. CrowdTangle, a tool that allows third parties such as researchers and journalists to analyze and fact-check Facebook posts, has sometimes been inoperable.
Source 1
08.05.2022
Public Announcement
Elections, Transparency
Meta
In August 2020, Meta said they would take action against “accounts tied to offline anarchist groups that support violent acts, including US-based militia organizations and QAnon.”
Source 1
08.19.2020
Facebook’s ‘Group Expert’ feature has been given to multiple anti-vaxxers who spread disinformation about the COVID-19 pandemic.
Source 1
Source 2
09.16.2022
Public Announcement
Hate Speech
Meta
Meta claims that it clearly labels all election-related and issue ads on Facebook and Instagram in the US, including by putting a "paid for by" disclosure from the advertiser at the top of the ad.
Source 1
05.24.2018
There have been multiple reports uncovering Meta's failure to follow through on this policy. Advertisers are allowed to change the “paid for by” field after an initial identity verification. This loophole has created many cases where Facebook was aware of the advertiser’s identity, but users were not. Meta also routinely... Read more
Source 1
Source 2
11.04.2018
Public Announcement
Elections
Meta
Between 2018 and 2020, Facebook published at least 15 blog posts highlighting their efforts to remove coordinated inauthentic behavior from Iranian state-backed actors.
Source 1
09.19.2018
In 2020, Facebook removed fake accounts spreading Iranian messaging that had been operating since 2011, bringing into question just how effective previous takedowns were.
Source 1
05.05.2020
Public Announcement
National Security
Meta
Facebook’s then-COO Sheryl Sandberg testified before the Senate Intelligence Committee that the company was “investing heavily in people and technology to keep our community safe and keep our service secure.”
Source 1
09.05.2018
Many of these investments were cut in 2023, when much of the tech sector downsized key integrity or content moderation teams. Reports of substantial layoffs impacted key departments, such as those that address misinformation and trust and safety.
Source 1
05.26.2023
Congressional Testimony
Trust and Safety
Meta
Meta CEO Mark Zuckerberg’s written testimony to the House Energy & Commerce Committee claims that Facebook “remove[s] language that incites or facilitates violence, and [bans] Groups that proclaim a hateful and violent mission from having a presence on our apps.”
Source 1
03.25.2021
Zuckerberg testified to this despite internal findings that suggested otherwise. Internal Facebook research from 2018 warned that the algorithm was designed to push “more and more divisive content in an effort to gain user attention & increase time on the platform.” A 2016 presentation from a Facebook employee found that... Read more
Source 1
Source 2
Source 3
03.25.2021
Congressional Testimony
Hate Speech
Meta
Instagram head Adam Mosseri explained that terms in violation of Instagram’s community guidelines were “removed from Instagram entirely” and therefore not findable via the search engine.
Source 1
08.25.2021
Facebook's search feature will automatically suggest and auto-fill terms. When the Anti-Defamation League (ADL) entered the names of recognized hate groups into the search bar, they found that hate-related terms were automatically suggested, including five supposedly banned by the platform. In total, the ADL found 40 accounts and 71 hashtags... Read more
Source 1
Source 2
08.17.2023
Public Announcement
Hate Speech
Meta
In the two years leading up to the 2020 election, Meta released more than 30 statements explaining what measures the platform was taking to mitigate misinformation, foreign interference, and hate speech relating to the election.
Source 1
01.01.2018
Despite their claims, a report by the online advocacy group Avaaz found that Facebook only ramped up its efforts to combat election-related false information in the few weeks leading up to the election. Avaaz estimates that Facebook could have prevented more than 10 billion views on the top 100 election-related... Read more
Source 1
11.03.2020
Public Announcement
Elections
Meta
Meta’s hate speech policy disallows attacks or generalizations based upon a person’s protected characteristics.
Source 1
08.26.2019
A report from the Institute for Strategic Dialogue (ISD) on online gendered abuse in 2022 found that, of all the platforms ISD studied, Facebook hosted some of the highest rates of misogynistic and abusive activity after the reversal of Roe v. Wade. 34% of the top posts on the topic... Read more
Source 1
01.01.2022
Terms and Conditions
Hate Speech
Meta
Prior to 2022, Meta’s policy on political ads prohibited claims on Facebook or Instagram that the election was stolen or fraudulent.
Source 1
10.01.2022
In 2022, Meta altered this policy to apply only to upcoming or ongoing elections, allowing claims of a rigged 2020 election to run and gain even more traction on its platforms.
Source 1
Source 2
11.15.2023
Terms and Conditions
Elections, Trust and Safety
Meta
To address misinformation about climate change on its platform, Meta said it would attach labels to posts discussing climate change.
Source 1
02.18.2021
The Center for Countering Digital Hate identified a ‘Toxic Ten’ of climate disinformation spreaders who were responsible for 69% of users’ interactions with climate denial content on the platform. Facebook failed to label 92% of this content.
Source 1
11.02.2021
Public Announcement
Trust and Safety
Meta
Per Meta’s policies, advertisements targeted to minors may not promote “products, services or content that are inappropriate, illegal, or unsafe, or that exploit, mislead, or exert undue pressure on the age groups targeted.”
Source 1
08.16.2021
Research from the Center for Countering Digital Hate found that ads promoting abortion reversals, a dangerous procedure, were attached to 83% of searches for abortion on the platform. CCDH estimates that minors saw these types of ads over 700,000 times.
Source 1
09.14.2021
Terms and Conditions
Trust and Safety
Meta
Buying or selling user privileges on Facebook, Instagram, or WhatsApp is explicitly prohibited in Meta’s spam policy.
Source 1
10.31.2019
The Tech Transparency Project found hundreds of Facebook Groups dedicated to buying and selling Facebook manager accounts. Many of these accounts had been approved to run ads on political and social issues, which may have attracted buyers looking to interfere in elections.
Source 1
11.14.2022
Terms and Conditions
Transparency, Trust and Safety
Meta
Per Instagram’s community guidelines, “it's never OK to encourage violence or attack anyone based on their…sex, gender, gender identity, sexual orientation,” including the use of the word "groomer" to describe anyone from the LGBT community. Per Meta’s Advertising Standards, all ads on Instagram must adhere to the platform’s community guidelines.... Read more
Source 1
Source 2
Source 3
04.19.2018
Media Matters found that anti-LGBT ads misusing the term still ran, garnering almost 1 million impressions from 63 ads.
Source 1
02.21.2023
Terms and Conditions
Hate Speech
Meta
Criminalizing individuals based on their immigration status is not allowed on Meta’s platforms, nor is white nationalism.
Source 1
Source 2
11.21.2019
Analysis from Media Matters found that “Meta has earned at least $397,500 from 450 ads pushing anti-immigrant ‘invasion’ rhetoric since October 2023.” Of the 450, many also contained white nationalist rhetoric.
Source 1
02.09.2024
Terms and Conditions
Hate Speech
Meta
Meta has touted that “no tech company does more or invests more to protect elections online.”
Source 1
11.28.2023
Despite this claim, the European Union is investigating Meta for its failure to comply with the Digital Services Act, specifically its failures to mitigate risks to electoral processes on the platform.
Source 1
04.30.2024
Public Announcement
Elections
Meta
Meta banned the word ‘groomer’ when referring to the LGBT community, claiming this misuse of the word violated its hate speech policies.
Source 1
07.20.2022
Despite this policy, Instagram still chose to reinstate the account ‘Gays Against Groomers,’ an account that repeatedly attacked the LGBT community and alleged they were “normalizing pedophilia.” Media Matters reports that Gays Against Groomers repeatedly equated the community with groomers and characterized its members as mentally and morally deficient.
Source 1
10.27.2023
Public Announcement
Hate Speech
Meta
In a 2019 blog post, Meta explicitly banned white nationalism and separatism, including groups associated with these ideologies.
Source 1
03.27.2019
The Anti-Defamation League found that 69 of the 130 known hate groups it searched for had a presence on Instagram. 51 were findable by search on Facebook, despite all of them being in violation of Meta’s policies.
Source 1
08.16.2023
Public Announcement
Hate Speech
Meta
After the 2019 Christchurch attack, where an Islamophobic massacre was livestreamed on Facebook, Meta promised to "start connecting people who search for terms associated with white supremacy to resources focused on helping people leave behind hate groups."
Source 1
03.27.2019
The Anti-Defamation League investigated this policy by searching for 130 known hate groups on Facebook. They found that only 20 (15%) of the searches produced a warning label or redirected the search.
Source 1
08.16.2023
Public Announcement
Radicalization/Extremism
Meta
In 2021, Instagram launched the ability to add links to Stories, framing it as a feature for “businesses, creators and change-makers” to “inspire their communities.”
Source 1
10.27.2021
The feature often allows users to spread disinformation, and even profit from it. The journalism watchdog group Media Matters tracked instances of anti-vaxxer groups using the feature to organize in-person events, spread misinformation, and sell anti-vaccine merchandise.
Source 1
01.24.2022
Product Launch
Trust and Safety
Meta
In 2021, in response to racist abuse being thrown at Black footballers in the UK, Instagram claimed to have strengthened their policies to protect against “common antisemitic tropes and other types of hate speech.”
Source 1
02.11.2021
Instagram allowed posts declaring it “antisemitic month” to remain on the platform and found that they were not in violation of community guidelines when first reported.
Source 1
05.18.2023
Public Announcement
Hate Speech
X (Twitter)
A blog post about Twitter’s efforts regarding the 2020 election promises “tweets meant to incite interference with the election process or with the implementation of election results, such as through violent action, will be subject to removal [including] all Congressional races and the Presidential Election.”
Source 1
10.09.2020
Leaked documents from the Jan. 6th committee reveal that Twitter whistleblowers voiced concerns about the platform being used to incite violence. While whistleblowers repeatedly requested that Twitter take action against “coded incitements to violence” on the platform, no substantial action was taken by management.
Source 1
01.06.2021
Public Announcement
Elections, Radicalization/Extremism
X (Twitter)
Twitter said it would take action against links containing content that promotes hateful conduct.
Source 1
07.01.2020
More than a year after this post, the journalism watchdog group Media Matters identified a network of more than 30 Instagram accounts that collaborate and coordinate overtly racist activity. These accounts promoted white supremacist ideology and often espoused the “great replacement” conspiracy theory that has inspired multiple racial attacks, including... Read more
Source 1
11.30.2023
Public Announcement
Radicalization/Extremism
X (Twitter)
In a blog post on the 2020 general election, Twitter assured that they do not allow “anyone to use Twitter to manipulate or interfere in elections or other civic processes.”
Source 1
10.09.2020
However, a report from the research organization RAND found a high prevalence of both troll and super-connector accounts engaging in disinformation about the 2020 election. This network of more than 300,000 suspicious accounts was mostly balanced between the political left and political right, suggesting that they were created to stoke domestic... Read more
Source 1
11.03.2020
Public Announcement
Elections, National Security
X (Twitter)
In March 2020, with the coronavirus spreading rapidly, Twitter claimed it would remove misleading claims about the virus.
Source 1
03.16.2020
A NewsGuard report found multiple Twitter handles — including some that were verified — spreading misinformation about the pandemic. Robert F. Kennedy Jr., whose account was also verified, has been identified as a super-spreader of COVID-19 misinformation.
Source 1
Source 2
05.20.2020
Public Announcement
Trust and Safety
X (Twitter)
In response to congressional investigations of Russian interference in the 2016 election (Twitter sold $275,000 worth of ads to Russia's state-backed RT news agency), Twitter unveiled an "industry-leading transparency center" through which it offered "everyone visibility into who is advertising on Twitter, details behind those ads" and tools through which... Read more
Source 1
11.06.2017
In 2021, Twitter quietly disabled its Ad Transparency Center, claiming it no longer “provides its original intended value.” This was a major blow for public interest researchers.
Source 1
01.25.2021
Product Launch
Transparency
X (Twitter)
In September 2023, X touted its “ongoing commitment to combat antisemitism” as part of the company's larger commitment to combat “hate, intolerance, and prejudice.”
Source 1
09.08.2023
The Tech Transparency Project found that white supremacists leveraged conversations on X about the Israel-Hamas conflict to spread antisemitic content such as the Great Replacement theory.
Source 1
11.16.2023
Public Announcement
Hate Speech
X (Twitter)
X’s policy on violent organizations prohibits terrorist organizations on the platform.
Source 1
04.01.2023
A Tech Transparency Project report found multiple X accounts affiliated with Hezbollah, a designated terrorist organization. Not only were these accounts on X, but they were also verified.
Source 1
02.14.2023
Terms and Conditions
Radicalization/Extremism
X (Twitter)
X’s hateful conduct policy disallows attacking other users based on sexual orientation, gender, or gender identity. Their policy on abuse and harassment additionally states that these actions are not allowed on the platform.
Source 1
11.15.2016
An investigation by the Institute for Strategic Dialogue into online gendered abuse on X found that:
• Misogynistic or abusive tweets comprised 10% of the top 100 tweets (by retweets) discussing Liz Cheney and the Jan. 6 hearings.
• Tweets about WNBA player Brittney Griner often “misgendered and dehumanized” her... Read more
Source 1
01.01.2022
Terms and Conditions
Hate Speech
X (Twitter)
In 2017, Twitter testified before the Senate Judiciary Committee that the platform was “actively engage[d] with civil society and journalistic organizations on the issue of misinformation.”
Source 1
10.31.2017
Under Musk’s ownership, the platform has repeatedly sued nonprofits that conduct misinformation research, claiming that they are meddling to discredit the platform and hurt advertising revenue. This includes a lawsuit against the Center for Countering Digital Hate that was costly and damaging, but ultimately dismissed. In the first paragraph of... Read more
Source 1
08.07.2023
Congressional Testimony
Trust and Safety
X (Twitter)
Twitter’s hateful conduct policy prohibits targeting users based on protected characteristics such as ethnicity or religious affiliation. In a blog post, Twitter added that it had trained its content moderators on “cultural and historical contextualization of hateful conduct.”
Source 1
11.15.2016
Twitter has come under fire multiple times for failing to remove posts that are antisemitic and refer to the Holocaust as a "hoax." A report from the Institute for Strategic Dialogue found 19,000 pieces of content on Twitter denying the Holocaust, all created in a two-year timespan from June 2018 to July 2020.
Source 1
06.01.2018
Terms and Conditions
Hate Speech
X (Twitter)
Per X’s Hateful Conduct Policy, users “may not directly attack other people on the basis of race,...sexual orientation, gender, [or] gender identity."
Source 1
07.09.2019
The Center for Countering Digital Hate found that daily use of the n-word tripled on X after Musk took over, and the use of slurs against gay men and trans persons rose 58% and 62%, respectively.
Source 1
12.22.2022
Terms and Conditions
Hate Speech
X (Twitter)
In 2017, in response to congressional concern about Russian interference in the 2016 election, Twitter testified before the House Intelligence Committee that the company was making changes to stop foreign malign influence operations on its platform.
Source 1
11.01.2017
The following year, a report prepared for the Office of Naval Research found that Russian agents used Twitter to inflame domestic divisions in the U.S. and spread divisive content. The accounts engaged in conversations around #BlackLivesMatter and presidential candidates Trump and Clinton.
Source 1
08.14.2018
Congressional Testimony
National Security
X (Twitter)
X’s Help Center cites the company’s “responsibility to reduce the spread of potentially harmful misinformation.”
Source 1
03.01.2023
A 2023 TrustLab study of several European Union countries repeatedly flagged Twitter in its findings: (1) Twitter had the highest level of discoverability of mis/disinformation among the platforms studied, (2) mis/disinformation on Twitter received the most engagement on the site, and (3) Twitter had... Read more
Source 1
Source 2
09.01.2023
Public Announcement
Trust and Safety
X (Twitter)
Misleading synthetic media violates X’s policies.
Source 1
04.01.2023
There have been multiple instances of deepfakes, a type of synthetic media generated by AI, garnering extensive engagement on the platform before being addressed. Fake images of an explosion at the Pentagon went viral, coinciding with a brief dip in the stock market. This popularity was likely extrapolated... Read more
Source 1
Source 2
Source 3
05.22.2023
Terms and Conditions
Trust and Safety
X (Twitter)
Musk assured that his platform would not perpetuate support for fraudulent election claims.
Source 1
05.16.2023
However, “the 10 most widely shared tweets promoting a ‘rigged election’ narrative in the five days following Trump’s town hall…collectively amassed more than 43,000 retweets.”
Source 1
05.16.2023
Public Announcement
Elections
X (Twitter)
Per X’s hateful conduct policy, antisemitism is not allowed. Additionally, Musk promised that tweets containing hate would be “max deboosted,” meaning that the algorithm would not promote or recommend these types of posts.
Source 1
Source 2
11.18.2022
Analysis from the Institute for Strategic Dialogue (ISD) found that antisemitic content increased on Musk’s Twitter. ISD found more than 325K possibly antisemitic tweets circulated in an eight-month period after Musk acquired Twitter. Contrary to Musk’s claim, ISD found no significant decrease in engagement with these tweets.... Read more
Source 1
03.20.2023
Public Announcement
Hate Speech
X (Twitter)
X’s policy on hateful conduct prohibits spreading harmful stereotypes, inciting harassment, or encouraging discrimination of protected categories, such as religious affiliation.
Source 1
04.01.2023
It appears the platform's very owner is violating these policies. Since Musk took ownership of the platform, he has been amplifying antisemitic content, echoing the Great Replacement theory, and garnering the approval of multiple white nationalists.
Source 1
11.16.2023
Terms and Conditions
Hate Speech
X (Twitter)
Since 2018, Twitter’s hateful conduct policy has prohibited intentional “misgendering or deadnaming of transgender individuals.”
Source 1
01.01.2018
In April 2024, X quietly removed this requirement from its hateful conduct policy.
Source 1
04.15.2024
Terms and Conditions
Hate Speech
X (Twitter)
In a blog post discussing rule-violating content, the company assured users that it would “continue to invest heavily in improving both the speed and comprehensiveness of our detections.”
Source 1
07.28.2022
Mass layoffs at the company’s election integrity teams days before the 2022 midterm elections raised concerns about the company’s ability to spot false narratives harming civic processes.
Source 1
11.04.2022
Public Announcement
Elections, Trust and Safety
YouTube
Per Google's ad policies, advertisers may not solicit viewers to pay for "official services that are directly available via a government or government delegated provider." Because U.S. voters can easily check their voter status on states' official websites free of charge, soliciting users to pay to check their voter status... Read more
Source 1
10.28.2022
Tech Transparency Project found a network of ads misleading users about crucial election information in the lead-up to the 2022 midterm election. Some ads solicited users to pay to check their voter status.
Source 1
11.07.2022
Terms and Conditions
Elections
YouTube
In 2017, YouTube faced significant public criticism for its failure to moderate harmful content, such as child abuse material. In a blog post, YouTube's CEO claimed the platform was “taking actions to protect advertisers and creators from inappropriate content” by “carefully considering which channels and videos are eligible for advertising.”
Source 1
Source 2
12.05.2017
A year later, this type of content was still readily available on the platform. A CNN investigation found that YouTube ran ads from major companies across multiple industries, as well as government agencies, on videos promoting white supremacy, pedophilia, and propaganda.
Source 1
04.20.2018
Public Announcement
Radicalization/Extremism, Kids' Safety
YouTube
YouTube’s policies disallow “content that encourages dangerous or illegal activities that risk serious physical harm or death.”
Source 1
04.28.2022
Despite this, Tech Transparency Project found 435 videos on YouTube promoting militia activity, some promoting violent tactics. Several videos were affiliated with the Three Percenters, a militia group connected to Jan 6th. Other videos demonstrated militia training exercises with shooting drills.
Source 1
05.13.2022
Terms and Conditions
Radicalization/Extremism
YouTube
A brief from Google for the Supreme Court promised that “YouTube’s systems are designed to identify and remove prohibited content,” adding, “Since 2019, YouTube’s recommendation algorithms have not displayed borderline videos (like gory horror clips) that even come close to violating YouTube’s policies.”
Source 1
01.01.2019
A Tech Transparency Project investigation found that YouTube repeatedly recommended content depicting school shootings and serial killers to under-18 engagement accounts.
Source 1
05.16.2023
Congressional Testimony
Radicalization/Extremism, Kids' Safety
YouTube
Per their firearms policy, “content intended to sell firearms, instruct viewers on how to make firearms, ammunition, and certain accessories, or instruct viewers on how to install those accessories is not allowed on YouTube.”
Source 1
09.23.2021
A Tech Transparency Project investigation found its 14-year-old engagement account repeatedly exposed to videos about firearms after watching a series of gaming videos. These videos — many of which depicted school shooting scenes from movies or TV, provided instructions on assembling or aiming firearms, or advertised firearms —... Read more
Source 1
05.16.2023
Terms and Conditions
Radicalization/Extremism
YouTube
In 2019, YouTube highlighted its multipronged efforts to decrease users’ exposure to harmful content, including limiting recommendations of hateful and supremacist content.
Source 1
06.25.2019
The Mozilla Foundation’s crowdsourced analysis of YouTube’s algorithm found that 71% of all videos reported to researchers came from YouTube’s recommendation algorithm. Reported videos were 40% more likely to come from recommendations than from the search feature.
Source 1
07.01.2020
Public Announcement
Radicalization/Extremism
YouTube
YouTube’s harassment and cyberbullying policy disallows “content that contains prolonged insults or slurs based on someone's intrinsic attributes. These attributes include their protected group status [such as sex or gender, and] physical attributes.”
Source 1
05.01.2019
Analysts from the Institute for Strategic Dialogue found multiple concerning examples suggesting the prevalence of gendered abuse on the platform. Their findings include: (1) 19 channels (with a combined subscriber count of 390k) dedicated to posting Andrew Tate content, with hundreds of misogynistic comments posted, and (2) 361 videos about... Read more
Source 1
01.01.2022
Terms and Conditions
Hate Speech
YouTube
In 2019, YouTube’s hate speech and harassment policy was updated to “specifically [prohibit] videos alleging that a group is superior in order to justify discrimination, segregation or exclusion…[including] videos that promote or glorify Nazi ideology, which is inherently discriminatory."
Source 1
06.05.2019
When researchers from the Anti-Defamation League searched YouTube for 130 different hate groups and movements, they found that about a third had at least one channel on YouTube. The researchers found a total of 87 violative channels, some of which were more than ten years old.
Source 1
08.10.2020
Public Announcement
Radicalization/Extremism
YouTube
YouTube's search bar often auto-completes terms when users begin typing. Google clarified that its auto-predictions are not supposed to function for terms that violate their policy, including predictions "associated with the promotion, condoning or incitement of hatred against groups."
Source 1
05.13.2021
When researchers from the Anti-Defamation League searched YouTube for 130 different hate groups and movements, they found that the prediction feature suggested search terms for 36 of the 130 groups.
Source 1
05.01.2023
Terms and Conditions
Hate Speech
YouTube
In a 2020 blog post about government-backed disinformation, Google outlined its ongoing efforts regarding coordinated influence operations on its platforms. The company underscored its commitment to "swiftly remove such content from our platforms and terminate these actors’ accounts."
Source 1
05.27.2020
In 2023, researchers identified a pro-China influence campaign on YouTube with thousands of videos and millions of views. The videos, which amassed more than 100 million views and 700,000 subscribers, sometimes used generative A.I. to push narratives ridiculing the U.S. or praising China. The researchers at the Australian Strategic Policy... Read more
Source 1
Source 2
12.14.2023
Public Announcement
National Security
YouTube
A YouTube blog post from 2019 clarified that the platform “will remove content denying that well-documented violent events, like the Holocaust...took place.”
Source 1
06.06.2019
An Institute for Strategic Dialogue report found 9,500 pieces of content mentioning ‘holohoax’ were created on the platform between 2018 and 2020.
Source 1
08.02.2021
Public Announcement
Radicalization/Extremism
YouTube
In written testimony, YouTube’s Vice President of Global Affairs claimed the platform “made significant investments over the past few years in policies, technology, and teams that help provide kids and families with the best protections possible.”
Source 1
10.25.2021
In 2023, Google laid off about a third of Jigsaw, its unit that creates tools to address trust and safety-related concerns. YouTube’s parent company, Alphabet, has laid off approximately 12,600 people, although it is unclear which divisions were affected the most.
Source 1
Source 2
01.20.2023
Congressional Testimony
Trust and Safety
YouTube
YouTube has policies against “prolonged insults” or hate speech on the basis of people’s protected attributes, including gender. Moreover, YouTube considers “deliberate misgendering” as potentially violative of its monetization guidelines.
Source 1
Source 2
Source 3
02.12.2024
Media Matters reported multiple monetized high-profile accounts violating these policies. These creators, who collectively have more than 23 million subscribers, all posted monetization-eligible videos deadnaming or misgendering individuals. These videos garnered over 15 million views. Ben Shapiro, who has the highest subscriber count in Media Matters'... Read more
Source 1
03.06.2024
Terms and Conditions
Hate Speech
YouTube
YouTube amended its misinformation policy to address abortions, including banning “content that contradicts local health authorities or WHO guidance” on the safety of “chemical and surgical abortion.”
Source 1
07.20.2022
The Institute for Strategic Dialogue uncovered multiple videos spreading false information about abortion pill reversals, a widely debunked and at times dangerous procedure.
Source 1
10.11.2022
Terms and Conditions
Trust and Safety
YouTube
After the 2020 election, YouTube said it would remove content denying the election’s outcome.
Source 1
12.09.2020
Years later, YouTube gave up on removing posts denying past election results, a reversal that worried election officials about its effects on the 2024 election.
Source 1
Source 2
06.02.2023
Public Announcement
Elections, Trust and Safety
YouTube
Per Google’s ad policies, ads may not promote dangerous services or misrepresent the services offered.
Source 1
08.16.2021
The Center for Countering Digital Hate found that 83% of Google searches for abortions yielded ads for abortion reversals, a dangerous procedure. A quarter of ads came from anti-choice organizations falsely advertising as crisis pregnancy centers.
Source 1
06.15.2023
Terms and Conditions
Trust and Safety
YouTube
YouTube claims to apply its misinformation policies globally, regardless of language.
Source 1
10.06.2022
In both the 2020 and 2022 elections, Media Matters found that Spanish-language election misinformation persisted on the platform. Often, these videos were not labeled with warnings, despite YouTube’s promises that they would be.
Source 1
Source 2
11.04.2020
Public Announcement
Elections
YouTube
In 2017, YouTube launched Super Chats, a feature that allows viewers to pay for their comments to be featured at the top of chats on live streams. This revenue is shared between the live stream host and YouTube.
Source 1
01.12.2017
In 2018, Buzzfeed reported on how Super Chats contributed to hateful speech flourishing in the comment sections of these livestreams. In response to the reporting, YouTube said it would review its policies regarding the feature. However, research from the Institute for Strategic Dialogue reveals that Super Chats are still perpetuating... Read more
Source 1
05.05.2022
Product Launch
Radicalization/Extremism, Elections
TikTok
TikTok’s election integrity policy emphasized its commitment to combating the spread of misinformation on the platform, which includes removing content if it “causes harm to individuals, our community or the larger public.” The policy provides examples of content they would remove, such as “false claims that seek to erode trust... Read more
Source 1
Source 2
08.05.2020
When a New York University study submitted disinformation ads to TikTok, the platform accepted 90% of them, many of which “contain[ed] the wrong election day, encouraging people to vote twice, dissuading people from voting, and undermining the electoral process.”
Source 1
10.01.2022
Terms and Conditions
Elections
TikTok
TikTok’s Community Guidelines tout “robust policies around specific types of misinformation like medical, climate change, and election misinformation, as well as misleading AI-generated content, conspiracy theories, and public safety issues like natural disasters.”
Source 1
Source 2
08.05.2020
NewsGuard investigated search results for popular news topics, such as Russia and Ukraine, COVID-19 vaccines, and school shootings. The searches yielded misinformation 20 percent of the time.
Source 1
09.01.2022
Terms and Conditions
Trust and Safety
TikTok
In response to then-President Trump's executive order sanctioning TikTok in 2020, TikTok promised "that TikTok has never shared user data with the Chinese government, nor censored content at its request."
Source 1
10.07.2020
However, BuzzFeed’s reporting on leaked audio from internal meetings reveals that “eight different employees describe situations where U.S. employees had to turn to their colleagues in China.” The recording also reveals that “engineers in China had access to U.S. data between September 2021 and January 2022, at the very least.”... Read more
Source 1
06.17.2022
Congressional Testimony
National Security
TikTok
TikTok’s Community Guidelines do not allow users to "threaten or incite violence, or to promote violent extremism. We do not tolerate discrimination: content that contains hate speech or hateful behavior has no place on TikTok.”
Source 1
12.21.2018
Media Matters analyzed TikTok’s recommendation system and found that the For You Page recommends extremist content related to QAnon, the Patriot Party, and the Three Percenters. After these accounts were followed, TikTok would suggest other accounts with similar extremist ideology, such as that of the Oath Keepers. Moreover, an Institute for Strategic... Read more
Source 1
Source 2
03.26.2021
Terms and Conditions
National Security, Radicalization/Extremism
TikTok
TikTok explicitly disallows “human trafficking and smuggling.”
Source 1
09.15.2022
The Tech Transparency Project found that human smugglers often advertised their services on TikTok. Researchers found that entering "viajes USA" (USA trips) into the platform's search bar yielded dozens of accounts advertising related services.
Source 1
09.15.2022
Terms and Conditions
National Security, Trust and Safety
TikTok
In response to reports about incel culture on TikTok, a spokesperson said, "hate has no place on TikTok, and we do not tolerate any content or accounts that attack, incite violence against or otherwise dehumanise people on the basis of their gender. We work aggressively to combat hateful behavior by... Read more
Source 1
11.21.2021
While TikTok bans searches for the term 'incel,' the Global Network on Extremism and Technology (GNET) has documented how the incel community has continued to develop a significant presence on the platform and has easily adapted to avoid content moderation. Today, users employ a variety of tactics to evade moderation, including... Read more
Source 1
05.07.2024
Public Announcement
Radicalization/Extremism
TikTok
TikTok tweeted that the company “has never been used to 'target' any members of the U.S. government, activists, public figures or journalists.”
Source 1
10.20.2022
A 2024 Director of National Intelligence report found that "TikTok accounts run by a PRC propaganda arm reportedly targeted candidates from both political parties during the U.S. midterm election cycle in 2022.” Forbes reported that these accounts stoked bipartisan divides about candidates and called into question policy decisions made by... Read more
Source 1
Source 2
11.01.2022
Public Announcement
National Security
TikTok
TikTok claims to “label the accounts and videos of media entities that we know to be subject to editorial control or influence by state institutions.”
Source 1
01.18.2023
The Alliance for Securing Democracy (ASD) found that Russia has been using TikTok to "push its own narrative” in diminishing Western support for Ukraine during the war. ASD’s research also identified 31 news accounts that were Russian-funded but not labeled. Research from Brookings also found that Russian state-affiliated TikTok accounts... Read more
Source 1
Source 2
03.30.2023
Terms and Conditions
National Security
TikTok
A TikTok spokeswoman promised the platform would “continue to respond to the war in Ukraine with increased safety and security resources to detect emerging threats and remove harmful misinformation.”
Source 1
03.05.2022
Despite this claim, misinformation about the war in Ukraine has continued to be abundantly available on the platform, perpetuated by 13,000 fake accounts with more than one million combined followers. Videos amplified pro-Russian narratives, falsely posed as news outlets, or alleged falsehoods about corrupt Ukrainian officials.
Source 1
Source 2
12.14.2023
Public Announcement
National Security
TikTok
TikTok touts its efforts to “promote a safe and age-appropriate experience for teens 13-17” and to combat medical misinformation surrounding COVID-19.
Source 1
Source 2
05.12.2021
A NewsGuard investigation analyzed the For You Pages of nine minors on TikTok and found that all but one of the accounts were exposed to COVID-19 misinformation, with some videos implying that the vaccine kills people.
Source 1
09.20.2021
Terms and Conditions
Trust and Safety, Kids' Safety
TikTok
TikTok said it would start implementing banners on all videos containing COVID-19 vaccine content to discourage the spread of misinformation.
Source 1
12.15.2020
When the Institute for Strategic Dialogue analyzed more than 6,000 videos discussing the COVID-19 vaccine, they found that 58% of them lacked banners. Of the videos analyzed containing the hashtag #NoToTheJab, this percentage increased to 76%. Of the analyzed audio containing anti-vaccine misinformation, that percentage increased to 83%.
Source 1
11.04.2021
Public Announcement
Trust and Safety
TikTok
Speaking about the platform's community guidelines, a TikTok spokesperson highlighted how they “specifically call out misogyny as a hateful ideology [and are] crystal clear that this content is not allowed on our platform.” The platform helped misogynistic influencer Andrew Tate gain a following with billions of views, until he was... Read more
Source 1
Source 2
Source 3
08.20.2022
Despite Tate’s ban, and the ban of influencer Sneako for similar violent and misogynistic behavior, TikTok continues to host content glorifying both figures. Although their accounts are now banned from the app, hashtags referencing them have millions of views, and there are hundreds of fan accounts.
Source 1
01.10.2024
Public Announcement
Radicalization/Extremism
TikTok
Per TikTok’s community guidelines, “general conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as ‘the government’ or a ‘secret society,’” are not eligible for the platform’s For You Page. TikTok also claims it will remove conspiracy theories... Read more
Source 1
03.01.2023
Media Matters created TikTok accounts and interacted with tradwife content, which espouses traditional gender roles. Afterwards, the accounts' For You Pages (TikTok's feed based entirely on algorithmic recommendations) contained multiple far-right conspiracy theories. Among these posts were false claims about the upcoming implementation of martial law and... Read more
Source 1
05.01.2024
Terms and Conditions
Trust and Safety, Radicalization/Extremism, Hate Speech
TikTok
According to TikTok’s policies, organizations that promote violence on or off the platform are not allowed.
Source 1
01.12.2021
A Media Matters investigation found that two prominent militia groups — the Three Percenters and American Patriot Women — had an active presence on TikTok. Some of their content was searchable and available on the platform’s For You Page (TikTok's personalized feed based entirely on algorithmic recommendations). Similarly, hashtags relating... Read more
Source 1
Source 2
01.12.2021
Terms and Conditions
Radicalization/Extremism
TikTok
TikTok's Head of Trust and Safety wrote that "our goal is to identify and remove violative content as swiftly as possible, and ...to help us achieve this, we deploy a combination of automated technology and skilled human moderators who can make contextual decisions on nuanced topics like misinformation, hate speech,... Read more
Source 1
Source 2
03.31.2023
A Guardian investigation found that moderators are often asked to review content that is not in their language, raising the question of how effective TikTok’s language moderation is. While there was previously a button for moderators to indicate that content was not in their language, this option was... Read more
Source 1
Source 2
Source 3
Source 4
12.21.2023
Public Announcement
Trust and Safety
Meta
Meta's Instagram, Facebook, and Threads all have Terms and Conditions that prohibit voter interference, threats of violence, and election misinformation.
Source 1
Source 2
Source 3
12.05.2023
Meta's commitments to uphold election integrity don't extend to all of its platforms. As reported by Politico, WhatsApp Channels (a feature that transforms WhatsApp's private messaging into a one-way broadcasting tool) are governed by community guidelines that offer limited clarity around election policies and do little to disallow election-related misinformation.
Source 1
06.07.2024
Terms and Conditions
Trust and Safety
Meta
To comply with Meta's policies, advertisers using the platform must provide a government ID and residential address as part of their authorization process. Furthermore, advertisers must provide disclaimers that explain the entity placing the ad.
Source 1
Source 2
01.03.2020
A report from the Institute for Strategic Dialogue (ISD) identified the Patriots Run Project (PRP), a network of 26 domains, 10 websites, 15 Facebook pages, and 13 linked Facebook groups pushing anti-establishment politicians and falsehoods about election results and elected officials. Although the group claimed to be run by citizens... Read more
Source 1
06.13.2023
Terms and Conditions
Trust and Safety
TikTok
TikTok claims to "promote a caring working environment for employees, and trust and safety professionals especially. We use an evidence-based approach to develop programs and resources that support moderators’ psychological well-being."
Source 1
Source 2
11.05.2023
A Guardian investigation found that moderators are subjected to extreme working conditions. Moderators often felt extremely overwhelmed in their positions and were expected to meet strict productivity standards, with software that tracked their activity and would lock their computers after just five minutes of idleness. According to moderators, their “speed... Read more
Source 1
12.21.2023
Public Announcement
Trust and Safety
Meta
Meta disallows coordinated inauthentic behavior, which is when actors use a mixture of authentic, fake, and duplicated accounts to deceive others about their identities and spread a message.
Source 1
12.06.2018
The Tech Transparency Project found multiple Instagram accounts purporting to be legitimate "pharmacies" that were in actuality connecting minors to counterfeit prescription pills.
Source 1
12.07.2021
Terms and Conditions
Kids' Safety
TikTok
Climate change misinformation that “undermines well-established scientific consensus” is not allowed on TikTok.
Source 1
04.19.2023
The BBC investigated the enforcement of this policy and found that TikTok failed to remove 95% of the content they flagged as containing climate misinformation. These posts collectively garnered almost 30 million views. Another report by Media Matters found that Spanish-language climate misinformation was largely unmoderated on the platform,... Read more
Source 1
Source 2
06.29.2023
Public Announcement
Radicalization/Extremism
Meta
Meta claims that its automated tools for detecting and removing harmful content are highly effective. The company’s 2023 Community Standards Enforcement report determined that Meta’s proactive detection technology removed 87.8% of bullying and harassment content, 99% of child exploitation content, and 95% of hate speech before users reported it. Meta... Read more
Source 1
Source 2
Source 3
08.11.2020
In 2023, Meta whistleblower Arturo Béjar revealed that Meta’s reported figures apply only to the content that the company ultimately removes, which is very different from the totality of violative content. This is a major sleight of hand. Furthermore, to grade its own homework, Meta used a measurement called prevalence:... Read more
Source 1
Source 2
10.17.2021
Terms and Conditions
Kids' Safety
Meta
Facebook’s Messenger Kids app was built with the promise that children wouldn't be able to talk to users who haven't been approved by their parents. CEO Mark Zuckerberg referred to the chat platform as “industry-leading work” and “better and safer than alternatives.”
Source 1
Source 2
09.04.2017
Despite Facebook’s promises, a flaw in Messenger Kids allowed thousands of children to be in group chats with users who hadn’t been approved by their parents. Facebook tried to quietly address the problem by closing the violating group chats and notifying individual parents. The problems with Messenger Kids were only made... Read more
Source 1
07.22.2019
Public Announcement
Kids' Safety
Meta
In order to comply with the Children's Online Privacy Protection Act, Meta’s own Codes of Conduct prohibit users under the age of 13 from signing up for an Instagram or Facebook account. In his 2021 testimony before the Senate Commerce Committee, Instagram head Adam Mosseri reiterated, “If a child is... Read more
Source 1
Source 2
12.08.2021
According to the unsealed legal complaint brought by 33 state attorneys general against Meta, the company has received more than 1.1 million reports of users under the age of 13 on its Instagram platform since early 2019, yet it “disabled only a fraction” of those accounts. Instagram, in particular, actively... Read more
Source 1
Source 2
Source 3
Source 4
11.22.2023
Congressional Testimony
Kids' Safety
Meta
In a 2021 blog post, CEO Mark Zuckerberg rebuked claims that Meta was operating in secrecy by saying that the company had “established an industry-leading standard for transparency and reporting.”
Source 1
10.05.2021
Facebook did operate CrowdTangle, a leading data analytics and social monitoring tool that allowed academics, watchdog organizations, and journalists to identify harmful content on the platform, including CSAM. But in 2024, Facebook shut down CrowdTangle. It did so by quietly reassigning or removing team members, including the tool’s former CEO... Read more
Source 1
Source 2
Source 3
Source 4
08.18.2022
Public Announcement
Kids' Safety
Meta
In response to a 2023 report by the Guardian, a Meta spokesperson said, “The exploitation of children is a horrific crime – we don't allow it and we work aggressively to fight it on and off our platforms.”
Source 1
04.27.2023
In 2024, the Wall Street Journal reported that two internal teams raised concerns that Meta’s subscriber feature sold exclusive content from child influencers "to an audience that was overwhelmingly male and often overt about sexual interest." The Guardian’s 2023 investigation confirmed that Facebook and Instagram were still operating as major... Read more
Source 1
Source 2
02.22.2024
Public Announcement
Kids' Safety
Meta
A spokesperson for Meta, which owns Instagram, said that keeping young people safe was the company's top priority. "We use advanced technology and work closely with the police and CEOP [Child Exploitation and Online Protection] to aggressively fight this type of content and protect young people." A year later, Facebook’s... Read more
Source 1
Source 2
03.01.2019
In 2019, the National Society for the Prevention of Cruelty to Children found that Instagram was the #1 platform for child grooming in the UK; they identified more than 5,000 crimes of sexual communication with children and a 200% increase in the use of Instagram to abuse children, all in... Read more
Source 1
Source 2
Source 3
01.01.2020
Public Announcement
Kids' Safety
Meta
Meta explicitly prohibits material that sexually exploits or endangers children, including any transactions or content that involves trafficking, coercion, sexually explicit language, and non-consensual acts.
Source 1
12.22.2022
According to investigations by the Wall Street Journal and researchers at Stanford University and the University of Massachusetts Amherst, Instagram’s recommendation system and hashtags help promote a vast network of pedophiles and guide them to content sellers. A 2022 study by the National Center On Sexual Exploitation found that 22%... Read more
Source 1
Source 2
Source 3
Source 4
Source 5
06.07.2023
Terms and Conditions
Kids' Safety
Meta
Meta has long purported to value the mental health of its young users, including and especially teenage girls. In a 2021 blog post, Zuckerberg wrote that in “serious areas like loneliness, anxiety, sadness, and eating issues -- more teenage girls who said they struggled with that issue also said Instagram... Read more
Source 1
10.05.2021
Zuckerberg wrote this despite the fact that internal presentations from March 2020 found that Instagram caused negative body image perceptions for a third of girls, as reported by the Wall Street Journal. Internal researchers at Meta warned that Instagram’s monetization of “face and body,” the pressure to look a certain... Read more
Source 1
03.01.2020
Public Announcement
Kids' Safety
Meta
Meta has clearly stated that it removes content that depicts or encourages suicide or self-injury, including graphic imagery and real-time depictions. This includes promising to place a sensitivity screen over content that doesn't violate its policies but may still be upsetting to some users.
Source 1
07.01.2019
Instagram’s internal research found that “13% of UK teenagers and 6% of US users” traced a desire to kill themselves back to Instagram. The BBC found that Instagram “removed almost 80% less graphic content about suicide and self-harm” during the height of the COVID-19 pandemic. Despite these findings in 2020... Read more
Source 1
Source 2
Source 3
09.15.2021
Terms and Conditions
Kids' Safety
Meta
Amid criticisms of its platforms, Meta has rolled out some 30 parental controls that let parents manage who their kids can talk to and how much time they spend on Facebook and Instagram.
Source 1
06.27.2023
Most of the parental controls require both the parent and the minor to opt in. While parents can supervise some of their teen’s activities and see how much time they spend on the apps, they have no ability to limit that time.
Source 1
Source 2
06.27.2023
Product Launch
Kids' Safety
Meta
Instagram’s Codes of Conduct prohibit bullying and offensive comments, and the platform publicly touts its commitment to enforcing these rules.
Source 1
Source 2
09.22.2020
Unredacted documents in New Mexico’s lawsuit against Meta show that a 2021 internal Meta estimate found that as many as 100,000 children received sexual harassment every day. This finding came as the company “dragged its feet” on implementing new safeguards for minors and showed a “historical reluctance” to keep children safe... Read more
Source 1
01.01.2021
Terms and Conditions
Kids' Safety
Meta
Both Instagram and Facebook promise to allow users to report harmful or upsetting content.
Source 1
Source 2
11.06.2019
After 2019, internal Meta documents show that the company added steps to the reporting process to discourage users from filing reports. And while users could still flag things that upset them, Meta shifted resources away from reviewing them. Meta said the changes were meant to discourage frivolous reports and educate... Read more
Source 1
11.02.2023
Public Announcement
Kids' Safety
Snapchat
In 2017, Snapchat launched Snap Map, a feature that allows users to share their current location and see where their friends are. Snap Map was supposed to help bridge the gap between social media and the real world by bringing users together in person.
Source 1
06.21.2017
When it launched, Snap Map displayed users’ locations automatically. To stop Snap Map from revealing their location, users had to go into their settings to “ghost” themselves so that their friends could not see where they were. This feature has exposed minors to severe (and predictable) harms, including stalking, predation, and sexual assault. In... Read more
Source 1
Source 2
Source 3
07.25.2018
Product Launch
Kids' Safety
Snapchat
Snapchat claims it is committed to fighting the national fentanyl poisoning crisis. That means using “cutting-edge technology” to help proactively find and remove drug content and accounts, as well as working with law enforcement and other groups to raise awareness of drug issues, fentanyl, and counterfeit pills.
Source 1
01.25.2023
Over 60 family members of children who obtained illegal drugs through Snapchat are suing the company. In all but two cases, the child died after ingesting the drugs, many of which were laced with fentanyl. Some of Snapchat's features that set it apart from other apps — such as automatically... Read more
Source 1
Source 2
01.03.2024
Public Announcement
Kids' Safety
Snapchat
In 2013, Snapchat introduced the “Speed Filter,” which let users capture how fast they were moving and share it with friends. Snap says it is “deeply committed to the safety and well-being of our community, and our teams, products, policies, and partnerships apply safety by design principles to keep Snapchatters... Read more
Source 1
01.01.2013
The filter was connected to several deadly car crashes, including a 2017 case where three men — two 17-year-olds and a 20-year-old — died when a car crashed into a tree. "One Snap captured the boys' speed at 123 mph," according to court documents, as covered by the BBC. Even... Read more
Source 1
Source 2
Source 3
02.25.2020
Product Launch
Kids' Safety
Snapchat
Snapchat strictly prohibits bullying and harassment of any kind, and the company’s guidelines explicitly frame these prohibitions as core values.
Source 1
01.11.2018
Despite its proclamations, Snapchat facilitated applications like Yolo and LMK that allowed users to hide their identities. These apps, which were integrated into the Snapchat messaging platform through Snap Kit (the company’s suite of tools for third-party developers), greatly contributed to bullying. In 2020, cyberbullying facilitated by these... Read more
Source 1
Source 2
Source 3
05.11.2021
Terms and Conditions
Kids' Safety
Snapchat
In order to comply with the Children's Online Privacy Protection Act, Snapchat explicitly prohibits users under the age of 13.
Source 1
07.13.2013
Research suggests that Snapchat is the most popular app for underage users. According to a study from Harvard’s T.H. Chan School of Public Health, Snapchat has nearly 3 million users under the age of 13. Underage users account for an estimated 13% of the platform’s usage. British regulators also found... Read more
Source 1
Source 2
03.06.2023
Terms and Conditions
Kids' Safety
Snapchat
Snap claims to be “deeply committed to the safety and wellbeing of its community,” including employing a number of wellbeing features to “educate and empower members of the Snapchat community to support friends who might be struggling with their own social and emotional wellbeing.”
Source 1
07.01.2021
Snapchat’s claims fly in the face of its very design. Popular filters automatically enlarge users’ eyes, lift their cheekbones, and lighten their skin. In-app additions like FaceTune allow users to perfect their features and share an unrealistic version of themselves with others. Snapchat and other image-based platforms make users desperate... Read more
Source 1
Source 2
Source 3
Source 4
06.12.2016
Public Announcement
Kids' Safety
Snapchat
Snap claims to “prohibit any activity that involves sexual exploitation or abuse of a minor, including sharing child sexual exploitation or abuse imagery, grooming, or sexual extortion (sextortion), or the sexualization of children.” The company says that it reports all identified instances of child sexual exploitation to authorities, including attempts... Read more
Source 1
Source 2
12.11.2022
The National Society for the Prevention of Cruelty to Children, a British child protection nonprofit, found that Snapchat was the site most used to share child abuse images, appearing in 43% of cases in which a social media site was flagged.
Source 1
02.22.2023
Terms and Conditions
Kids' Safety
TikTok
TikTok allows users to set their accounts to private, which gives them the ability to approve the people who can follow them, watch their videos, see their bio, and more. TikTok’s security policies prohibit users from sharing their login credentials with others.
Source 1
Source 2
02.03.2021
In theory, these accounts are designed to protect the privacy of users. But in reality, they have often served as hubs for CSAM. A 2022 Forbes investigation found that TikTok’s “private” accounts were serving as portals for CSAM and the trafficking of underage users. The content is posted in private... Read more
Source 1
Source 2
11.14.2022
Terms and Conditions
Kids' Safety
TikTok
In order to comply with the Children's Online Privacy Protection Act, TikTok prohibits users under the age of 13. “If we believe someone under 13…we will ban their account.”
Source 1
Source 2
07.01.2013
TikTok’s own internal research shows that a third of its U.S. users are under 14. Researchers at Harvard’s T.H. Chan School of Public Health estimate that TikTok has more than 3 million users under the age of 13, and that these underage users account for 64% of the platform’s usage.... Read more
Source 1
Source 2
Source 3
05.14.2020
Terms and Conditions
Kids' Safety
TikTok
Ahead of a congressional hearing before the House Committee on Energy and Commerce where its CEO, Shou Zi Chew, was set to testify, TikTok announced a 60-minute watch limit for teen users. The limit will automatically alert users who are registered as under 18 once they've hit the one-hour mark... Read more
Source 1
03.23.2023
The time limit TikTok designed for teens is more for show — it doesn't prevent younger users from watching TikTok. In reality, teens spend around 1.5 hours a day on the app. At the same time, Douyin, the Chinese version of TikTok, has multiple measures that actually limit teens’ usage of the... Read more
Source 1
Source 2
Source 3
03.23.2023
Product Launch
Kids' Safety
TikTok
TikTok says it has a "zero tolerance" policy against child predators and grooming behaviors. That includes using automatic detection tools to prevent communications between minors and adults, and not allowing an account to receive or send direct messages if the user registers themselves as being under 16.
Source 1
03.05.2020
A 2020 investigation by the BBC found that TikTok allowed direct messages from older men to accounts of young female users who were clearly labeled as underage. The company also failed to remove the accounts of the men who continued to message the underage user, even after she told them... Read more
Source 1
Source 2
Source 3
11.01.2020
Public Announcement
Kids' Safety
TikTok
TikTok announced enhanced "Family Pairing" features, which would give parents greater control over their children's TikTok usage and Direct Messages.
Source 1
04.15.2020
Ireland’s Data Protection Commission found that this feature “failed to verify whether the user was actually the child user’s parent or guardian.” Instead, it allowed any adult to pair up with users under the age of 16, presenting obvious potential risks for children.
Source 1
09.15.2023
Product Launch
Kids' Safety
X (Twitter)
Elon Musk announced in 2022 that X had significantly increased its efforts to combat child sexual exploitation on its platform, and said addressing child sexual exploitation content on the social media platform was “Priority #1.”
Source 1
11.20.2022
Despite Musk’s claims that child safety was his number one priority for the platform, Australia’s eSafety Commissioner issued a report showing that, in the three months after Musk took ownership of the company, the share of child sexual abuse material the platform detected automatically fell from 90% to 75%. While Musk claimed to... Read more
Source 1
Source 2
10.16.2023
Public Announcement
Kids' Safety
X (Twitter)
In a report for Australia’s eSafety Commissioner, Twitter claimed to proactively prevent CSAM through various tools. These tools supposedly detect CSAM imagery and videos in tweets and DMs and block URLs linking to known CSAM material in both public tweets and direct messages.
Source 1
01.31.2023
The report from Australia's eSafety Commissioner exposed the many ways in which the company lacked essential tools to prevent CSAM. Twitter stated to the Commissioner that it "is not a service used by large numbers of young people," but recognized "that we need policies to protect against this." The company also admitted... Read more
Source 1
10.16.2023
Terms and Conditions
Kids' Safety
X (Twitter)
X claims to have a zero-tolerance policy on child exploitation.
Source 1
01.01.2022
Researchers at the Stanford Internet Observatory found that Twitter failed to take down dozens of images of child sex abuse. They identified 128 Twitter accounts selling child sex abuse material and 43 instances of known CSAM. Forbes found that illegal material remains alarmingly easy to find on Twitter, in multiple... Read more
Source 1
Source 2
Source 3
Source 4
09.29.2022
Terms and Conditions
Kids' Safety
X (Twitter)
X launched its new ID verification feature for X Premium subscribers, allowing paying users to confirm their identity through government-issued ID. This verification process purported to involve a matching system that uses both the user's license (or equivalent) and a selfie taken during the confirmation steps. The initiative aims to... Read more
Source 1
Source 2
09.15.2023
Despite the claims of "verification," users can obtain a verified account in the app by providing only a phone number and a bank account, without the need for ID confirmation. In addition, X makes claims about reducing impersonation and spam but doesn't offer verification tools to all users. X’s policy... Read more
Source 1
Source 2
Source 3
04.21.2023
Public Announcement
Kids' Safety
X (Twitter)
Twitter’s application programming interface (API) was once one of the internet’s leading research tools. A bedrock of the company’s transparency measures, free API access empowered critical research into topics such as democracy, child safety, public health, national security, mental health, crisis responses, and more. Free API access previously allowed researchers to... Read more
Source 1
Source 2
Source 3
Source 4
Source 5
09.20.2006
In May 2023, Twitter changed its policies and raised the cost of API access to $42,000 a month or more for an enterprise account. For many researchers and academic institutions, this cost proved too high. According to the Coalition for Independent Technology Research, ending free access to the API jeopardized... Read more
Source 1
Source 2
Source 3
Source 4
05.31.2023
Product Launch
Kids' Safety
X (Twitter)
X claims to prohibit the promotion or encouragement of suicide or self-harm. Violations of this policy include promoting or encouraging self-harming behaviors, seeking partners for group suicides or suicide games, and sharing information that aids self-harm or suicide.
Source 1
Source 2
11.30.2020
In December 2022, Musk removed Twitter’s suicide-prevention hotline feature. After the removal was reported, Musk denied that the feature had been taken down, while Twitter’s head of trust and safety, Ella Irwin, confirmed the removal but said it was temporary. The feature was later restored. That same month, Business Insider... Read more
Source 1
Source 2
Source 3
12.23.2023
Terms and Conditions
Kids' Safety
Meta
Meta has an entire page dedicated to stopping sextortion. Sextortion, which occurs when someone "shares or threatens to share intimate images without consent," is against Meta's policies.
Source 1
Source 2
03.07.2023
Despite touting preventative measures to detect and prevent sextortion, in July 2024, Meta removed 63,000 accounts believed to be linked to Nigerian sextortion scammers. Additionally, Meta removed Facebook accounts, pages, and groups that discussed how to successfully blackmail victims. According to the FBI, "sextortion is one of the fastest growing... Read more
Source 1
05.31.2024
Terms and Conditions
Kids' Safety
X (Twitter)
In 2016, Twitter launched its Trust and Safety Council, an advisory group of around 100 independent safety advocates, academics, and researchers who would play a “foundational part” in ensuring safety and integrity on the platform, including addressing issues like hate speech, child exploitation, suicide, and self-harm. Upon taking control of... Read more
Source 1
Source 2
Source 3
02.09.2016
Musk later “changed his mind” about the formation of a new content moderation council. Former council members soon became the target of online attacks, in part spurred on by Musk’s criticism. Yoel Roth, the former head of Trust and Safety at Twitter, received online threats that Musk amplified, including the... Read more
Source 1
Source 2
12.13.2022
Public Announcement
Kids' Safety
Meta
In March 2024, Meta's Vice President of Global Affairs, Nick Clegg, emphasized the company's commitment to fighting the opioid crisis. On X, he pledged that Meta would "help disrupt the sale of synthetic drugs online" and "educate users about the risks." According to Meta's Community Standards, "high-risk drugs" that have... Read more
Source 1
Source 2
03.15.2024
The Tech Transparency Project (TTP) found that on the same day as Clegg's promise, ads selling prescription opioids ran on Instagram, Facebook, Messenger, and Meta's Audience Network, which runs ads on partner apps. In total, TTP found 452 high-risk drug ads running on these platforms, while noting that this... Read more
Source 1
06.14.2024
Public Announcement
Trust and Safety
X (Twitter)
In 2022, Twitter signed the EU's Code of Practice on Disinformation, a voluntary set of commitments made by platforms and fact-checkers to decrease the spread of misinformation on social media.
Source 1
06.22.2022
In May 2023, Twitter withdrew from the agreement, despite facing similar obligations when the EU's Digital Services Act went into effect in August 2023. Later in the year, studies showed that posts containing misinformation were more prevalent and discoverable on X... Read more
Source 1
05.26.2023
Terms and Conditions
Trust and Safety
X (Twitter)
Since April 2023, X's policies have prohibited synthetic media, with the exception of satirical posts carrying proper disclaimers. The platform also provides a mechanism called Community Notes, which allows users to add context to "potentially misleading posts" through labels or disclaimers on tweets. Only when Community Notes are approved... Read more
Source 1
Source 2
04.01.2021
Elon Musk reposted to his personal X account an AI-generated video that altered an existing Harris campaign video so that the Vice President appeared to call herself unqualified and the "ultimate diversity hire" and to suggest that President Biden was senile. Musk posted the video to his 191 million followers... Read more
Source 1
07.26.2024
Terms and Conditions
Trust and Safety