GLAAD, the world’s largest lesbian, gay, bisexual, transgender, and queer (LGBTQ) media advocacy organization, has responded to a new decision announced by the Oversight Board, the body that makes pseudo-independent rulings on Facebook, Instagram, and Threads content moderation cases. GLAAD’s analysis of the ruling found clear disagreement within the Oversight Board on the cases: the issued decision permits two posts containing anti-trans content, while notifying Meta that it must also remove anti-trans rhetoric it added to its hate speech policy in January. The Washington Post has reported that top Meta executives told the Oversight Board that the ruling should be “treated carefully” … “given the fraught political debate” about the rights of trans people in the U.S.
The Board ruled that the two harassing posts, one intentionally misgendering a transgender woman and the other intentionally misgendering a transgender girl, should remain (one on Facebook, the other on Instagram). Both were reshared by a prominent anti-LGBTQ account, drawing more visibility and harassment. The ruling expressly notes that the posts “misgender identifiable trans people,” yet asserts that the posts do not “represent bullying and harassment,” instead characterizing them as “public debate.” Although the majority of the Oversight Board supported Meta’s decision to allow the content, a minority expressly noted (in concurrence with GLAAD’s public comment submitted for the Board’s consideration) that the posts violate Meta’s Bullying and Harassment policy. The policy prohibits targeted misgendering of transgender people, stating: “all private minors, private adults (who must self-report), and minor involuntary public figures are protected from… claims about… gender identity.” Additional background is below, including the company’s caveats about who is protected by the policy and who is not.
Acknowledging Meta’s use of an anti-trans trope in its January 7, 2025 policy changes, and urging the company to remove it, the Oversight Board states: “Finally, the Board is concerned that Meta has incorporated the term ‘transgenderism’ into its revised Hateful Conduct policy. To ensure Meta’s content policies are framed neutrally and in line with international human rights standards, Meta should remove the term “transgenderism” from the Hateful Conduct policy and corresponding implementation guidance.” (“Transgenderism” is a popular right-wing anti-trans trope intended to falsely imply that being trans is an ideology.)
In September 2024, GLAAD submitted an official public comment regarding the two cases (“Gender Identity Debate Videos”), which address anti-transgender hate and harassment content. The Board’s ruling acknowledges Meta’s recent sweeping changes to its Hateful Conduct policy, including the removal of many policy protections for transgender people. Among the many rollbacks, the company said it will now allow calls “for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing transgender rights” (e.g. “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism [sic] and homosexuality [sic]”). GLAAD has spoken out extensively about how this policy language is itself dehumanizing hate speech that attempts to normalize anti-LGBTQ bigotry.
While Meta rolled back LGBTQ protections in its Hateful Conduct policy in January, the policy continues to state that it prohibits hate and harassment on the basis of sexual orientation and gender identity. Even by Meta’s own logic, and as a minority of the Oversight Board expressly agreed, the posts clearly violate Meta’s policy, which still prohibits targeted misgendering of transgender people (i.e. “claims about gender identity”). However, the company has stated that it will now primarily rely on user reports to identify “less severe policy violations.”
While there are many interesting and complex aspects to the case, this primary aspect above (Meta’s existing policy protecting people from “claims about … gender identity”) should have made the Oversight Board’s adjudication straightforward. Arcane facets of Meta’s policy enforcement considerations created disagreement among the Board, including whether the targeted subjects must self-report the accounts that target them, and whether the subjects should be considered public figures. (The policy only protects: “private minors, private adults (who must self-report), and minor involuntary public figures.”) GLAAD has long advocated for the removal of these distinctions and requirements — everyone should be protected.
Read GLAAD’s full public comment here.
As documented in GLAAD’s 2024 SMSI report, Meta’s Facebook, Instagram, and Threads are largely failing to mitigate anti-LGBTQ hate and harassment. Meta’s enforcement failures have drawn longstanding concern from the Oversight Board, trust and safety experts, human rights advocates, and Meta’s shareholders.
A 2024 GLAAD report found that Meta is failing to moderate extreme anti-trans hate across Facebook, Instagram, and Threads. The fourth annual GLAAD Social Media Safety Index & Platform Scorecard (SMSI) was released in June 2024. The report reviewed six major platforms on 12 LGBTQ-specific indicators; all received low or failing scores: TikTok: 67%, Instagram: 58%, Facebook: 58%, YouTube: 58%, Threads: 51%, and X: 41%.
Key findings of the 2024 SMSI include:
- Anti-LGBTQ rhetoric on social media translates to real-world offline harms.
- Anti-LGBTQ hate speech and disinformation continue to be an alarming public health and safety issue.
- Platforms are largely failing to mitigate this dangerous hate and disinformation and inadequately enforce their own policies.
- Platforms disproportionately suppress LGBTQ content, including via removal, demonetization, and shadow-banning.
- There is a lack of effective, meaningful transparency reporting from the platforms.