Policy change led by an in-house expert on gender, technology + human rights
Bumble announced today a new policy that explicitly bans identity-based hate, strengthening the stance the company has previously taken in banning racist, transphobic, ableist, and body-shaming language. The company also announced that it will take action against those who intentionally submit false reports based on someone’s identity, including removing repeat offenders from its platform.
The company defines identity-based hate as content, imagery, or conduct that promotes or condones hate, dehumanization, degradation, or contempt against marginalized or minoritized communities with the following protected attributes: race, ethnicity, national origin/nationality, immigration status, caste, sex, gender, gender identity or expression, sexual orientation, disability, serious health condition, or religion/belief.
“As a platform rooted in kindness and respect, we want our members to connect safely and free from the hate that targets them simply for who they are,” said Azmina Dhrodia, Bumble’s Safety Policy Lead. “We want this policy to set the gold standard of how dating apps should think about and enforce rules around hateful content and behaviors. We were very intentional to tackle this complex societal issue with principles celebrating diversity and understanding how those with overlapping marginalized identities are disproportionately targeted with hate.”
Dhrodia, an expert on gender, technology, and human rights, joined Bumble in 2021. Dhrodia previously worked on violence and abuse against women online at the World Wide Web Foundation and Amnesty International, as well as with various tech companies to create safer online experiences for women and marginalized communities.
“Our moderation team will review each report and take the appropriate action. Part of rolling out this policy included required implicit bias training and discussion sessions with all safety moderators to unpack how bias can exist when moderating content,” Dhrodia said. “We always want to lead with education and give our community a chance to learn and improve. However, we will not hesitate to permanently remove someone who consistently goes against our policies or guidelines.”
Identity-based hate negatively affects many communities, and it is something that gender-nonconforming people, including trans and nonbinary people, increasingly face in online dating. A recent analysis by Bumble found that up to 90% of the user reports it received about gender-nonconforming people were ultimately dismissed by its moderators because no violation of Bumble’s rules was found. These reports frequently contained language about the reported user’s gender and speculation that the profile might be fake. Under the new rules, Bumble may take action against those who intentionally submit false or baseless reports solely because of someone’s identity.
The app uses automated safeguards to detect comments and images that go against its guidelines and terms and conditions, which can then be escalated to a human moderator to review. Up to 80% of community guidelines violations on Bumble are now proactively detected before someone reports them, which is part of the company’s commitment to reduce and prevent harm before it happens.
Members of Bumble’s community can also report someone for identity-based hate within the app’s Block + Report tool, either directly from someone’s profile or within their chat conversation.
Bumble is free and widely available on the App Store and Google Play. For more news and updates, visit www.bumble.com.