Companies Must Provide Accurate and Transparent Information to Users

This is the third installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.

Imagine sharing information about reproductive health care on social media and receiving a message that your content has been removed for violating a policy intended to curb online extremism. That's exactly what happened to one person using Instagram who shared her story with our Stop Censoring Abortion project.

Meta's rules for "Dangerous Organizations and Individuals" (DOI) were supposed to be narrow: a way to prevent the platform from being used by terrorist groups, organized crime, and those engaged in violent or criminal activity. But over the years, we've seen these rules applied in far broader -- and more troubling -- ways, with little transparency and significant impact on marginalized voices.

EFF has long warned that the DOI policy is opaque, inconsistently enforced, and prone to overreach, and others have raised similar criticisms, noting that the policy disproportionately censors marginalized groups.

Meta has since added examples and clarifications to this and other policies in its Transparency Center, but enforcement in practice still leaves users in the dark about what is and isn't allowed.

The case we received illustrates just how harmful this lack of clarity can be. Samantha Shoemaker, an individual sharing information about abortion care, posted straightforward facts about accessing abortion pills.

Instead of allowing her to facilitate informed discussion, Instagram flagged some of her posts under its "Prescription Drugs" policy and removed others under the DOI policy -- the same set of rules meant to keep violent extremism off the platform.

We recognize that moderation systems -- both human and automated -- will make mistakes. But when Meta equates medically accurate, harm-reducing information about abortion with "dangerous organizations," it underscores a deeper problem: the blunt tools of content moderation disproportionately silence speech that is lawful, important, and often life-saving.

At a time when access to abortion information is already under political attack in the United States and around the world, platforms must be especially careful not to compound the harm. This incident shows how overly broad rules and opaque enforcement can erase valuable speech and disempower users who most need access to knowledge.

And when content does violate the rules, it's important that users are told accurately why. An individual sharing information about health care will understandably be confused or upset to learn that they have violated a policy meant to curb violent extremism. Moderating content responsibly means offering users as much transparency and clarity as possible. As outlined in the Santa Clara Principles on Transparency and Accountability in Content Moderation, users should be able to readily understand which of their posts was removed, which specific rule it was found to violate, and how they can appeal the decision.

If you find your content removed under Meta's policies, you do have options: you can appeal the decision through the platform's in-app process, and if that appeal is denied, you can request review from the Oversight Board, which accepts appeals of content decisions on Facebook and Instagram.

Abortion is health care. Sharing information about it is not dangerous -- it's necessary. Meta should allow users to share vital information about reproductive care, and it must ensure that users receive clear information about how its policies are being applied and how to appeal seemingly wrongful decisions.

This is the third post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion
