Technology

Meta removed posts about gay relationships, proving content moderation is the wrong approach

In April, Meta was forced to answer for itself after removing an Instagram post honoring historical same-sex relationships in Brazil. The removed post was not sexual in nature and did not contain material harmful to children. It captured a snapshot of a historical moment when gay women were forced to hide their relationships as “roommates” or “couple friends,” and their love was erased from the public record. Meta removed the content anyway.

Meta cited its hate speech rules. The Oversight Board later admitted what should have been obvious from the beginning: the Brazilian case was an example of over-enforcement against a marginalized community, driven by automated systems that could not read the context, the reclaimed language, or even the full post itself. The content was restored only after external intervention and advocacy from the LGBTQ+ community.


This case might be dismissed as a minor content moderation error, but it is a clear warning about what happens when lawmakers push platforms to police content instead of fixing design. Across the country, states are scrambling to “protect children online” by restricting access to social media or pressuring companies to remove ill-defined “harmful” content. But what happened in Brazil shows the human cost of that approach.

When platforms are incentivized to remove speech quickly and at scale, they do not become better judges of nuance. Moderation becomes a blunt instrument, and the first people affected are those whose stories require human context and empathy to understand.

If lawmakers really want to protect children, they should stop asking platforms to decide what content is acceptable and start regulating the core design choices that cause harm in the first place: endless scrolling, engagement-based recommendations, and surveillance-driven feeds.


That distinction matters, especially for LGBTQ+ kids and other underserved communities, such as neurodivergent youth. LGBTQ+ young people are more likely than their peers to rely on online spaces to find community, information, and support, often because those things are unavailable or unsafe at home or school. But they are also more likely to end up in unsafe online interactions: harassment, grooming, doxxing, or being pushed into high-risk spaces they didn’t seek out.

In Australia, after a ban on social media for anyone under the age of 16 was passed, disability rights advocates noted that Autistic youth risked being cut off from the only support and peer networks available to them.

Recommender systems don’t understand vulnerability, but they do understand engagement. When a curious or lonely kid searches for community, platforms often respond by promoting whatever keeps them clicking. Too often, that means increasingly sexualized content, adult strangers, extremist propaganda, or predatory accounts that know how to exploit isolation.

Endless scrolling makes disengagement harder for young people, according to the Electronic Privacy Information Center, and even more so for those in vulnerable communities. Algorithmic “friend” or “account” suggestions blur the boundaries between youth and adults. Weak default settings make it hard to block, mute, or step away.

Young people, not just LGBTQ+ youth, are exposed to harm online because platforms are designed to capture attention, not protect users. Parents are right to be concerned and to fight for change. But the content-based framework misses the real problem.

The biggest dangers kids face online don’t come from a single bad post that slips past moderation, but from automated systems that push content to kids who didn’t ask for it, connect them to people they don’t know, and keep them scrolling long after the warning signs appear.

Policymakers at both the federal and state levels need to design regulations that directly address those risks. Age-appropriate design codes don’t tell platforms what speech to allow, but they can tell platforms how to behave. Design codes can require safer defaults, such as limits on behavioral profiling, robust blocking and reporting tools, reduced amplification of unsolicited recommendations, and friction that slows virality and compulsive use.

This approach promotes product refinement rather than mandates that run afoul of First and Fourth Amendment rights. Design codes reduce the chance of a curious or lonely kid doing what I did: going looking for community and being exploited by systems that didn’t care who I was.

Age-appropriate design codes offer a way out of this mess. By regulating how platforms are built rather than what people are allowed to say, design code rules reduce harm without turning companies into cultural censors. They don’t require platforms to interpret reclaimed slurs, queer history, or political rhetoric. Instead, companies are required to stop engineering for addiction and risk.

We don’t need more content takedowns or platform bans. We need fewer dangerous systems. If we are serious about protecting children online, especially those who are already most vulnerable, the case in Brazil reminds us where to start.

This article expresses the author’s opinion.

Lennon Torres is the Director of Movement at the Heat Initiative and Co-Founder of Attention Studio.
