
Global Social Media Bans Will Hurt Vulnerable Communities
Anmol Irfan is a Muslim-Pakistani freelance journalist and editor. Her work aims at exploring marginalized narratives in the Global South with a key focus on gender, climate and tech. She tweets @anmolirfan22
In early January, Meta made a sudden and unexpected announcement that it would be ending its third-party fact-checking model in the US, saying that its approach to managing content on its platforms had “gone too far.” Instead, Meta is moving to a Community Notes model written by users of the platform, similar to X. The announcement came amidst other, larger changes to the platform’s hate speech and censorship rules, which will be applied globally, with the statement noting that the platform will be “getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate.”

But while the announcement focused on the idea of promoting “free speech”, critics pointed out that it didn’t actually detail how those changes would take place. News outlets like NPR reported that Meta now allows users to call gay and trans people “mentally ill” and refer to women as “household objects and property.” Those are just some of the more obvious changes in a larger shifting power dynamic that over the last year has slowly made clear that the digital realm is increasingly unsafe. With the monopoly on digital communication and connection in the hands of a few Big Tech platforms, US-based companies like X and Meta have enough power and reach across the world not just to impact everyday communication but to influence social dynamics and even global politics. Facebook’s facilitation of the Rohingya genocide is not news, but it is an example of how the safeguards these platforms have supposedly had in place for years haven’t been working, and these changes may worsen the situation further, particularly for vulnerable groups.
Is Social Media Becoming More Dangerous?
Across the United States and the world, digital spaces that are already unsafe for many marginalized groups are predicted to become more exclusionary, and in many ways more dangerous.
“When people talk about tech policies, when they talk about vulnerable communities, they have a very narrow perspective of the US-based minority,” says attorney Ari Cohn, who works at the intersection of speech and technology. That excludes the culturally nuanced, global conversation that is needed to safeguard vulnerable populations worldwide.
With fewer fact-checkers – even in just the US – and looser controls online, these platforms are creating digital spaces that account even less for cultural nuances and needs than they did before, which can further endanger people in the Global South. Because these decisions are made by tech company leadership in the US, many vulnerable groups across the world aren’t even factored into the conversation about safety or risk.
“With the tech landscape generally, the regular terms we acknowledge or are worried about are non-consensual sexual or intimate images, but the definition of intimate is something we need to work around. So for example, if a picture of a couple is leaked from Pakistan, to Meta it’s just a picture of people holding hands, but for us the context will make it different, put those people at risk,” says Wardah Iftikhar, Project Manager at SHE LEADS, which focuses on eliminating online gender-based violence.
It’s these cultural nuances and the risks posed to marginalized groups that make it essential to understand just what this push for “free speech” really means. Yael Eisenstat, an American democracy activist and technology policy expert, summarizes three changes that she says show these directives aren’t about free speech and risk contributing to more hate and extremism: first, the algorithm on platforms like X favors Elon Musk and the people he prioritizes; second, previously banned users have been let back onto the platform; and third, the new verification systems prioritize people who can pay, which further skews power into the hands of people who have money.
“These changes combined are important because they are the opposite of actually trying to foster free open speech, tilting it towards people willing to pay, or people the owner is willing to prioritize, while at the same time making it clear that they no longer want to engage with civil society and outside experts,” Eisenstat shares, emphasizing how this disparity increases further in the Global South, in countries where X’s (formerly Twitter’s) $8 verification fee can represent a significant amount of money for many people.
The risk of false, and possibly dangerous, information further increases with the move away from fact-checking. “If there were a fair community notes system, I could see that this could be a better solution than the fact-checking, but you have to take into account that all or most of the community notes in the past which countered a claim referred mostly to these fact-checker organizations and their articles, which were paid for by Meta, and now they’re gone,” says Berlin-based writer and lecturer Michael Seeman, whose work focuses on the issues of digital capitalism.
It also further silos users within their own information bubbles online, which can lead to radicalization as well, particularly as Eisenstat points out that in the case of X, many of those allowed back on the platform were extremists and white supremacists. Iftikhar says that social media platforms have the power to let us remain in our silos.
“For people supporting Palestine they thought everyone was supporting Palestine and people supporting Israel thought everyone was supporting Israel and people in Palestine were being offensive,” she says.
Big Tech & Global Autocracy
Of course, there is the actual shadowbanning of pro-Palestinian content that took place across many of Meta’s platforms, which in the larger picture also raises questions about what the future of these platforms’ relationships with global governments will look like – particularly those governments that want to exercise control over their citizens.
Dr Courtney Radsch, a journalist, scholar and advocate focused on the intersection of technology, media, and rights, points out that we’re already seeing the ripple effects of these policies globally through the de-amplification of journalists and Meta’s news ban in Canada.
“This leads to an increase in harassment of people using these services, especially people who are already marginalized. It has led to a rise in extremist and right-wing populism being expressed on these platforms around the world, and led to what many see as a degradation of these platforms due to a rise of AI-generated crap that flourishes on them,” Radsch shares.
The monopoly of these platforms over communications also means that governments only need to ban access to one or two platforms to completely silence any dissenting voices or citizen-led communication, and as is clear from Meta’s catering to Trump, they could just as easily cater to the demands of other governments as well.
“They no longer put a strong emphasis on filtering out the mis- and disinformation so it’s easy for autocracies to use platforms as a channel to augment their voice and send their message across the board,” says Xiaomeng Lu, director of Geo-technology at Eurasia Group.
Decentralising Control
However, Eisenstat doesn’t believe that misinformation should be made illegal.
“The question I think is more important is not how these companies should moderate misinformation, but what is it about their design and structures that misinformation and salacious content is being amplified more than fact-based information,” she says.
It’s important to raise the right questions around tech policy and cut through the noise these platforms are creating in order to come up with long-term solutions that can create more decentralized control over digital spaces. Radsch also believes that regulations shouldn’t focus on content.
“There will always be propaganda, there has been throughout history, and platforms monetize this, they monetize engagement. Polarization and extremism do well, and the issue is less about a piece of misinformation and more about industry operations that have arisen because it’s so profitable and because algorithms are designed in a way that makes platforms money,” she says.
Cohn also points out that too much regulation may have its own issues. “There is room to worry about whether there’s too much centralized power over what is fact,” he says, adding, “I think the answer lies somewhere else, in decentralization, like the AT Protocol that Bluesky operates on. When people have the easy ability to build a network that taps into a protocol that a lot of other people are using, it becomes a lot more difficult to tap into that or control that.”
Radsch further believes that the dominance of these platforms needs to be broken up, and also needs to be seen alongside the rise of AI dominance, which she says cannot be separated from what we’re seeing in terms of social media platforms consolidating power.
The answers to curbing the power of platforms that have grown so big, and have so much control across the globe, aren’t easy – and as authoritarianism rises around the world, they may only get more difficult. But the first step can come from changing the way we ask the questions in the first place: questioning what drives these platforms instead of only questioning the content.
