How to get to the root of the social media crisis

Section 230 reform isn’t going to solve our problems.

Source: Independent Media Institute

This article was produced by Economy for All, a project of the Independent Media Institute. Leslie Stebbins is an independent research librarian and the director of research4Ed. She is the author of Building Back Truth in an Age of Misinformation (Rowman & Littlefield, 2023).

As a research librarian, my professional life has focused on connecting people to reliable information. In the last thirty years, I have been stunned to watch as the rise in digital information that initially held so much promise in providing people with diverse and trustworthy content has instead spiraled into vast wastelands of clickbait, advertising, misinformation, and toxic content, while at the same time, reliable information is often buried or behind a paywall.

Four years ago, I started looking into the problem of online misinformation and toxic behavior. With support from the Sloan Foundation, I waded through thousands of policy documents and research articles to identify the most promising solutions to our information crisis. When I started on this work, people were excited to hear about it but often threw up their hands in defeat. They said, “You can’t really fix this problem without threatening free speech rights!”

Our attention has remained focused on free speech, but this is not where the answers to our social media crisis lie. We are currently fixated on Section 230 of the U.S. Communications Decency Act of 1996, which designates platform companies as services rather than publishers and gives them legal immunity for most content posted on their platforms: Think phone company, not newspaper.

The problem with revising Section 230 is that if we turn platform companies into publishers and hold them accountable for content they promote, we would start seeing massive amounts of censorship because these companies would err on the side of caution and remove potentially controversial posts. But we need to understand that large social media platforms are not like phone companies or newspapers. They are a different animal altogether.

This term the Supreme Court will decide on two cases—Gonzalez v. Google and Twitter v. Taamneh—that are seeking to broaden the scope of liability under Section 230 for the content platform companies promote. If successful, these cases would jeopardize our right to free speech. The Court will likely hear two other cases from Texas and Florida. These two cases are going after Section 230 from the other side: questioning whether platforms should be allowed to censor political content.

But the focus on Section 230—the issue of free speech—is a red herring.

In my research, I organized the most promising solutions into six areas where we need to move forward. I was also able to pinpoint two “big picture” takeaways. First, possibly the biggest lie being told about misinformation and toxic online content is that the crisis is uncontainable. It is not. Second, our attention has become hyper-focused on fixing our social media platforms as they are currently designed, using band-aid content moderation strategies while trying to balance free speech rights. But this is the wrong approach. It is the underlying intentional design of these platforms that is causing much of our information crisis. We need to change how our social media platforms are designed in order to build better, healthier digital public spaces. We need to go after the problem at its roots.

Legal scholars view Facebook, Google, Twitter, TikTok, and a few other companies as controlling the infrastructure of the digital public square. This infrastructure is vital to the flow of information and ideas in our society. Like clean water, access to reliable information should be a human right. New technologies have disrupted the structures, imperfect as they were, that ensured access to a free press and the trustworthy information essential for our democracy and healthy public discourse.

Our current online spaces have contributed to declining trust in institutions and the media, and our access to reliable information is decreasing. Even Google has strayed and now devotes roughly half of its first-page search results to companies that Alphabet, Google’s parent company, owns. Teenagers are turning to TikTok to get information. Hashtags such as #mentalhealth and #anxiety have tallied up tens of billions of views, but the primarily younger audience seeking help is instead exposed to misinformation, bullying, fraud, and a system expertly designed to keep them online.

Efforts to change the design of platforms can move forward on two interconnected fronts. First, regulations need to target the root causes of our information disorder, specifically the design features that are causing harm. The current business model rests on extracting and using personal data to microtarget individuals, and on an advertising system that incentivizes and promotes misinformation and vitriol to keep people engaged. This makes billions of dollars for the tech companies, giving them little incentive to change. Second, by requiring these design changes and weakening the financial incentives, we can chip away at the vast concentration of power a few private companies have over our public discourse.

New structural requirements can be prophylactic. We can better serve the public interest by changing the current business model and insisting on using algorithms and tools that are transparent in their designs and open to oversight. The design can shift from promoting content that favors profit-maximizing personalized engagement to designs that promote reliable content and enhance public safety and privacy. We need to strategically design algorithms to counter systemic bias, racism, and inequity that are baked into our data and machine learning systems. In my research, I found that many exciting new tools are already at our disposal that can improve our digital spaces, but the current platform owners have not chosen to use them. They are not looking for solutions that will interfere with their bottom line.

By addressing design issues, we can sidestep infringing on rights to free expression and changes to Section 230 while we create healthier digital spaces. Not a simple task, to be sure. Content moderation will still need to be a part of the process to remove illegal content, such as child sexual abuse material, and incitement to imminent lawless action. Platform companies and nonprofits can be encouraged to experiment and use flexible design practices, but with transparency and oversight. We also will need to create new platforms that can better serve the public interest if current platforms are unwilling to shift their practices. Our democracy is at stake.
