Rethinking Facebook’s News Feed Problems

Note: This post and all affiliated content are part of AEI’s Online Speech Project.

Like you, I use multiple means for receiving and sharing information. In the past week, I’ve used my computers, smartphone, other screens, and old-fashioned in-person human interaction to communicate with tech and energy consultants, business executives, farmers, elected officials, scholars, and even artificial intelligence robots. I live in these multiple channels to have rich relationships, learn from others, and share what I think I am learning. You live this way too.

But Facebook—which aspires “to give people the power to build community and bring the world closer together”—largely relies on a single instrument, its news feed, “to show people the stories they care about most, every time they visit.” That is an impossible task for a single portal, as the ways you and I live our lives demonstrate. And that constitutes a challenge for how Facebook uses its news feed.


A recent Wall Street Journal investigation highlights Facebook’s news feed paradigm problem. According to the newspaper, after the January 6 Capitol riot, Facebook “wanted to scale back how much political content it showed users.” At first, it “overhauled how it promoted political and health-related content” by demoting “posts on ‘sensitive’ topics as much as possible in the newsfeed that greets users when they open the app.”

But this had unintended consequences. Demoting sensitive posts decreased the exposure of publishers that Facebook deemed high quality and increased the exposure of content that users trusted less. Users noticed and interpreted the change as a rise in misinformation.

The resulting declining trust in Facebook became a real problem for its parent company, Meta, which wants it to be the place where people build community. So Meta pivoted and altered Facebook’s news feed algorithms to emphasize what people say they want to see rather than what people spend time with.

But this approach also came at a cost. Controversial political content provokes negative visceral reactions, and that negativity keeps people engaged with Facebook longer than thoughtful, positive content does. Thus, lowering the visibility of controversial and negative content caused people to spend less time on Facebook, which hurt the bottom line of a company whose advertising-based business depends on holding people’s attention.

Facebook’s challenge isn’t to develop better algorithms but to decrease its overreliance on the news feed itself. The news feed is, in effect, the single window that Meta wants each user to look through to see his or her community. However, each of us has multiple communities in life, and we prefer to engage with each differently and at different times. Facebook’s news feed becomes a lowest common denominator across our different needs and preferences. That business model leaves a lot of gaps in service—and money on the table—for rivals that are willing to provide users with multiple windows, each of which is adaptable to the user’s community needs.

What might a multi-windowed Facebook look like? It could be multiple Facebook interfaces that tap a common database, but in different ways according to user preferences. Or it could be a common interface with multiple options for what the news feed shows.

To check my thinking, I asked OpenAI’s Generative Pre-trained Transformer chatbot (ChatGPT) to describe Facebook’s three biggest problems. ChatGPT is interesting because it represents a form of internet consensus, owing to the way its algorithms match inquiries with knowledge compiled from the internet at large. ChatGPT’s answer? Facebook’s three biggest problems are data privacy and security, fake news, and misuse of Facebook’s platform—namely, political manipulation, hate speech, and inciting violence.

Two of these three problems are created in part by Facebook’s reliance on a single news feed. For example, fake news is a problem not because actual news is fake—although sometimes it is—but because overconfidence and other biases lead people to see some accurate news as fake. People’s readiness to consider alternatives to their current beliefs varies over time, making a single portal consistently suboptimal.

It would be hard for Facebook to step away from its news feed paradigm. The approach has been financially successful, and my sense is that the people who run content moderation at social media companies are often confident that they are better than users at sorting fact from fiction and sincerity from manipulation. But if Meta is to solve its content moderation challenges, it is going to have to give users more options.
