Rewriting the Rules: Concerns Surrounding Meta’s Shift to Community Notes

On January 7, 2025, Meta, the owner of Facebook, Instagram, WhatsApp, and Threads, announced that it would end its third-party fact-checking program and replace it with a community-based program called Community Notes, similar to the one already used by X (formerly Twitter). While Meta has justified this change as a return to its roots and a better fit with the company’s values, it raises concerns about potential issues under Section 230 of the Communications Decency Act.
Meta’s fact-checking program was instituted to identify misinformation on its social media platforms, allowing notice and additional context to be added to content labeled as misinformation. Meta relied on technology to flag potential misinformation, but also on a dedicated team of fact-checkers who identified content for review and determined its accuracy. This review took place independently of Meta and included a lengthy verification process, which often entailed verifying sources, reviewing public data, and authenticating media. Once content was labeled as misinformation, its distribution was reduced, with the overall goal of limiting the number of people exposed to it.
Now, Meta has repositioned itself as part of an effort to reduce content restrictions and political censorship. The change stems partly from a desire to reduce content-moderation mistakes, but largely from the recent political shift under the new presidential administration and the belief that fact-checkers have been too “politically biased.” The Community Notes feature leaves the determination of content accuracy to the platform’s users, with decisions on content “written and rated by contributing users.” This means that misinformation on Instagram or Facebook will be flagged only if contributing users choose to flag it. Meta itself acknowledged the risk involved in this change, conceding that it will catch less “bad stuff” but that fewer “innocent people” will have their posts taken down.
However, despite lacking a formal system for identifying misinformation, Meta is protected from liability for third-party content under Section 230 of the Communications Decency Act. Section 230 shields internet platforms acting as a “provider or user of an interactive computer service” from liability for user content. This means that as long as Meta only displays content created entirely by third parties, an individual cannot successfully sue it, even if a post or comment on its platform is highly misleading or destructive. However, immunity under Section 230 applies only if the service provider is not also an “information content provider.” To qualify as an information content provider, Meta would have to be “responsible, in whole or in part, for the creation or development” of the misinformation on its platforms.
To some, Meta’s social media platforms may appear to be spaces where individuals can create and share content without interference. Others, however, may view Meta’s control over what appears on its platforms as an indication that the company is, in fact, responsible for the information being presented to users. Ultimately, the key factor in determining whether Section 230 immunity applies is whether Meta actually develops the content. Although Meta plays an active role in what content is shown on its platforms, it is still not directly responsible for the creation of the information posted by users.
This change will likely lead to increased misinformation across Meta’s social media platforms. Fact-checking was implemented to verify the accuracy of information and to ensure that truth was upheld on these platforms by relying on professionals to assess content. Under Community Notes, not all information will necessarily be inaccurate, but users may find it harder to trust the content they see. The change could also invite more legal challenges for Meta, but as long as Meta continues to operate as a service provider rather than an information content provider, it will likely remain immune from such liability.
Yasmeen Halabi
Yasmeen attended the University of Central Florida for college, where she majored in Clinical Psychology and minored in Cognitive Sciences. In law school, Yasmeen serves as a Dean’s Fellow, Treasurer of the Asian American Law Students Association, and Director of Pro Bono and Community Service for Women in Law.