The Oversight Board has issued its response to Meta’s request to help it reevaluate its COVID-19 misinformation rules. The social network first asked the Oversight Board to weigh in on the rules last summer, noting that its policies had resulted in an “unprecedented” number of removed posts but that the pandemic had “evolved” considerably since the rules were first put in place.
In its policy advisory opinion, the Oversight Board said that Meta should continue to remove false claims about the pandemic that are “likely to directly contribute to the risk of imminent and significant physical harm,” but that it should “reassess” the specific claims that qualify for removal. The board also recommended that Meta make it easier for external researchers to study misinformation, and study how its own algorithms could contribute to the spread of harmful misinformation.
When Meta first asked the Oversight Board about its COVID-19 misinformation rules, many speculated that the company was looking to soften its stance. Prior to the pandemic, the company rarely removed users' posts on the basis of misinformation. Instead, it relied on fact checkers to evaluate such questionable content, and information rated as false was down-ranked to make it less visible.
But at the start of the pandemic, Meta said it would remove misinformation that health experts said was likely to lead to harm. The result, as the Oversight Board notes, is that 27 million posts were removed from Facebook and Instagram between March 2020 and July 2022. The company currently lists 80 specific claims that qualify for removal, including allegations that COVID vaccines cause magnetism and that the pandemic is linked to 5G technology.
In its advisory, the Oversight Board said that as long as the World Health Organization designates COVID-19 as a global health emergency, Meta should continue to remove the most harmful misinformation. But it noted that the company has not consulted with public health officials or other experts to assess whether all the claims it removes continue to pose a serious threat.
“Should Meta find that any claims are no longer false or no longer ‘likely to directly contribute to the risk of imminent physical harm,’ such claims should no longer be subject to removal under this policy,” the board writes. And while the Oversight Board didn’t attempt to weigh in on any of the specific claims, it did say that Meta should consult with a range of experts, including those versed in virology, disinformation, human rights and “freedom of expression.”
Notably, the Oversight Board is also, once again, pushing Meta to examine its own role in helping misinformation spread. The board recommended the company “commission a human rights impact assessment of how Meta’s newsfeed, recommendation algorithms, and other design features amplify harmful health misinformation and its impacts.” The Oversight Board made a similar recommendation in the wake of January 6th, saying that Meta should look at how its own decisions contributed to the insurrection, but the company declined to commit to new research.
The Oversight Board touched on another thorny topic for Meta: the ability of external researchers to study what happens on Facebook, which is particularly consequential for misinformation research. The board pointed to reports that Meta is getting ready to disband CrowdTangle, the analytics tool used by researchers and journalists, and said that “research tools should be strengthened rather than discontinued.”
“Meta should institute a pathway for external researchers to gain access to non-public data to independently study the effects of policy interventions related to the removal and reduced distribution of COVID-19 misinformation, while ensuring these pathways protect the right to privacy of Meta’s users and the human rights of people on and off the platform,” the board wrote.
As with all Oversight Board recommendations, Meta isn’t obligated to change any of its policies, but it is required to respond to each recommendation within 60 days.