01/19/2023 (Thu) 18:47
Facebook's moderator notes that some of the above claims "would already be violating" — an implicit admission that the CDC's opinion on the other claims would be a deciding factor in whether the platform would restrict such content. Facebook was clearly a willing participant in this process; moderators repeatedly thanked the CDC for its "help in debunking."
Among the claims vetted by the CDC was the assertion that "COVID-19 is man-made." The CDC told Facebook that this was "theoretically possible, but extremely unlikely."
For months, it was Meta policy to prohibit users from asserting that the pandemic may have originated from a lab leak. The platform revised this policy around the same time that the above email exchange took place.
By July 2021, the CDC wasn't just evaluating which claims it thought were false, but whether they could "cause harm."
Then, in November, the Food and Drug Administration granted emergency authorization for children to receive Pfizer's COVID-19 vaccine. Meta proudly informed the CDC that it would remove false claims—"i.e. the COVID vaccine is not safe for kids"—from Facebook and Instagram. Meta also provided the CDC with a list of new claims about vaccines and asked whether the government thought they could "contribute to vaccine refusals."
The CDC determined that this description applied to every claim on the list.
It's important to consider the ramifications. Meta gave the CDC de facto power to police COVID-19 misinformation on the platforms; the CDC took the position that essentially any erroneous claim could contribute to vaccine hesitancy and cause social harm. This was a recipe for a vast silencing across Facebook and Instagram, at the federal government's implicit behest.