In the time leading up to the Jan. 6 Capitol “insurrection,” engineers and other experts in Facebook’s Elections Operations Center were throwing tool after tool at claims the company deemed false in an effort to squash “misinformation.”
Data captured the following morning, Jan. 7, found that Facebook’s artificial intelligence tools had struggled to address a large portion of the content related to the storming of the Capitol.
Facebook took down the original Stop the Steal group in November, but it did not ban content using that phrase until after the Jan. 6 riot. The company said it has since been expanding its work to address threats from groups of authentic accounts, but that one big challenge is distinguishing users coordinating harm on the platform from people organizing for social change.
Both the inability to firm up policies for borderline content and the lack of plans around coordinated but authentic misinformation campaigns reflect Facebook’s reluctance to work through issues until they are already major problems, according to employees and internal documents.
According to the report, “Some Facebook staffers argue that what is, in fact, a reactive approach from the company sets Facebook up for failure during high-stakes moments like the events of Jan. 6.”
Now, questions are still swirling regarding Zuckerberg’s Facebook and its contribution to fueling the riot at the Capitol on Jan. 6.