Russia is the king of disinformation on Facebook, the company says

Facebook’s report, published Wednesday, shows how foreign and domestic covert influence operators have shifted their tactics and grown more sophisticated in response to efforts by social media companies to crack down on fake accounts and influence operations.

Facebook has removed more than 150 networks of coordinated fake activity since 2017, the report said. Twenty-seven networks have been linked to Russia, and 23 to Iran. Nine originated within the United States.

The US remains the primary target for foreign influence campaigns, Facebook’s report said, highlighting 26 such efforts by a variety of sources from 2017 to 2020. (Ukraine follows as a distant second.)

However, during the 2020 election season, it was US domestic actors, not foreign operatives, who were increasingly responsible for sowing disinformation. In the run-up to the election, Facebook removed as many American networks targeting the US with so-called coordinated inauthentic behavior (CIB) as it did Russian or Iranian networks, the company’s report said.

“Most notably, one of the CIB networks we found was operated by Rally Forge, a US-based marketing firm, working on behalf of its clients including the Political Action Committee Turning Point USA,” the report said. “This campaign leveraged authentic communities and recruited a staff of teenagers to run fake and duplicate accounts posing as unaffiliated voters to comment on news Pages and Pages of political actors.”

That campaign was first reported by The Washington Post in September 2020. In a statement to the Post at the time, a Turning Point spokesman described the effort as “sincere political activism conducted by real people who passionately hold the beliefs they describe online, not an anonymous troll farm in Russia.” The group declined to comment in response to a request from CNN.

Another US network, which Facebook announced it removed in July 2020, had ties to Roger Stone, a friend and political adviser to former President Donald Trump. The network maintained more than 50 Facebook accounts, 50 pages and four Instagram accounts, reaching more than 260,000 Facebook accounts and more than 60,000 Instagram accounts. (After Facebook’s takedown, Stone shared news of his banning on the alternative social media site Parler, along with a statement: “We have been exposing the railroad job that was so deep and so obvious during my trial, which is why they must silence me. As they will soon learn, I cannot and will not be silenced.”)

The presence of fake and misleading content on social media became the dominant story dogging tech platforms including Facebook, Twitter and YouTube following the 2016 election, as revelations surfaced about Russia’s attempts to meddle in the US democratic process. By posing as US voters, targeting voters with misleading digital advertisements, creating false news stories and other techniques, foreign influence campaigns have sought to sow division within the electorate.

The discovery of those campaigns has led to intense political and regulatory pressure on Big Tech and also raised persistent questions about the industry’s disproportionate power in politics and the wider economy. Many critics have since called for the breakup of large tech companies and legislation governing how social media platforms moderate the content on their websites.

Tech companies such as Facebook have responded by hiring more content moderators and establishing new platform policies on fake activity.

In a separate announcement Wednesday, Facebook said it is expanding the penalties it applies to individual Facebook users who repeatedly share misinformation debunked by its fact-checking partners. Currently, when a user shares a post containing debunked claims, Facebook’s algorithms demote that post in the news feed, making it less visible to other users. Under Wednesday’s change, repeat offenders risk having all of their future posts demoted.

Facebook had already been applying blanket account-level demotions to pages and groups that repeatedly share fact-checked misinformation, it said, but Wednesday’s announcement covers individual users for the first time. (Politicians’ accounts are not covered by the change because political figures are exempt from Facebook’s fact-checking program.)

But even as Facebook has improved its moderation efforts, many covert purveyors of misinformation have evolved their tactics, the report said. From creating more tailored and targeted campaigns that can evade detection to outsourcing their campaigns to third parties, threat actors are trying to adapt to Facebook’s enforcement in an ever more complex game of cat-and-mouse, according to the company.

“So when you put four years’ worth of covert influence ops together, what are the trends?” Ben Nimmo, a co-author of the report, wrote on Twitter Wednesday. “More operators are trying, but more operators are also getting caught. The challenge is to keep on advancing to stay ahead and catch them.”


