By Donna Lu
Facebook has announced plans to reduce misinformation and foreign interference ahead of the UK general election. But its efforts are unlikely to make much of a difference.
On 12 December, voters across the UK will go to the polls in the country’s first pre-Christmas election in nearly a century. As the fallout from the 2016 Brexit referendum continues, misinformation and electoral interference will be front of mind.
“We will set up a dedicated operations centre to bring together the teams who monitor activity across our platforms,” wrote Richard Allan, Facebook’s vice-president of policy solutions, in The Telegraph before the December date was announced this week.
Facebook’s measures include removing fake accounts and reducing the reach of articles that have been debunked by independent third-party fact checkers. The firm announced a similar approach for the 2020 US presidential election.
But the effectiveness of these measures is dwarfed by changes the firm quietly made to its advertising policies in early October. Facebook previously prohibited any advertising that contained “deceptive, false, or misleading content”. Now the ban is only for “ads that include claims debunked by third-party fact checkers”.
This is a problem for two reasons. The first is that Facebook’s two UK fact-checking partners have limited resources to monitor the more than 1 billion pieces of content posted to the platform daily. Full Fact, the larger of the two partners, has a team of fewer than 10 people.
The second is that under Facebook’s rules, ads from politicians or political parties aren’t even eligible for fact checking in the first place.
“We do not believe it should be our role to fact check or judge the veracity of what politicians say,” wrote Allan, who is also a member of the UK’s House of Lords.
The rationale, according to a Facebook fact sheet, is grounded in the firm’s “fundamental belief in free expression” and a “respect for the democratic process”. But more than 250 of Facebook’s own staff have openly challenged the position.
“[The policy] doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy,” they wrote in an open letter.
Facebook founder Mark Zuckerberg was grilled last week by US member of Congress Alexandria Ocasio-Cortez, and didn’t give a direct answer as to whether demonstrably false ads would be removed from the platform.
Twitter, meanwhile, has announced that it is banning all political advertising worldwide. Ads that “advocate for or against legislative issues of national importance”, such as climate change, healthcare and national security, will be prohibited, said Vijaya Gadde at Twitter.
Beyond misinformation, there is also the issue of what personal data is being used to target political advertising at potential voters.
Facebook and other social media companies need to improve their transparency around the use of data for advertising, says Ailidh Callander at Privacy International, a data privacy charity.
Earlier this month, a Privacy International report found that Facebook has increased transparency for political ads in only 35 countries.
“You have 80 per cent of the world essentially where they’re not making any effort whatsoever,” says Callander. This includes no requirement for political advertisers to become authorised, for political ads to carry disclosures or for ads to be saved in a public archive.
Even in countries with heightened transparency, like the UK, it is still difficult for an individual to access information about why they are seeing particular advertisements.
This week, Facebook agreed to pay a penalty of £500,000 to the Information Commissioner’s Office, the UK data watchdog, relating to the Cambridge Analytica scandal. The firm dropped its appeal of the fine, but did not admit fault over data misuse.
The data privacy issue extends beyond Facebook. In the UK, under a specific provision of the Data Protection Act, registered political parties can use personal data revealing political opinions as part of their campaigning activities, says Callander.
“They don’t need to go and get explicit consent,” she says. “It’s open to abuse and it’s a condition that political parties rely on fairly heavily.”