Photo by Annie Spratt on Unsplash
Since the news of Cambridge Analytica’s involvement in the 2016 US Presidential Election, and later in Britain’s Vote Leave campaign, broke in March 2018, Facebook has been at the centre of hearings across the global north. Since then, Facebook has announced new measures to control the use of its user data in election meddling. In October 2019, Facebook announced a new tool to curb false information on its platform, which it said would “help protect the democratic process”.
Facebook’s blog post says it aims to curb foreign interference by fighting inauthentic behaviour and providing increased and updated security features as part of its new program, Facebook Protect. The program is geared towards politicians and their teams, helping secure their accounts from being compromised. Facebook also commits to increased transparency of political ad campaigns, by making public details about page ownership and the political advertising revenues spent by candidates and supporting pages. Additionally, there has been a particular focus on combating misinformation across the array of apps owned by the social media giant.
On misinformation, one of the most prominent factors affecting election integrity, Facebook commits to “keep confirmed misinformation from spreading” by reducing its reach and, in some cases, labelling it as false information. However, Facebook’s apparent commitment to containing political misinformation doesn’t extend to one of the most potent media of political communication: campaign advertisements. When it comes to ads distributed online as part of election campaigning, Facebook has categorically refused to engage in fact-checking, terming them “political speech” and claiming that “people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinised and debated in public”.
Political ads and profit
Following the disastrous and obvious impact of politically motivated misinformation on the 2016 US elections and the ensuing debate across the world, Twitter announced a complete ban on all political ads on its website, and Google announced rigorous limitations on the microtargeting of political ads across its platforms. However, Facebook, the corporation whose user data was breached most prominently in the Cambridge Analytica scandal, not only continues to allow political ads on its platform, but plans to let them run without any fact-checking, inspection or regulation. Spreading misleading and damaging information about opposing candidates was one of the salient features of the political campaigns during the 2016 US elections. The company has taken a so-called hands-off approach and wants voters to decide whether to believe the content they see or to fact-check it before voting. And although it does this in the name of upholding democratic principles and not being a gatekeeper of content, the conflict of interest is difficult to ignore: regulating political advertisements would mean a potential revenue loss from a very lucrative income stream.
While it’s true that individuals trust information according to their own confirmation biases, it’s also true that Facebook’s algorithm has a strong tendency to create echo chambers, where users are rarely exposed to the kind of content they have not interacted with before. The ability Facebook gives ad managers to target audiences with sophistication, based on psychographics in addition to the usual demographics, makes the platform not only a favourite for campaigners but also demonstrably effective in achieving desired outcomes.
In a blog post from January 2020, Facebook says, “we don’t think decisions about political ads should be made by private companies.” However, what this statement conveniently ignores is that Facebook is not just another private company anymore; it is a private company with power significant enough to influence, and even derail, democratic processes. Entities with vested political interests and millions to spend use the platform for their own benefit, and Facebook profits from it.
The facade of ‘voice’
In a call with the press in October 2019, Facebook’s CEO Mark Zuckerberg said, “I believe that giving people a voice is important, and ads can be an important part of voice.” However, in the name of giving people a voice, the platform will in effect enable entities with defined political interests to perpetuate misleading information, targeting those who are most vulnerable to it, while categorically refusing to take any responsibility for creating an environment where political manipulation through disinformation becomes possible. Additionally, the idea of having a ‘voice’ on Facebook is complicated.
The fact remains that the difference in reach between sponsored and non-sponsored content is exponential. In 2018, Facebook changed its News Feed algorithm to declutter the feed, showing more posts from friends and family and fewer from pages and publishers, resulting in a considerable decrease in the organic reach of publishers’ posts. So publishers who spend more on ads get more eyes, and the only thing misinformation needs to travel is a reasonably priced ‘boost’, allowing advertisers to reach targeted audiences who organically see barely any published political commentary. Thus, the idea of a democratic platform where all political sides have a voice often remains a facade.
Additionally, Facebook’s treatment of issues like Kashmir and Iranian General Qassem Soleimani has often raised questions about the corporation’s claims of being politically unbiased. Together, politically biased policies, the possibility of targeted misinformation on the platform, and a general lack of digital media and information literacy create a potent mix that can have a direct and negative impact on democracy and democratic processes.
TL;DR Facebook should fact-check political ads. Setting up policies and then selectively implementing them does nothing to address the problems perpetuated by the platform itself.
Hija is a Programs Manager at Media Matters for Democracy. She draws on her experience in digital rights in Pakistan to lead MMfD’s digital rights and internet governance advocacy. She tweets at @hijakamran