Facebook Remains an Election Meddler's Paradise in 2020
Don't rely on Facebook's self-regulation to save us from election interference in this year's critical elections. In fact, despite a PR push to the contrary, the company is doubling down on the access it sells to would-be meddlers.
By Max Eddy
Edited by Jessica Thomas
This story originally appeared on PC Mag
I like to start my day with a balance of good and bad: a ritualistically brewed single cup of coffee taken in the quiet comfort of my kitchen while I alternate between despair and anger over the day's news. Over the past few days, I've been reading more about Facebook's decision not to curtail targeted political advertising in 2020, which I can at least enjoy getting angry about, since it's security-related bad news.
We, as in the American populace, should care about Facebook's policy toward political advertising. In 2016, social media sites like Facebook were ground zero for a Russia-led misinformation campaign that marred the US presidential election. How you feel about the election's outcome is immaterial; we know Russia meddled in our election, and we know that social media, especially Facebook, was how they did it.
As we barrel toward the 2020 election, Facebook has decided that it won't learn from the past. Instead of trying to rein in political advertising, Facebook is giving users some control over the kinds of ads they see. Ads will still be targeted, and you'll still see them, but you may be able to tone them down a bit.
Truth optional
In addition to putting flimsy guardrails on political advertising, Facebook also announced that it wouldn't pull paid political advertisements that contain false claims. Now, truth in politics is a rare and often subjective thing, but the outright admission that anything goes in Facebook advertising is insulting.
It's also in line with another recent announcement from the big blue social media company regarding deepfakes. These are, as a reminder to those less terrified of the future than I am, phony videos convincingly doctored by artificial intelligence. Facebook's stance is that it will remove only the deepfakes that are intentionally misleading, and the company created huge carve-outs for videos that are "satire." Considering how often Onion articles get circulated as fact, that last point seems particularly problematic.
Defining "truth" is a scary business, but are we really so cynical that we won't at least say what is blatantly untrue?
What's the plan?
Speaking of cynicism: one reading of Facebook's decision is that the company is actually trying to ingratiate itself with political parties. Congress has made some attempts to hold Facebook accountable for its failures, and presidential candidate Elizabeth Warren has even called for big tech companies like Facebook to be broken up. But political parties probably like Facebook's targeting tools, and they certainly don't mind not being required to tell the truth on the platform. Targeted advertising, particularly on Facebook, is big business, and it's a shockingly cheap and effective tool for misinformation. Facebook's decision not to curtail targeting might not quiet its critics, but it keeps the platform an indispensable tool for the mechanisms that win elections.
In fact, much of Facebook's approach to privacy has followed the same pattern of putting the burden on users. In the past, Facebook might change a feature, and users would only later discover that they had to manually opt out of the change. Similarly, Facebook Messenger uses the Signal protocol to secure its off-the-record Secret Conversations, which even Facebook cannot read, but users are required to switch this feature on every time they wish to use it.
Facebook's approach also stands in opposition to that of its competitors and other technology companies. The music streaming platform Spotify has announced that it won't be running political ads in 2020, and Twitter has similarly stopped paid political advertising. Twitter CEO Jack Dorsey's position that political engagement should be "earned not bought" is honestly refreshing.
More than technology
Security experts have long known that the best malware in the world isn't as effective as simply calling someone up and asking for their password. Social engineering, in the form of phishing or some other attack, works alarmingly well. Misinformation in an election isn't really that different from phishing: someone spreads bad information to get a desired result. That could be handing over a password, or showing up to vote on the wrong day.
Companies have learned that to defeat phishing and similar attacks, you need to equip people with both good information and technology. Two-factor authentication stops many phishing attacks, but training employees to identify suspect links and fake emails is just as important.
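To make the technology half of that concrete, here's a minimal sketch of the time-based one-time password (TOTP) scheme from RFC 6238 that most authenticator apps implement for two-factor login. The Base32 secret below is a hypothetical example for illustration, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password per RFC 6238."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # counter as big-endian uint64
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    """Constant-time comparison so the check itself leaks nothing."""
    return hmac.compare_digest(totp(secret_b32), submitted)

# Hypothetical secret for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```

An attacker who phishes your password still needs the current code, which expires within seconds; hardware security keys go a step further and resist having the second factor phished at all.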
We're starting to get the technology right for elections. Risk-limiting audits, which use statistics to confirm a reported outcome by hand-checking only a small random sample of ballots, are a great example, as are systems that allow for fast voting with a verifiable paper trail. That needs to be backed up by efforts to limit the ease and precision of targeted advertising, and a good-faith effort to help people identify misinformation. That's the opposite of Facebook's recent announcements.
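To show how few ballots such an audit can need, here is a toy simulation of the BRAVO ballot-polling method (Lindeman, Stark, and Yates), one common form of risk-limiting audit; the election data below is fabricated purely for illustration.

```python
import random

def bravo_audit(ballots, winner_share, risk_limit=0.05, max_draws=5000, seed=1):
    """Toy BRAVO ballot-polling audit for a two-candidate race.

    Draw ballots uniformly at random (with replacement, as in BRAVO) and
    update a Wald sequential test statistic. If it reaches 1/risk_limit,
    the reported outcome is confirmed with risk at most risk_limit;
    otherwise the audit escalates to a full hand count.
    """
    assert winner_share > 0.5, "reported winner must have a majority"
    rng = random.Random(seed)
    t, threshold = 1.0, 1.0 / risk_limit
    for n in range(1, max_draws + 1):
        ballot = rng.choice(ballots)
        t *= winner_share / 0.5 if ballot == "W" else (1 - winner_share) / 0.5
        if t >= threshold:
            return n             # ballots inspected before confirmation
    return None                  # inconclusive: full hand count required

# Fabricated 10,000-ballot race reported as 58% to 42%.
ballots = ["W"] * 5800 + ["L"] * 4200
print(bravo_audit(ballots, winner_share=0.58))   # typically a few hundred draws
```

The narrower the reported margin, the more ballots the audit demands, which is exactly the property you want: lopsided results get confirmed cheaply, and close races get real scrutiny.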
A sad conclusion
While the 2018 midterm elections showed that it was still possible to have democracy in America without a colossal screwup, that's no guarantee for 2020. In fact, experts have said that there will likely be attacks on future elections, especially now that other countries have seen how effective modern election meddling can be. Social media companies, Facebook included, have been more proactive than they were in 2016, which makes Facebook's recent decisions all the more disappointing.
The answer to this problem should be regulation in all its forms: common-sense laws that put reasonable limits on targeted advertising, and self-imposed bans on false information. But I can't help but wonder whether that's likely to happen when ads, especially targeted ads, are so useful to campaigns and corporations. Someone, somewhere, needs to act against their own interests, and it's clear that Facebook isn't willing to risk making changes for the better.