Only human editors, not bots, can fight fake news


Facebook is planning stricter scrutiny of advertisements before they are published.

Social media firms are driven by a data-first mentality because it is profitable, but what about the ethical issues?

By Laurent Belsie


Published: Mon 9 Oct 2017, 9:00 PM

Last updated: Mon 9 Oct 2017, 11:59 PM

The day after the 2016 presidential election, Facebook CEO Mark Zuckerberg was asked whether social media had contributed to Donald Trump's win.
"A pretty crazy idea," he responded at the time. But after months of internal sleuthing by media organisations, congressional investigations, and Facebook itself, the idea doesn't look so far-fetched.
"Calling that crazy was dismissive and I regret it," Zuckerberg wrote in a Facebook post recently. "We will do our part to defend against nation states attempting to spread misinformation and subvert elections. We'll keep working to ensure the integrity of free and fair elections around the world, and to ensure our community is a platform for all ideas and force for good in democracy."
In fact, Facebook is planning stricter scrutiny of advertisements before they are published.
It is a startling turnabout. After years of defending themselves as communications networks whose sole aim is to foster dialogue, social media companies like Facebook and Twitter are under increasing pressure to take responsibility for the content they carry. Search-engine giant Google is under similar pressure to reform after it, too, promoted fake news stories, including extreme right-wing posts misidentifying the Las Vegas shooter and calling him a left-winger.
The proliferation of fake news is forcing these companies to rethink their role in society, their reliance on cheap algorithms rather than expensive employees, and their engineer-driven, data-dependent culture in an era when they are increasingly curating and delivering news.
"This is definitely a crisis moment for them," says Cliff Lampe, a professor and social media expert in the School of Information at the University of Michigan in Ann Arbor. "They're just trying to do their business. What they don't understand is that in the huge panoply of humankind, people are going to try to manipulate that business for their own ends."
It's clear that Facebook was aware that something was afoot with fake campaign stories as early as June 2016, when it detected a Russian espionage operation on its network and alerted the FBI, according to a Washington Post report. More hints of Russian activity popped up in the following weeks. Facebook's lengthy internal investigations have hit some paydirt, after the firm decided to narrow its search rather than try to be comprehensive.
Facebook recently handed over to congressional investigators more than 3,000 ads that ran between 2015 and 2017 linked to the Internet Research Agency, a Russian social media trolling group.
Some of the ads are drawing particular interest because they targeted pivotal voting groups in Michigan and Wisconsin, where Trump won narrowly. Investigators will probe to see if the Trump campaign played any role in helping the Russians target those ads.
But experts suspect the company has only scratched the surface. And the problem stretches beyond Facebook.
During the Republican primaries, Ron Nehring noticed something odd about his Twitter feed. The campaign spokesman for presidential hopeful Sen. Ted Cruz could go on cable television and bash any of Cruz's rivals without any social media blowback. But when he criticised Trump, his Twitter account would be deluged by a torrent of negative and "extremely hysterical" tweets.
"The tone was always extremely hysterical, not something that I would see from typical conservative activists," he said at a Heritage Foundation event this week.
It is tempting to say that Russia simply manipulated right-wing social media to support Trump's candidacy. The reality is stranger than that. While a preponderance of the fake posts promoted Trump or criticised his Democratic opponent, Hillary Clinton, on websites crafted to attract right-wing voters, some of them also appeared on sites catering to left-wing causes, such as Black Lives Matter, and religious ones, such as United Muslims of America.
The reach and speed of social media networks make it easy for these ideas to spread before they can be debunked. Facebook claims to have two billion users, or nearly a third of humanity. During the last three months of the presidential election, the top 20 fake election news stories on Facebook generated more shares, reactions, and comments than the top 20 pieces from major news outlets, such as The New York Times, The Washington Post, and others, according to a BuzzFeed News analysis.
Among the most popular fake news stories, one said Clinton sold weapons to Daesh and another one claimed the pope endorsed Trump.
And the meddling continues. Part of the challenge lies in these digital giants' reliance on algorithms to make complex news decisions. Computer programmes are cheaper than real-life editors. They also offer political cover.
Facebook has used human editors in the past. But after Gizmodo reported that former employees routinely suppressed conservative news stories from users' trending topics, Zuckerberg met with conservative editors and moved back to algorithms.
But the algorithms are far from neutral. Until exposed by reporters, they allowed advertisers to exclude minorities from seeing ads and, until last month, target "Jew-haters." A more subtle and endemic problem is that the algorithms are geared to support social media's business model, which is to generate traffic and engagement.
Another challenge is that even as social networks become mainstream purveyors of news, they're still largely run by engineers who rely on data rather than editorial judgment to choose newsworthy content. That data-first mentality powers profits because it gives customers exactly what they want. But if they want fake news that supports their worldview, is it ethical to give it to them?
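To see why an engagement-first ranking can surface fake news, consider a minimal sketch of the kind of scoring such systems are often described as using. Everything here, the field names, the weights, the scoring formula, is hypothetical for illustration, not Facebook's actual code: nothing in the formula rewards accuracy, so the post that provokes the most interaction wins.

```python
# Toy illustration of engagement-first feed ranking.
# All field names and weights are hypothetical, not any real platform's code.

def engagement_score(post: dict) -> float:
    """Rank purely on interaction counts; accuracy never enters the formula."""
    return (
        1.0 * post["shares"]
        + 0.5 * post["comments"]
        + 0.1 * post["reactions"]
    )

posts = [
    {"title": "Accurate report", "shares": 120, "comments": 80, "reactions": 900},
    {"title": "Outrage-bait hoax", "shares": 900, "comments": 400, "reactions": 2000},
]

# The hoax outranks the accurate story because only engagement is rewarded.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["title"], engagement_score(post))
```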
Last month, Zuckerberg pledged to "make political advertising more transparent" on Facebook, including identifying who pays for each political ad (as TV and newspapers already do) and ending the practice of excluding certain groups from seeing ads.
- The Christian Science Monitor
 

