Facebook says plan in place to fight extremist content


Delegates at the Facebook journalism project training event in Dubai. - Photo by Shihab

Dubai - The company took five individual actions, and four collaborative actions after the Christchurch Call to Action.

by Dhanusha Gokulan


Published: Wed 19 Jun 2019, 7:00 PM

Last updated: Wed 19 Jun 2019, 9:42 PM

In response to New Zealand's Christchurch massacre, social networking giant Facebook has adopted a nine-point plan to address the abuse of technology to spread violent extremist content.
However, despite using Artificial Intelligence (AI) to detect inappropriate content, the tech company still depends on human moderators and Facebook users to report hate speech and extremist content on its platform.
Combatting fake news and policy regulations to deal with violent extremism online were two of the most hotly contested subjects at the region's first Facebook News Forum, hosted by the Dubai Press Club on Wednesday, June 19. The forum, held in partnership with the social networking website, was part of the Facebook Journalism Project.
Jesper Doub, director of news partnerships, EMEA Facebook, told Khaleej Times on the sidelines of the forum that since the G7 government and industry leaders' meeting in Paris, titled the 'Christchurch Call to Action', the company has taken several policy measures to curb online extremism.
He said: "Since the Christchurch incident, we have put in a couple of rules to limit the misuse of the live streaming facility on Facebook. It was a very unfortunate event, after which we sat down and asked ourselves: 'What can we do to prevent this from happening?'"
The company took five individual actions, and four collaborative actions after the Christchurch Call to Action. Under individual actions, it has updated its terms of use and enhanced its mechanism to improve user reporting of terrorist and violent extremist content, and enhanced digital fingerprinting and AI-based technology solutions.
For live streaming, Facebook has included enhanced vetting measures, such as streamer ratings or scores, account activity or validation processes, and moderation of certain live streaming events where appropriate. The company has also sworn to publish transparency reports. Facebook has also adopted four collaborative actions including the introduction of crisis protocols and working with NGOs to combat hate and bigotry.
Response time to hate content
Speaking about how fast Facebook responds to hate content, Doub said the platform does not have a definitive timeline for pulling inflammatory content down. He explained: "The team tries to prioritise and get to the most pressing issue as soon as possible. It depends on the content. We are in a situation where we are already taking proactive action on certain types of content through machines, with the help of AI. For example, if AI detects content that should not be there, the machine takes it down without waiting for a human to interact with the post. However, for more complex content, where the machine cannot make the decision, there are people looking at it."
During one of the sessions, Facebook's public policy associate manager for content Eric Shadowens also urged participants to report inappropriate content on the platform.
Shadowens said Facebook has increased the number of people employed in its global safety and security division from 10,000 to 30,000, and has also deployed improved automated systems to combat inflammatory content. Furthermore, a total of one million fake accounts are pulled down from Facebook every day.
Managing political campaigning
Doub also spoke to Khaleej Times about the various measures Facebook has taken to keep political election campaigning safe and free from interference. He discussed the policies Facebook had set up to deal with the problem of "troll farms" and local legislation following the elections in the Philippines and India. He admitted: "We were very slow to react in the Philippines."
"When people and organisations want to boost their posts on political issues, then they need to identify themselves and make sure they are registered in the country. They need to be fully transparent about what they spend the money on, how much was spent, which groups were targeted, who spent the money, and what the ad looks like."
The ad then goes into an archive that stays fully transparent for seven years. "We have learned how groups and bad actors have been trying to tamper with elections globally, and we devised this policy based on our learnings."

Dhanusha Gokulan
