Can Artificial Intelligence kill social bias on FB?


For Facebook, the challenge is maintaining that advertising advantage, while preventing discrimination, particularly where it's illegal.

By Ellen Powell (Tech Edge)


Published: Sat 11 Feb 2017, 11:09 PM

When faced with a challenge, what's a tech company to do? Turn to technology, Facebook suggests.
Following criticism that its ad-approval process was failing to weed out discriminatory ads, Facebook has revised its approach to advertising, the company announced on Wednesday. In addition to updating its policies about how advertisers can use data to target users, the social media giant plans to implement a high-tech solution: machine learning.
In recent years, artificial intelligence has climbed off the pages of science fiction novels and into myriad aspects of everyday life, from internet searches to health care decisions to traffic recommendations. But Facebook's new ad-approval algorithms venture into less-charted territory, as the company attempts to use machine learning to address, or at least not contribute to, social discrimination.
"Machine learning has been around for half a century at least but we're only now starting to use it to make a social difference," Geoffrey Gordon, an associate professor in the Machine Learning Department at Carnegie Mellon University in Pittsburgh, Penn., tells The Christian Science Monitor in a phone interview. "It's going to become increasingly important."
Though analysts caution that machine learning has its limits, the approach carries tremendous potential for addressing such challenges. With that in mind, more companies - particularly in the tech sector - are likely to deploy similar techniques.
Facebook's change of strategy, intended to make the platform more inclusive, follows the discovery that some of its ads were specifically excluding certain racial groups. In October, nonprofit investigative news site ProPublica tested the company's ad approval process with an ad for a "renter event" that explicitly excluded African-Americans. The Fair Housing Act of 1968 prohibits discrimination or showing preference to anyone on the basis of race, making that ad illegal - but it was nevertheless approved within 15 minutes, ProPublica reported.
Why? Because while Facebook doesn't ask users to identify their race and bars advertisers from directing their content at specific races, it has a host of information about users on file: pages they like, what languages they use, and so on. This kind of information is important to advertisers, since it means they can improve their chances of making a sale by targeting their ads toward people who are more likely to buy their product.
But by creating a demographic picture of a user, this data may make it possible to determine an individual's race, and then improperly exclude or target individuals. The company's updated policies emphasize that advertisers cannot discriminate against users on the basis of personal attributes, which Facebook says include "race, ethnicity, color, national origin, religion, age, sex, sexual orientation, gender identity, family status, disability, medical or genetic condition."
There's a fine line between appropriate use of such information and discrimination, as Facebook's head of US multicultural sales, Christian Martinez, explained following the ProPublica investigation: "a merchant selling hair care products that are designed for black women" will need to reach that constituency, while "an apartment building that won't rent to black people or an employer that only hires men [could use the information for] negative exclusion."
For Facebook, the challenge is maintaining that advertising advantage, while preventing discrimination, particularly where it's illegal. That's where machine learning comes in. "We're beginning to test new technology that leverages machine learning to help us identify ads that offer housing, employment or credit opportunities - the types of advertising stakeholders told us they were concerned about," the company said in a statement on Wednesday. The computer "is just looking for patterns in data that you supply to it," explains Professor Gordon.
That means Facebook can decide which areas it wants to focus on - namely, "ads that offer housing, employment or credit opportunities," according to the company - and then supply hundreds of examples of these types of ads to a computer. If a human "teaches" the computer by initially labeling each ad as discriminatory or nondiscriminatory, a computer can learn to go "from the text of the advertising to a prediction of whether it's discriminatory or not," Gordon says.
This kind of machine learning - known as "supervised learning" - already has dozens of applications, from determining which emails are spam to recognizing faces in a photo.
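To make that process concrete, here is a minimal sketch of this kind of supervised text classification. It is written in Python with scikit-learn, and the library choice, the sample ads and their labels are all illustrative assumptions - Facebook has not disclosed how its own system works.

```python
# Minimal sketch of supervised text classification for flagging ads.
# Assumes scikit-learn; the training examples and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labeled examples: 1 = potentially discriminatory, 0 = not.
ads = [
    "Renter event this weekend, families of all backgrounds welcome",
    "Apartment for rent, applicants from certain groups not considered",
    "Hair care products designed for black women, shop the new line",
    "Hiring sales staff, men only need apply",
]
labels = [0, 1, 0, 1]

# TF-IDF turns ad text into numeric features; logistic regression learns
# which word patterns correlate with the labels supplied by human reviewers.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(ads, labels)

# The trained model estimates the probability that a new ad is discriminatory.
new_ad = "Housing opportunity: renters wanted, some groups excluded"
print(model.predict_proba([new_ad])[0][1])
```

A real filter would of course train on far more examples than this toy set, which is exactly why the labeling effort described below matters.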
But there are certainly limits to its effectiveness, Gordon adds.
"You're not going to do better than your source of information," he explains. Teaching the machine to recognize discriminatory ads requires lots of examples of similar ads. "If the distribution of ads that you see changes, the machine learning might stop working," Gordon explains, noting that these changing strategies on the part of content producers can often get them past AI filters, like your email spam filter. Insufficient understanding of details on the part of machines can also lead to high-profile problems, like Google Photos, which in 2015 mistakenly labeled black people as gorillas.
"Teaching" the machine also means having a person take the time to go through hundreds of ads and label them, as well as continue to check and correct a machine's work. That makes the system vulnerable to human biases. "That process of refinement involves sorting, labeling and tagging - which is difficult to do without using assumptions about ethnicity, gender, race, religion and the like," explains Amy Webb, founder and CEO of the Future Today Institute, in an email to the Monitor. "The system learns through a process of real-time experimenting and testing, so once bias creeps in, it can be difficult to remove it."
More overt bias has already surfaced in AI bots such as Tay, Microsoft's chatbot, which repeated the Nazi slogans fed to it by Twitter users. Bias in ad labeling would be subtler, since it is presumably unintentional, but it could create equally persistent problems.
Unbiased machine learning "is the subject of a lot of current research," says Gordon. One answer, he suggests, is having many teachers, since a consensus view of discrimination may be less vulnerable to any individual's biases. - Christian Science Monitor
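As a rough illustration of Gordon's suggestion, the sketch below (again Python, with made-up reviewer labels) aggregates several reviewers' judgments on each ad by majority vote, so that no single labeler's bias decides the outcome on its own.

```python
from collections import Counter

# Hypothetical labels from several independent reviewers for the same ads:
# each inner list holds one ad's labels ("discriminatory" or "ok").
reviews = [
    ["ok", "ok", "discriminatory"],
    ["discriminatory", "discriminatory", "ok"],
    ["ok", "ok", "ok"],
]

# Majority vote: the consensus label is whichever label most reviewers chose,
# diluting any single reviewer's individual bias.
consensus = [Counter(votes).most_common(1)[0][0] for votes in reviews]
print(consensus)  # ['ok', 'discriminatory', 'ok']
```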

