In just 21 days, Facebook led new user to gore, fake news

IN FEB. 2019, Facebook, Inc. set up a test account in India to determine how its own algorithms affect what people see in one of its fastest growing and most important overseas markets. The results stunned the company’s own staff.

Within three weeks, the new user’s feed turned into a maelstrom of fake news and incendiary images. There were graphic photos of beheadings, doctored images of Indian air strikes against Pakistan and jingoistic scenes of violence. One group for “things that make you laugh” included fake news claiming 300 terrorists had died in a bombing in Pakistan.

“I’ve seen more images of dead people in the past 3 weeks than I’ve seen in my entire life total,” one staffer wrote, according to a 46-page research note that’s among the trove of documents released by Facebook whistleblower Frances Haugen.

The test proved telling because it was designed to focus exclusively on Facebook’s role in recommending content. The trial account used the profile of a 21-year-old woman living in the western Indian city of Jaipur and hailing from Hyderabad. The user only followed pages or groups recommended by Facebook or encountered through those recommendations. The experience was termed an “integrity nightmare” by the author of the research note.

While Haugen’s disclosures have painted a damning picture of Facebook’s role in spreading harmful content in the US, the India experiment suggests that the company’s influence globally could be even worse. Most of the money Facebook spends on content moderation is focused on English-language media in countries like the US.

But the company’s growth largely comes from countries like India, Indonesia and Brazil, where it has struggled to hire people with the language skills to impose even basic oversight. The challenge is particularly acute in India, a country of 1.3 billion people with 22 official languages. Facebook has tended to outsource oversight for content on its platform to contractors from companies like Accenture.

“We’ve invested significantly in technology to find hate speech in various languages, including Hindi and Bengali,” a Facebook spokeswoman said. “As a result, we’ve reduced the amount of hate speech that people see by half this year. Today, it’s down to 0.05 percent. Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online.”

The new user test account was created on Feb. 4, 2019, during a research team’s trip to India, according to the report. Facebook is a “pretty empty place” without friends, the researchers wrote, with only the company’s Watch and Live tabs suggesting things to look at.

“The quality of this content is… not ideal,” the report said. When the video service Watch doesn’t know what a user wants, “it seems to recommend a bunch of softcore porn,” followed by a frowning emoticon.

The experiment began to turn dark on Feb. 11, as the test user started to explore content recommended by Facebook, including posts that were popular across the social network. She began with benign sites, including the official page of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party and BBC News India.

Then on Feb. 14, a terror attack in Pulwama in the politically sensitive Kashmir state killed 40 Indian security personnel and injured dozens more. The Indian government attributed the strike to a Pakistani terrorist group. Soon the tester’s feed turned into a barrage of anti-Pakistan hate speech, including images of a beheading and a graphic depicting preparations to incinerate a group of Pakistanis.

There were also nationalist messages, exaggerated claims about India’s air strikes in Pakistan, fake photos of bomb explosions and a doctored photo that purported to show a newly married army man killed in the attack who’d been preparing to return to his family.

Many of the hate-filled posts were in Hindi, the country’s most widely spoken language, escaping the regular content moderation controls at the social network. In India, people use a dozen or more regional variations of Hindi alone. Many people use a blend of English and Indian languages, making it almost impossible for an algorithm to sift through the colloquial jumble. A human content moderator would need to speak several languages to weed out toxic content.

“After 12 days, 12 planes attacked Pakistan,” one post exulted. Another, again in Hindi, claimed as “Hot News” the death of 300 terrorists in a bomb explosion in Pakistan. The name of the group sharing the news was “Laughing and things that make you laugh.” Some posts contained fake photos of a napalm bomb and claimed to show India’s air attack on Pakistan: “300 dogs died. Now say long live India, death to Pakistan.”

The report — entitled “An Indian test user’s descent into a sea of polarizing, nationalist messages” — makes clear how little control Facebook has in one of its most important markets. The Menlo Park, California-based technology giant has anointed India as a key growth market, and used it as a test bed for new products. Last year, Facebook spent nearly $6 billion on a partnership with Mukesh Ambani, the richest man in Asia, who leads the Reliance conglomerate.

“This exploratory effort of one hypothetical test account inspired deeper, more rigorous analysis of our recommendation systems, and contributed to product changes to improve them,” the Facebook spokeswoman said. “Our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages.”

But the company has also repeatedly tangled with the Indian government over its practices there. New regulations require that Facebook and other social media companies identify individuals responsible for their online content — making them accountable to the government. Facebook and Twitter, Inc. have fought back against the rules. On Facebook’s WhatsApp platform, viral fake messages circulated about child kidnapping gangs, leading to dozens of lynchings across the country beginning in the summer of 2017, further enraging users, the courts and the government.

The Facebook report ends by acknowledging its own recommendations led the test user account to become “filled with polarizing and graphic content, hate speech and misinformation.” It sounded a hopeful note that the experience “can serve as a starting point for conversations around understanding and mitigating integrity harms” from its recommendations in markets beyond the US.

“Could we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the tester asked. — Bloomberg
