Facebook says it is removing more hate speech than ever. But there’s a problem

On November 13, Facebook announced with great fanfare that it is removing more hate speech from its platform than ever before. The company reported taking down more than seven million instances of hate speech in the third quarter of 2019, an increase of 59% over the previous quarter. More and more of that hate speech (80%), it added, is now detected not by humans but automatically, by artificial intelligence. The new statistics, however, hide a structural problem that Facebook has yet to overcome: not all hate speech is treated equally. The algorithms Facebook currently uses to remove hate speech only work in certain languages. That means it has become easier for Facebook to curb racist and religious hatred mainly in developed countries and in the communities that speak dominant global languages such as English, Spanish and Mandarin. In much of the rest of the world, it remains as difficult as ever. Facebook says its hate speech detection algorithms (or “classifiers,” as it calls them internally) now function in more than 40 languages. In the world’s other languages, Facebook relies on its users and on human moderators to police hate speech. Unlike the algorithms, which Facebook says now detect 80% of hateful posts without a user needing to report them first, these human moderators do not regularly scan the site for hate speech themselves. Instead, their job is to decide whether to remove posts that users have already reported. Languages spoken by minorities are hit hardest by this discrepancy. It means that racial slurs, incitements to violence and targeted abuse can spread faster in parts of the developing world than they currently do in the United States, Europe and elsewhere. India, the world’s second most populous country, with more than 1.2 billion people and nearly 800 languages, offers an insight into this problem.
Facebook declined to share a complete list of the languages in which its hate speech detection algorithms are at work. But the company told TIME that of India’s 22 official languages, only four are covered by Facebook’s algorithms: Hindi, Bengali, Urdu and Tamil. About 25% of India’s population does not speak at least one of those four languages or English, and about 38% do not speak one of them as a first language, according to a TIME analysis of India’s 2011 census. In the northeastern Indian state of Assam, this gap in Facebook’s systems has allowed violent hate speech to thrive unchecked, accelerated by Facebook’s power to carry text, photos and videos to anyone, anywhere. In Assam, the global advocacy group Avaaz has identified a sustained campaign of hatred by the largely Hindu, Assamese-speaking majority against the mostly Muslim, Bengali-speaking minority. In a report published in October, Avaaz detailed Facebook posts calling Bengali Muslims “parasites,” “rats” and “rapists,” and calling for Hindu girls to be poisoned to stop Muslims from raping them. The posts were viewed at least 5.4 million times. The U.N. has described the situation as a “potential humanitarian crisis.” Facebook confirmed to TIME that it has no hate speech detection algorithm for Assamese, the main language spoken in Assam. Instead of automatically detecting hate speech in Assamese, Facebook relies on an unspecified number of human moderators, available around the clock, who speak the language. But those moderators, for the most part, only respond to posts that other users have reported. Activists say that Facebook’s reliance on user reports of hate speech in the languages for which it has no algorithms places too much of a burden on the victims of that hate speech, who are often poorly educated and belong to already marginalized groups.
“In this context, Assamese minorities facing hate speech often lack online access or don’t understand how to navigate Facebook’s reporting tools. And no one else is reporting on their behalf, either,” the Avaaz report says. “This leaves Facebook with a large blind spot,” Alaphia Zoyab, a senior campaigner with Avaaz, tells TIME. The solution, Zoyab says, is not less human intervention but more: more Facebook employees proactively searching for hate speech, and a concerted effort to build an Assamese hate speech dataset. “Unless Facebook gets smarter about understanding the societies in which it operates, and ensures it has humans proactively sweeping the platform for violent content in some of these smaller languages, we will remain stuck in this dangerous digital dystopia of hatred,” she tells TIME. Facebook says the reason it cannot automatically detect hate speech in Assamese, and in other smaller languages, is technical: it does not have a large enough body of data to train the artificial intelligence program that would do so. In a process called machine learning, Facebook trains its computers on datasets of tens or hundreds of thousands of labeled examples of hate speech. In English, which has 1.5 billion speakers, that is relatively easy. But in smaller languages like Assamese, which has only 23.6 million speakers according to India’s 2011 census, it is much harder. Add in the fact that few hateful Assamese posts have ever been labeled as hate speech, and training a program to detect hatred in Assamese becomes very difficult. But activists say that does not make the situation in Assam unavoidable. When virulent hatred against the Rohingya minority in Myanmar spread via Facebook in Burmese (a language spoken by some 42 million people), Facebook was slow to act because it had no hate speech detection algorithm in Burmese and only a few Burmese-speaking moderators.
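The machine-learning process the article describes, a classifier learning to flag text from labeled examples, can be illustrated with a minimal sketch. This is a toy naive Bayes text classifier written from scratch; the training data, the labels and the algorithm choice are illustrative assumptions, not Facebook’s actual system. It also shows why data scarcity matters: with only a handful of labeled examples, the model knows almost nothing.

```python
# Toy supervised text classifier (naive Bayes with add-one smoothing).
# Illustrative only: not Facebook's classifier, and real systems train
# on tens or hundreds of thousands of labeled examples.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs, label in {"hate", "ok"}."""
    word_counts = {"hate": Counter(), "ok": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, model):
    word_counts, label_counts = model
    total = sum(label_counts.values())
    vocab = set(word_counts["hate"]) | set(word_counts["ok"])
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihood of each token, with add-one smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical labeled examples. The scarcity of such labeled data in
# smaller languages is exactly the bottleneck the article describes.
examples = [
    ("they are parasites and rats", "hate"),
    ("drive them out they are vermin", "hate"),
    ("lovely weather in the valley today", "ok"),
    ("the festival starts next week", "ok"),
]
model = train(examples)
print(classify("those people are parasites", model))  # → hate
```

A classifier this small generalizes only to wording close to its training examples, which is why building a usable one in a new language first requires moderators to label a large corpus by hand, as Facebook did for Burmese.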
But after Facebook was blamed for its role in the Rohingya genocide, the company committed resources to building a Burmese hate speech classifier. It hired 100 Burmese-speaking content moderators, who manually built a dataset of Burmese hate speech that was then used to train an algorithm. Facebook declined to say how many Assamese-speaking moderators it employs, despite several requests from TIME. In a statement, Facebook said: “We don’t break down the number of content reviewers by language, in part because the number alone isn’t representative of the people working on any given language or issue, and the number changes based on staffing needs. We staff based on a number of different factors, including the geopolitical situation on the ground and the volume of content posted in a specific language.” Facebook tells TIME it prioritizes a list of countries where it works to prevent what it calls “offline harms,” meaning real-world physical violence. Myanmar, Sri Lanka, India, Libya, Ethiopia, Syria, Cameroon, the Democratic Republic of Congo and Venezuela are on that list, a spokesperson said. The company also revealed to TIME some of the more than 40 languages in which its hate speech detection algorithms work. They include Mandarin and Arabic, as well as Sri Lanka’s two official languages, Sinhalese and Tamil. The company is currently building a hate speech classifier in Punjabi, another official Indian language, with more than 125 million speakers worldwide. Facebook declined to disclose the success rates of its algorithms in individual languages. So while Facebook’s algorithms globally now detect 80% of hate speech before it is reported by a user, it is impossible to say whether that average masks lower success rates in some languages than in others.
Two Facebook officials, an engineer who works on the hate speech algorithms and a member of Facebook’s “strategic response team,” told TIME that Facebook is building classifiers in several new languages, but would not deploy them until they were careful enough not to take down posts that are not hateful. And even when its algorithms flag hateful content, Facebook says, human moderators always make the final decision on whether to remove it. Facebook says its moderators typically respond to user reports within 24 hours; posts flagged as hateful remain online during that period. In “more than 50” languages, Facebook says, moderators work 24 hours a day, seven days a week. There is “significant overlap,” Facebook says, between those 50-plus languages and the 40-plus languages in which an algorithm is currently active. In still more languages, Facebook employs part-time moderators. Since Facebook does not break down the number of content moderators by language, it is also hard to say whether there are differences between languages in how quickly and effectively hateful posts are removed. According to Avaaz, minority languages are overlooked when it comes to moderation speed, too. When Avaaz reported 213 of the “clearest examples” of hateful Assamese posts, Facebook moderators eventually removed 96. Some were taken down within 24 hours; others took up to three months. The other 117 examples of “blatant hate” remained live on the site, according to Zoyab. Other observers doubt the wisdom of building such automated systems at all, fearing that decisions about what kinds of speech are acceptable will end up outsourced to machines. “The free speech implications should have everybody extremely worried, because we don’t know what Facebook is leaving up and taking down,” said Susan Benesch, director of the Dangerous Speech Project, a nonprofit that studies how public speech can lead to real-world violence.
“They take down millions of pieces of content every day, and no one knows where they draw the line,” Benesch says. Preventing hateful posts from being written in the first place, she adds, “would be much, much more effective.”

With additional reporting by Emily Barone and Lon Tweeten / New York

Correction, November 27: The original version of this story misstated how long it took Facebook to remove 96 of the 213 posts Avaaz reported as hate speech. Some, but not all, were removed within 24 hours. Others took up to three months.
Image copyright 2019 Getty Images
