Facebook's burnt-out moderators are proof that it is broken

"Silence is golden when you can't think of a good answer."
-Muhammad Ali

Post by Christine »

Randy Maugans posted this article today, where else but on Facebook, with this comment:

"The aggregate of humanity, as represented on a social media platform, is an unholy mess...Facebook contracts over 15,000 burned-out human workers to police it's posts with over 1400 pages of rules. And fails.
This is the triumph of humans over A.I. We are too f**ked up to be predictable. Well done!"


I chuckled and agree, though I would add that we are also too un-fucked to be predictable. Imaginal cells in the soup of societal meltdown.
~~~
Way back in the 1950s, a pioneering British cybernetician, W Ross Ashby, proposed a fundamental law of dynamic systems. In his book An Introduction to Cybernetics, he formulated his law of requisite variety, which defines “the minimum number of states necessary for a controller to control a system of a given number of states”. In plain English, it boils down to this: for a system to be viable, it has to be able to absorb or cope with the complexity of its environment. And there are basically only two ways of achieving viability in those terms: either the system manages to control (or reduce) the variety of its environment, or it has to increase its internal capacity (its “variety”) to match what is being thrown at it from the environment.
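
For readers who want the formal statement, Ashby's law is usually written, in its simplest textbook form, as an inequality over "varieties" (counts of distinguishable states, usually on a log scale). The notation below is the conventional one from the cybernetics literature, not anything specific to Facebook:

[code]
% Ashby's law of requisite variety (simplest log-variety form):
%   V(D) = variety of disturbances arriving from the environment
%   V(R) = variety of responses the regulator can deploy
%   V(O) = variety of outcomes the system actually exhibits
\[
  V(O) \;\geq\; V(D) - V(R)
\]
% To hold outcomes within an acceptable set, the regulator needs
% V(R) \geq V(D) - V(O): "only variety can absorb variety".
[/code]

In other words, the regulator's repertoire has to grow at least as fast as the mess it is asked to absorb.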

Sounds abstruse, I know, but it has a contemporary resonance. Specifically, it provides a way of understanding some of the current internal turmoil in Facebook as it grapples with the problem of keeping unacceptable, hateful or psychotic content off its platform. Two weeks ago, the New York Times was leaked 1,400 pages from the rulebooks that the company’s moderators are trying to follow as they police the stuff that flows through its servers. According to the paper, the leak came from an employee who said he “feared that the company was exercising too much power, with too little oversight – and making too many mistakes”.

An examination of the leaked files, says the NYT, “revealed numerous gaps, biases and outright errors. As Facebook employees grope for the right answers, they have allowed extremist language to flourish in some countries while censoring mainstream speech in others.” Moderators were instructed, for example, to remove fundraising appeals for volcano victims in Indonesia because a co-sponsor of the drive was on Facebook’s internal list of banned groups; a paperwork error allowed a prominent extremist group in Myanmar, accused of fomenting genocide, to stay on the platform for months. And there was lots more in this vein.

Some numbers might help to put this in context. Facebook currently has 2.27bn monthly active users worldwide. Every 60 seconds, 510,000 comments are posted, 293,000 statuses are updated and 136,000 photos are uploaded to the platform. Instagram, which allows users to edit and share photos as well as videos and is owned by Facebook, has more than 1bn monthly active users. WhatsApp, the encrypted messaging service that is also owned by Facebook, now has 1.5bn monthly active users, more than half of whom use it several times a day.

These figures give one a feel for the complexity and variety of the environment that Facebook is trying to deal with. In cybernetic terms, its approach to date has been to boost its internal capacity to handle the variety – the torrent of filth, hatred, violence, racism and terrorist content – that comes from its users and is funnelled through its servers. In the beginning, the CEO, Mark Zuckerberg, went for the standard Silicon Valley line that there is a tech solution for every problem – artificial intelligence (AI) would do the trick – although he had to concede that the technology was not sophisticated enough to do the job just yet.

As criticism mounted (and the German Bundestag began to legislate), the company went on a massive drive to recruit human moderators to police its pages. Facebook now employs 15,000 of these wretches, the cost of whom is beginning to eat into profit margins.

Many if not most of these moderators are poorly paid workers employed by external contractors in low-wage countries such as the Philippines. They have to implement – in split seconds – the confusing guidelines that were leaked to the NYT. One of the most useful aspects of the documents is the way they illustrate the impossibility of the task. The guidelines, says the paper, “do not look like a handbook for regulating global politics. They consist of dozens of unorganised PowerPoint presentations and Excel spreadsheets with bureaucratic titles like ‘Western Balkans Hate Orgs and Figures’ and ‘Credible Violence: Implementation standards’.”

[youtube]https://youtu.be/JA1DxRdT2hA[/youtube]

If you want to see what this kind of work involves, then a recent documentary, The Cleaners, filmed with the cooperation of Facebook moderators in Manila, makes sobering viewing. It shows that they have an impossible job and have to work under fierce time pressure to meet their employer's performance targets. Five seconds to make a judgment, thousands of times a day. And at the end of the shift, they go home, morally and physically exhausted.
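
To see just how lopsided the arithmetic is, here is a rough back-of-envelope sketch combining the figures quoted above. The eight-hour shift, and the assumption that every working second goes on moderation decisions, are mine, not the article's:

[code]
# Back-of-envelope only: combines figures quoted in the article with
# an ASSUMED eight-hour shift spent entirely on moderation decisions.

SECONDS_PER_DECISION = 5          # "Five seconds to make a judgment"
MODERATORS = 15_000               # moderators Facebook reportedly uses
SHIFT_SECONDS = 8 * 60 * 60       # assumed 8-hour shift = 28,800 s

decisions_per_moderator = SHIFT_SECONDS // SECONDS_PER_DECISION   # 5,760
daily_capacity = MODERATORS * decisions_per_moderator             # 86,400,000

COMMENTS_PER_MINUTE = 510_000     # comments alone, per the article
comments_per_day = COMMENTS_PER_MINUTE * 60 * 24                  # 734,400,000

print(f"Moderation capacity: ~{daily_capacity:,} decisions/day")
print(f"New comments alone:  ~{comments_per_day:,} items/day")
[/code]

Even under these generous assumptions, the human "regulator" can look at only around a tenth of the new comments, before a single photo or status update is counted.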

These are the people who process Facebook’s waste so that nothing unclean appears in the news feeds of more affluent users in other parts of the world. To anyone with a moral compass, the fact that humans should have to do this kind of work so that a small elite in Silicon Valley can become insanely rich is an outrage. To a cybernetician, though, it is merely confirmation that Facebook is no longer a viable system.
The journey, the challenge is to step into the
projection room and stop being lost in the script.