The eternal dilemma of Facebook’s balancing act

Social network cannot please everyone with its approach to graphic content

Inside Facebook: Secrets of the Social Network uncovers the policies of Facebook’s content review department, based in Dublin. Video: Channel 4

The ability of people to express themselves on Facebook gives some the chance to share family pictures and their personal views. For others, it is an opportunity to post the most offensive, violent and abusive content imaginable. Facebook has to balance the two.

But only Facebook will really know whether it is getting the balance right, as it is the one with all the data. The public cannot see the complexity, or the scale, of the problem it faces.

It must also decide whether to act as an editor and remove content itself, or to wait for others to come to it and complain about material that has been posted. So far, it has favoured the latter.

"The money we make is by people using the service and seeing ads within their newsfeed," said Facebook's vice-president of global policy, Richard Allan, replying to a question from Channel 4's Dispatches, which investigated the company's conduct.


It’s an honest admission of how Facebook works, for those who didn’t know. But the question from presenter Krishnan Guru-Murthy was a good one. “You’re putting the onus on the victim here – to complain to you. Why aren’t you taking this content down?”

Replying, Allan highlighted the difficulty Facebook faces. If it takes material down – “even if the issue is painful” – then “people will say to us, look, Facebook, you should not interfere with my ability to highlight a problem that’s occurred.”

To some degree, therein lies Facebook’s dilemma.

Similar problems

Besides Facebook, Dublin is also home to Storyful, which often faces similar problems. When I worked at Storyful from 2010 to 2014, we had to watch and filter, in real time, content that was frequently graphic or violent.

But our mission was somewhat different. It was not to facilitate debate or stand over free speech rights. Instead, it was to ensure the content was real and showed what it claimed to show.

Storyful spent an enormous amount of time trawling social media’s darkest recesses, so I have some sympathy for Facebook. And one thing Facebook says is certainly true: getting the balance right is often difficult.

For example, Facebook groups related to the Syrian conflict were deleted en masse. Many provided valuable eyewitness testimony. Often they detailed crimes against humanity. Such testimony, possibly even evidence, is now lost to history.

How is a balance struck between graphic content depicting the deaths of innocent Syrians and the rights of users not to be exposed to such content? Is it even Facebook’s job to moderate or host it?

Problems

Facebook’s problems will not be solved by hiring more content moderators or building new AI (artificial intelligence) tools. Rather, the documentary highlighted a number of fundamental problems with Facebook itself.

First, the system can be gamed. If Allan did not spell it out, users have already figured it out: post a video containing violence and then praise that violence, and the post will eventually be removed by moderators.

But if a user or page posts the same video with the intent of gaining likes, views or popularity, and simply “condemns” it, then Facebook will take no action. How does Facebook define sarcastic condemnation?

Such “gaming” will be an intractable problem, unless Facebook changes its rules and prohibits violent content entirely.

Second, Facebook is likely to continue hiring third-party contractors to do the work. Arguably, this is an abrogation of responsibility: because the contractors are not part of Facebook itself, they are entirely dispensable.

In time, Facebook staff and contractors will “train” machines through what is known as supervised learning, in which software is taught to recognise such content automatically. Eventually, Facebook could dispense with humans altogether, even though machines get things wrong too.
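
For illustration only, here is a minimal sketch of what such a supervised-learning step might look like, assuming the scikit-learn Python library; the example posts, labels and model are hypothetical stand-ins for the judgments human moderators would actually supply, not a description of Facebook’s systems.

# Hedged sketch: a toy supervised classifier for flagging posts.
# Assumes scikit-learn; the posts and labels below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human moderators label past posts (1 = violates policy, 0 = acceptable).
posts = [
    "graphic footage of an assault, shared approvingly",
    "family photos from our holiday in Kerry",
    "watch this brutal fight, so funny",
    "eyewitness video documenting an air strike, posted to raise awareness",
]
labels = [1, 0, 1, 0]

# The model learns which word patterns correlate with the moderators' labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# New posts can then be scored automatically, without a human in the loop.
print(model.predict(["another brutal fight caught on camera"]))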

Third, and more fundamental: Facebook is a global company. Its users’ expectations vary from country to country. Indeed, the very definition of free speech varies. Facebook is trying to please everyone as it grows. But it cannot please all of the people, all of the time.

Squaring the circle

Is Facebook – structurally – capable of squaring this circle? Are its incentives permanently skewed not in favour of a healthy environment, but in favour of engaged or addicted users?

Can a platform that depends on user-generated content, and that depends, too, on users constantly looking at each other’s content, also act as a police force that decides what is, or is not, acceptable, particularly when certain “bad” content makes users highly engaged?

Are Facebook’s monetary and user-attention-led incentives aligned with the incentive to create the best environment – psychologically and emotionally – for its users? One would have to argue not.

Or, to put it another way, will Facebook ever encourage people to stop using the platform, to not share or comment on that violent video, to not be angry, to not spend hours posting about the issue du jour, and to delete its app?