Is AI capable of being a moral compass? Facebook’s anti-terror dreams are fantasy

In his manifesto, Mark Zuckerberg declared that Facebook is training AI to spot harmful content and take it down from the site. Orwellian tone aside, for many the announcement raises an important question: can AI ever be trained to act as a moral compass?

Zuckerberg's manifesto, Building Global Community, suggests a variety of ways Facebook could use AI to monitor content. These range from traditional machine learning techniques that catch hoax articles to a more revolutionary (and not yet achieved) advance in AI research – but where does the all-important moral compass fit in?

The revolutionary research Zuckerberg speaks of might take the form of an AI capable of evaluating a piece of content's topic, perspective and sentiment with superhuman ability. Recognising the enormous gap between these two solutions forces us to clarify what we mean when we talk about artificial intelligence.

Understanding a piece of content – its perspective on a particular topic, whether it's satire or genuine opinion, and the degree to which it might offend – requires at least human-like levels of comprehension. Creating a complete "superhuman" AI program capable of grasping such a broad range of nuances is unlikely to be what Zuckerberg had in mind; nor is it needed for Facebook to better understand online content.

With a robust set of features characterising each piece of content, Facebook could use an algorithm to cluster content into topics and flag potentially dangerous material. Crucially, the rules for classification would not be explicitly programmed. Instead, the algorithm would be given examples of dangerous and non-dangerous content and would uncover the relevant relationships itself, as in the sketch below. This would be no easy task.
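A minimal sketch of that learn-from-examples approach, assuming a tiny hand-labelled dataset and the scikit-learn library – the example texts, labels and model choice are illustrative, not anything from Facebook's actual systems:

```python
# Toy supervised text classifier: the classification rules are learned
# from labelled examples rather than explicitly programmed.
# All texts and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Join us and take up arms against the unbelievers",    # dangerous
    "Reuters reports on yesterday's attack in the city",   # safe: news
    "Step-by-step guide to building an explosive device",  # dangerous
    "Opinion: why the government's policy will fail",      # safe: opinion
]
labels = [1, 0, 1, 0]  # 1 = dangerous, 0 = not dangerous

# TF-IDF turns each text into a feature vector; logistic regression
# learns which word patterns correlate with the "dangerous" label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["News analysis of a recent terror plot"]))
```

A real system would need vastly more examples and far richer features, but the principle is the same: the algorithm infers the boundary between categories from data rather than from hand-written rules.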

This leads us to question how Facebook will decide which content is deemed within limits when the markers of mankind's moral compass are constantly evolving. The algorithm would need to uncover nuanced semantic differences between news stories about terrorism and terrorist propaganda itself. It would need to distinguish graphic content from war photography. These challenges are particularly difficult given the scale at which Facebook operates.

It is this scale that makes using AI to review content necessary, and it also means any machine learning solution Facebook deploys demands a high degree of accuracy – with the moral compass factor among the challenges. A solution with even 99.9 per cent accuracy leaves about 2m of Facebook's nearly 2bn users negatively affected. The level of accuracy required, and the cost of misclassifying dangerous content, distinguishes Facebook's goal from the work of other brands developing artificial intelligence.
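The arithmetic behind that figure, as a quick sketch – it assumes, simplistically, that each user is touched by a single classification decision, which real-world exposure would not respect:

```python
# Back-of-the-envelope: how many users a given error rate touches,
# using the article's approximate figure of 2bn users.
users = 2_000_000_000

for accuracy in (0.99, 0.999, 0.9999):
    misclassified = users * (1 - accuracy)
    print(f"{accuracy:.2%} accurate -> ~{misclassified:,.0f} users affected")
```

At 99.9 per cent accuracy this yields roughly 2,000,000 affected users, matching the article's estimate; even an extra nine still leaves around 200,000.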

In search marketing, machine learning is already essential: it classifies search keywords into product categories to give a better understanding of clients' opportunities online. As brands continue to develop these artificial intelligence solutions, we will see increased reliance on machine learning to operate at greater scale.

While human-level comprehension of content likely amounts to an artificial general intelligence problem, that is not what Facebook needs to monitor content more effectively. We have seen how further development of existing artificial intelligence research offers a clear path to this goal, and some of the challenges Facebook must overcome along the way.

Josh Carty is a media executive at iProspect
