Business Technology

How algorithms are now ruling our lives

30 June 2017

We’ve all heard the terrifying statistics around workforce automation and how our jobs are at risk of being taken over by robots and algorithms. In fact, we already see it happening with ATMs, self-checkout counters and online gambling.

AI is extremely powerful. Its algorithms and statistical processes allow for unprecedented accuracy, often producing results more reliable than human judgment alone.

Take Tinder as an example; the app that revolutionised the world of dating by simply removing the middleman: the “introducer”. In the same breath, Uber has replaced hailing black cabs, mobile banking threatens cashiers and apps like Deliveroo and Just Eat have changed the way we order food.

Taking the luck out of gambling

With AI, it has become possible to remove a significant chunk of the “luck” element of gambling. For example, imagine if statistical processes and algorithms were used to predict the outcome of sporting events, such as football and horse racing. Bettors would be able to make decisions by factoring in various statistics. This is a unique approach, and one where man and machine are working together to predict the outcome of a sporting event.

As gambling is an industry that is centred around probability and statistics, it makes perfect sense to integrate AI into predicting whether there will be over or under 3.5 goals in a football match.
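As an illustration of the kind of model this implies (the article doesn't specify one, so this is a standard statistical sketch with invented scoring rates): treat a match's total goals as Poisson-distributed and compute the probability of clearing the over/under line.

```python
import math

def prob_over(total_rate: float, line: float = 3.5) -> float:
    """P(total goals > line) when total goals ~ Poisson(total_rate)."""
    k_max = math.floor(line)  # e.g. 3 for an over-3.5 line
    p_at_most = sum(
        math.exp(-total_rate) * total_rate**k / math.factorial(k)
        for k in range(k_max + 1)
    )
    return 1.0 - p_at_most

# Illustrative rates only: home side averages 1.6 goals a game, away 1.1.
p = prob_over(1.6 + 1.1)
print(f"P(over 3.5 goals) = {p:.3f}")
```

A bettor would then compare this probability against the bookmaker's implied odds; the "man and machine" element comes in when choosing and adjusting the scoring rates.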

Thanks to these applications, we have become a generation that simply runs its life through a device; in fact, we have never been more independent. The immediacy and accessibility promised by technology mean that the middleman has been stripped back time and time again.

This is where Big Data comes in

Big Data has generated new tools and ideas on an unprecedented scale, with applications spreading from marketing to human resources, to university applications, insurance and much more. At the same time, Big Data has opened opportunities for a whole new class of professional manipulators, who take advantage of people using the power of statistics.

Algorithms run everything from taxis, to advertising, to who we end up going on a date with. They’re used to sift through CVs, check our credit and decide whether we’ll get insurance. In a nutshell, Big Data has turned information into power.

This data helps build tailor-made profiles that can be used for or against someone in a given situation. Insurance companies, which historically sold car insurance based on driving records, have more recently started using such data-driven profiling methods. In fact, it’s been suggested that some insurance companies are charging people with low credit scores and good driving records more than people with high credit scores and a drunk driving conviction.

It’s become standard practice for insurance companies to charge people not according to the risk they represent, but according to what the company can get away with. The victims, of course, are those least able to afford the extra cost, yet who need a car to get to work. Algorithms aren’t inherently evil: they’re tools used to simplify decisions, increase efficiency and offer convenience. But when they’re locked away in a black box, we can’t understand how they work, or even whether they work at all.

Consider online recruitment: it saves companies large sums on human resources hires, but the systems involved are almost entirely opaque. In other words, the process is treated as a money-saving black box, and it’s not clear what that black box is actually doing. Even so, more and more employers are turning to these modelled ways of sifting through job applications. Even when wrong, their verdicts seem beyond dispute.


Finding work used to be largely a question of whom you knew. For decades, that was how people got a foot in the door. Candidates then usually faced an interview, where a manager would try to get a feel for them. All too often this translated into a single basic judgment: is this person like me (or others I get along with)? The result was a lack of opportunity for job seekers, especially if they came from a different race, ethnic group, or religion. Women also found themselves excluded by this insider game.

Today, human resources managers rely on data-driven algorithms to help with hiring decisions and to navigate a vast pool of potential job candidates. These software systems can in some cases be so efficient at screening resumes and ‘evaluating’ people that over 70 per cent of CVs are weeded out before a human ever sees them. But there are drawbacks to this level of efficiency. Man-made algorithms are fallible and may inadvertently reinforce discrimination in hiring practices.
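A minimal sketch of how such a screening step might work — the keywords, weights and cut-off below are entirely hypothetical, not taken from any real system — is a weighted keyword filter:

```python
def score_cv(cv_text: str, keywords: dict) -> int:
    """Crude screen: add up the weights of each keyword found in the CV."""
    text = cv_text.lower()
    return sum(weight for kw, weight in keywords.items() if kw in text)

# Hypothetical weights a recruiter might configure.
keywords = {"python": 3, "sql": 2, "degree": 1}

cvs = [
    "BSc degree, five years of Python and SQL experience",
    "Ten years managing retail teams",
    "Self-taught Python developer",
]

# Only CVs above an arbitrary cut-off ever reach a human reviewer.
shortlist = [cv for cv in cvs if score_cv(cv, keywords) >= 4]
```

The efficiency and the fallibility are two sides of the same mechanism: a strong candidate who happens to phrase things differently scores low and is never seen by a person.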

Any HR manager using such a system needs to be aware of its limitations and have a plan for dealing with them, because, in effect, algorithms are, in part, our opinions embedded in code. They reflect human biases and prejudices, which lead to machine-learning mistakes and misinterpretations.

This bias shows up in numerous aspects of our lives, including algorithms used for electronic discovery, teacher evaluations, car insurance, credit score rankings, and university admissions. At their core, algorithms mimic human decision making. They are typically trained to learn from past successes, which may embed existing bias.
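To see how training on past successes can embed bias, consider a deliberately simplified sketch (the data and groupings are invented for illustration): a "model" that simply learns historical hire rates per group will faithfully reproduce whatever preferences past managers had.

```python
from collections import defaultdict

def fit_hire_rates(history):
    """Learn P(hired | school) from past decisions -- whatever bias those
    decisions contained now becomes part of the model."""
    counts = defaultdict(lambda: [0, 0])  # school -> [hired, total]
    for school, hired in history:
        counts[school][0] += int(hired)
        counts[school][1] += 1
    return {school: h / n for school, (h, n) in counts.items()}

# Hypothetical history in which past managers favoured school A.
history = ([("A", True)] * 8 + [("A", False)] * 2 +
           [("B", True)] * 3 + [("B", False)] * 7)

rates = fit_hire_rates(history)
# The model now "predicts" that school A candidates are better bets,
# purely because past humans hired them more often.
```

Nothing in the code is malicious; the bias arrives entirely through the training data, which is the point the passage above is making.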

Yet it was all created with the best intentions. Having computer programs sift through thousands of CVs or loan applications in a second or two, putting the most promising candidates at the top, not only saved time but was also held up as a fair and objective way of doing things. After all, it didn’t involve prejudiced humans digging through reams of paper, just machines processing cold numbers.

As this trend quickly became the norm, mathematics was asserting itself as never before in human affairs, and the public largely welcomed it. The goal was to replace subjective judgments with objective measurements in any number of fields – whether it was a way to locate the worst performing teachers in a school or to estimate the chances of a person staying in his job for more than a year.

But although their popularity rests on the notion that they are objective, the algorithms that power the data economy are based on choices made by fallible human beings. And while most of those choices are made with good intentions, the algorithms encode human prejudice, misunderstanding and bias into the automated systems that increasingly manage our lives. As mentioned before, these mathematical models are opaque, their workings invisible to all but the high priests of their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, are beyond dispute or appeal.

As a result, there are enormous opportunities for manipulation in big data, and there is something to be said for remaining a little skeptical and vigilant about the process. Biases and prejudices will most likely continue to play a role in recruitment, whether the methods are algorithmic or human. But as big data, machine-learning algorithms and people analytics take on a larger and more influential role in recruiting, it is reasonable to ask how far we can really rely on the technology.

It may be possible to predict what personal attributes would be required for success in a role but can we really write an algorithm that can determine the potential to succeed in a future we don’t yet understand? Should we continue to rely on the human approach, riddled with bias and a poor record of decision making? Perhaps the answer sits in the middle.

Written by Alex Kostin at