AI is without doubt one of the hot topics of 2018. Not only in the business space – where the AI discussion is about automation, improved operations, personalisation and analytics – but also in the consumer world, where Amazon Alexa and Google Assistant are all the rage.
Most business leaders are still in the discovery stage of how to benefit from and use AI. By focusing on the problems to be solved, they can mostly leverage existing cloud and open source technology. But to really get an edge in AI, considerable investment is required in core research, education and research communities. Google is a perfect example of a company putting in the groundwork, as it launches new AI research centres around the world, the most recent being in France this year.
Thanks to the speed of innovation and its potential impact on society, AI is also drawing interest from governments keen to ensure it is deployed ethically and safely. Prime Minister Theresa May, for example, called for international co-operation at Davos. She intends for the UK to lead the charge on developing ethical rules for the use of technological breakthroughs such as AI, following the UK Autumn Budget in which a Centre for Data Ethics and Innovation was announced.
There is no doubt that the future for AI is vast. AI algorithms running at the edge of the network will mean machines can act independently of the humans controlling them, consequently saving businesses money and time. These machines will also generate a great deal of information that can provide insight into operations – from downtime to trends – all of which can help shape the products and services of the future.
But with this also come potential risks and threats, as computers make the wrong decisions and we lose control over the technology. It is therefore paramount that companies take notice of the likes of Bill Gates, Elon Musk and Theresa May, who are recognising potential issues with the technology. Security and privacy, for example, need to be built in from the word go to avoid endpoint devices being taken over by hackers.
The transition to a workforce utilising AI also needs to be carefully considered. While there is undoubtedly fear about robots taking over jobs and the resulting job losses, this doesn't need to be the case. The opportunity presented by AI should enhance jobs, not take them away, and it is this change that needs to be carefully managed and communicated by all those involved. Previous transitions, such as the industrial and information technology revolutions, generated more jobs than they took away.
AI will give people the power to move away from repetitive, less stimulating tasks and allow them to take on a more analytical, thoughtful role. The ability to use AI to automate, sift through data and draw out trends gives humans all the facts they need to reach meaningful insights and decisions on the data being collated – something they would have been unable to do whilst embedded in the granular detail of inputting numbers.
AI also has the potential to make society safer, thanks to developments such as autonomous cars and smarter cities. What is most important is that AI isn't feared. The scope of possibilities will change, as will the potential issues, but it is how these are embraced that will shape the future. In the right place, jobs can be enhanced, data can become more valuable, analytics more insightful and lives improved.
It is therefore encouraging that governments are getting involved early on to help manage the transition. Just like any change in technology or in market conditions, there will be a knock-on effect; it's about making sure that the positives outweigh the negatives.
Magnus Jern is chief innovation officer at DMI