The buzz around artificial intelligence (AI) won’t be dying down any time soon.
It’s a term which can mean different things to different people. In business, however, it refers to the emergence of self-teaching algorithms capable of learning from the ever-expanding universe of digital “big data” generated in our work and personal lives.
This convergence of data and ever-more powerful tools to analyse it, learn from it and base predictions on it is no hype-fuelled pipe dream – the revolution is well underway. In fact, Gartner analysts predict that by 2021 AI will free up 6.2 billion hours of workers’ time, while generating $2.9 trillion in business growth.
Unfortunately, the opportunities it offers aren't only being leveraged by those with good intentions. Hackers, scammers, phishers, and fraudsters of all flavours could seize the opportunity to put advanced AI algorithms to work for nefarious ends. Doesn’t it seem increasingly common to wake up to the news that yet another multinational organization, handling sensitive data of millions of its customers, has fallen victim to this high-tech cybercrime?
From predicting passwords without resorting to “brute force” methods of trying every possible combination of letters and numbers, to circumventing anti-fraud and even anti-malware measures put in place by financial institutions and software vendors, self-learning algorithms are a new security threat.
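The "prediction instead of brute force" idea can be sketched in a few lines. The toy model below is purely illustrative (the training list, the bigram approach and the scoring floor are all assumptions for the example, not a description of any real attack tool): a character-bigram model trained on common passwords scores how "guessable" a candidate looks, so likely passwords can be ranked without trying every combination.

```python
from collections import defaultdict
import math

def train_bigram_model(passwords):
    # Count character-to-character transitions in the training passwords,
    # then normalise the counts into per-character probabilities.
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        chars = "^" + pw  # "^" marks the start of a password
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def log_likelihood(model, pw, floor=1e-6):
    # Higher (less negative) = more closely resembles the training data.
    chars = "^" + pw
    return sum(math.log(model.get(a, {}).get(b, floor))
               for a, b in zip(chars, chars[1:]))

# Hypothetical "leaked passwords" training set, invented for this sketch.
common = ["password1", "password123", "letmein", "qwerty123", "passw0rd"]
model = train_bigram_model(common)

# A predictable variant scores far higher than a random string.
print(log_likelihood(model, "password9") > log_likelihood(model, "zx!qv#8k"))
```

Defenders can run exactly the same model in reverse, rejecting new passwords that score as too predictable.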
The flip-side is that those whose job it is to protect us from these threats – cyber security engineers – have the same technologies at their disposal. This has led to a new chapter in the “arms race” which has taken place since the dawn of the computer age, as both sides race to outwit each other using the latest technology available.
Now, however, there’s one big difference – it isn’t just the wits of the engineers on either side of the divide that are pitted against each other, but the wits of the competing AI systems themselves.
In fact, it has got to the point that AI is a necessity when it comes to cybersecurity. When protecting cloud-based systems of the scale deployed by financial, healthcare and industrial enterprises today, humans alone can’t hope to stay on top of every possible threat and attack vector.
This data-driven, self-teaching model of AI is often referred to as “machine learning” – a technique for building “intelligent” algorithms that was first developed in the last century. However, it’s only in the last decade or so that it has become truly useful to businesses, thanks to the critical mass of data and processing power now available.
In particular, the development of deep learning, using algorithmic structures known as deep neural networks, means computers are capable of learning faster than ever, and putting what they learn to work, essentially to create computer programs themselves.
We can see examples of this new technology in operation when we look at one of the most traditional applications of cybersecurity – the good old fashioned password. Since we started using networked computers for day-to-day business, the conventional method of defence has been the username and password combination. In the age of AI, this looks woefully inadequate, as hackers can spoof identities through AI in the digital world and employ social engineering (tricking users into divulging details, e.g., through "phishing" attacks) in the real world.
On the security side of the fence, this has given rise to a host of counter-measures, including two-factor authentication (2FA), biometric locks (fingerprint or facial recognition) and location or behavioural checks. AI comes into the equation when machine learning is used to “join the dots” of this multi-factor authentication (MFA) and accurately predict whether a user requesting access to a system has a right to do so.
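What “joining the dots” might look like can be sketched with a logistic combination of authentication signals. Everything here is invented for illustration – the signal names, the weights and the bias are assumptions; in a real system the weights would be learned from historical login data rather than hand-picked:

```python
import math

# Illustrative weights for each authentication signal (assumed, not fitted).
WEIGHTS = {
    "password_correct": 2.0,
    "known_device": 1.5,
    "usual_location": 1.0,
    "typing_rhythm_match": 1.2,
}
BIAS = -3.0  # deny by default unless enough signals agree

def access_probability(signals):
    # Logistic combination of signals -> estimated P(legitimate user).
    z = BIAS + sum(WEIGHTS[k] * v for k, v in signals.items())
    return 1 / (1 + math.exp(-z))

legit = {"password_correct": 1, "known_device": 1,
         "usual_location": 1, "typing_rhythm_match": 1}
suspect = {"password_correct": 1, "known_device": 0,
           "usual_location": 0, "typing_rhythm_match": 0}

print(round(access_probability(legit), 2))    # all signals agree -> high
print(round(access_probability(suspect), 2))  # correct password alone -> low
```

Note that a correct password on its own is not enough to clear the bar – which is precisely the point of multi-factor authentication.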
AI-powered MFA means that the increasingly valuable data we all have spread across clouds operated by a multitude of organizations is statistically more likely to be safe. It's impossible to say it's ever 100% safe, of course, but the probability of it being stolen decreases as the layers of authentication increase.
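A back-of-envelope calculation shows why the layers compound. Assuming (optimistically) that each factor is bypassed independently – a simplification, since real attacks can correlate – the breach probabilities simply multiply. The numbers below are illustrative, not measured figures:

```python
def breach_probability(bypass_probs):
    # With independent layers, an attacker must bypass every one,
    # so the overall probability is the product of the per-layer odds.
    p = 1.0
    for q in bypass_probs:
        p *= q
    return p

print(breach_probability([0.1]))              # password alone
print(breach_probability([0.1, 0.05]))        # + one-time code
print(breach_probability([0.1, 0.05, 0.02]))  # + biometric check
```

Each added factor cuts the overall breach probability by an order of magnitude or more in this toy scenario.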
Another facet of today’s technological revolution which, while offering incredible opportunities, also exposes us to all sorts of new threats, is the Internet of Things (IoT).
In the past, we would traditionally only have to worry about hackers getting into our computers. In today's world of networked "things" – from phones to smartwatches to cars, kitchen appliances, and industrial machinery – the number of attack vectors available to hackers has increased exponentially.
The typical large organization is now reliant on a large number of technology providers and partners. This not only means we must put our faith in the ability of their defences to keep our data safe. It also means we have to consider how our systems could be targeted by malicious actors seeking access to our partners' data – effectively turning us into attack vectors. In the IoT age, the security of a system is only ever as strong as its weakest link, and if that turns out to be us, we make ourselves liable for damages caused by data loss or theft anywhere along the chain.
Once again this is where AI has a role to play in cybersecurity – joining the dots between disparate, cloud-based systems which could involve servers spread across countries, continents or the entire world – to understand where leaks are likely to occur and where attackers are likely to strike.
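At its simplest, spotting where attackers are likely to strike means flagging systems whose behaviour deviates sharply from the fleet-wide baseline. The sketch below uses a plain z-score on request rates – a real deployment would use richer features and a trained model, and the server names and traffic figures are invented for the example:

```python
import statistics

def flag_anomalies(requests_per_server, threshold=1.5):
    # Flag any server whose request rate sits more than `threshold`
    # standard deviations away from the fleet-wide mean.
    rates = list(requests_per_server.values())
    mean = statistics.mean(rates)
    stdev = statistics.pstdev(rates)
    if stdev == 0:
        return []
    return [srv for srv, r in requests_per_server.items()
            if abs(r - mean) / stdev > threshold]

# Hypothetical fleet spread across regions; "ap-1" is behaving oddly.
fleet = {"eu-1": 102, "eu-2": 98, "us-1": 101, "us-2": 99, "ap-1": 640}
print(flag_anomalies(fleet))  # -> ['ap-1']
```

The same principle scales up: whether the "servers" sit in one data centre or across continents, the model's job is to learn what normal looks like and surface the outliers for investigation.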
AI, IoT, cloud computing and Big Data are often thought of or discussed as separate facets of the "fourth industrial revolution" we find ourselves in the middle of. In reality, when it comes to cybersecurity, they are all parts of the same puzzle. This means the interactions between them need to be carefully considered as part of the solution.
When considering cybersecurity strategies we need to look not just at our own data and systems, but those of our partners – clients whose data we work with, and those who provide the tools and technology we employ day-to-day.