In The Age Of AI, The Human Factor Still Matters For Cybersecurity
It’s no secret that both the public and private sectors are turning to AI and automation in their fight against cyber-attacks. However, while it’s important to use every tool in the arsenal to be effective, this strategy must not come at the expense of people – rather, it must be blended with them.
AI is beginning to alter many sectors. Although the pace of adoption may have slowed, businesses across all industries continue to take it up. And though some of its popularity may be the result of what Forrester refers to as “AI washing” – exaggerated claims around its potential and capability – there’s no denying the technology’s utility. The analyst firm also suggests, however, that 2019 will be the year when some of the hype wears off; when businesses finally realize what the technology can – and can’t – do.
The place for AI
Only those who have been hiding under a rock will be unaware of the growing number of credible sources urging organizations to invest in AI solutions from a cybersecurity standpoint. It is seen as a valuable tool for addressing the growing volume and complexity of attacks, which can overwhelm both people and technology.
Broadly, the suggestion is that AI should do the heavy lifting in the fight against cyber-attacks. The theory is that, by automating the more repetitive and mundane aspects of collecting and analyzing security data, organizations free their existing staff to focus on more important areas.
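This division of labor can be sketched in a few lines. The example below is purely illustrative – the rule names, event fields and severity threshold are all invented – but it captures the idea: automation closes out routine, known-benign noise, and everything else is escalated to a human analyst for judgment.

```python
# Hypothetical triage filter: automation handles repetitive noise,
# humans handle anything requiring judgment. Event types, fields and
# the severity threshold are invented for illustration.

KNOWN_BENIGN = {"heartbeat", "scheduled_backup", "patch_check"}

def triage(events):
    """Split raw security events into auto-closed noise and
    items escalated for human review."""
    auto_closed, escalated = [], []
    for event in events:
        if event["type"] in KNOWN_BENIGN and event["severity"] < 3:
            auto_closed.append(event)   # machine closes the ticket
        else:
            escalated.append(event)     # a person makes the final call
    return auto_closed, escalated

if __name__ == "__main__":
    sample = [
        {"type": "heartbeat", "severity": 1},
        {"type": "login_failure", "severity": 4},
        {"type": "scheduled_backup", "severity": 1},
    ]
    noise, for_humans = triage(sample)
    print(len(noise), len(for_humans))  # 2 1
```

The point of the design is the second list: nothing in it is discarded or auto-resolved, so the automation reduces volume without removing people from the loop.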
I wouldn’t argue with either of these points. AI is becoming increasingly sophisticated, and the operational efficiency benefits of automation have been widely proven. However, organizations need to be wary of over-reliance on automation. It is not a panacea, and security teams should still be present in front-line roles. Even Elon Musk has admitted humans are underrated, conceding in 2018 that excessive automation at Tesla had been a mistake.
The human touch
The importance of having highly trained people in the trenches should not be underestimated. Unlike AI solutions, whose outcomes are based on (admittedly huge) rule sets, people are capable of abstract thought. This is crucial when it comes to tackling cyber-attacks. After all, every attack still has a human point of origin, and even the most sophisticated AI and machine learning algorithms can’t truly understand – or hope to mimic – the chaotic and diverse nature of the human mind. Years in cyber skills training have taught me that the best talent is not necessarily the classically trained. Rather, it is people who bring characteristics such as stubbornness, creativity, abstraction and even downright surrealism to bear on their problem-solving. As Douglas Adams said, “Solutions nearly always come from the direction you least expect, so there’s no point looking there.” To lose this in a security team is to misunderstand your foes.
It’s worth considering, too, that people are responsible for implementing and training AI in the first place – itself a complex process. It is crucial this is done correctly, as it sets the tone for how the machine ‘thinks’ and what it identifies as friend or foe. For a period after any new solution has been deployed, a member of your security team must play parent, teaching it right from wrong. The consequences of getting this wrong are rather more serious than a meeting with the headteacher.
Moreover, any algorithm is capable of getting things wrong, precisely because algorithms are built and tested by humans. This opens them up to biases that only a person on the front line will be able to spot.
Taking a longer view
Ultimately, despite the hype and the ‘AI washing’, the technology is still nascent. In a world where every piece of code could be hiding a potential threat, a blended strategy that integrates human skills with automation provides broader coverage. Vendors have talked about layering technologies for years; the same logic applies to people and AI. At its current stage of development, ripping out people and replacing them with machines may fix technical vulnerabilities, but it risks leaving far bigger human ones open to exploitation.