I tackled this thorny topic along with Shahrokh Shahidzadeh, CEO of Acceptto; George McGregor, VP of Product Marketing at Imperva; and Chris Morales, Head of Security Analytics at Vectra, in a panel session held at the recent RSA Conference 2019.
We began by discussing the impact of AI and machine learning on cybercriminals’ arsenals, and what this is likely to look like over the coming year.
Chris underlined that the key factor is time. The core battle at the heart of cybersecurity is how long an attack takes to harvest critical information versus how long the targeted organization takes to detect that attack, and when we think about machine learning, this becomes a battle of automation. By using AI and machine learning to automate tedious or cumbersome tasks, both sides of that battle can achieve dramatic speed increases. George added that a major application of AI on the defensive side of cyber warfare is its ability to mitigate the huge skills shortage and the tsunami of alerts that most organizations are dealing with today.
I agreed with these points, emphasizing that the same skills shortages facing enterprises may also be felt by threat actors. The solution offered by machine learning and AI – automating and speeding up key tasks – applies to everyone. AI can automate exploits, control attacks and more.
We all agreed that the key challenge in this dynamic landscape is using AI to infer and predict threats, not merely mitigate them. How can organizations use AI and machine learning in more sophisticated ways than their attackers do – making threat detection actionable, not merely faster?
For me, whilst machine learning and AI are excellent threat detectors, in a threat response situation you still need core security controls in place to act on what AI alerts you to. Micro-segmentation of your infrastructure, for example, helps ensure that if and when something infiltrates your environment, you can restrict and shut it down. Integrations between AI solutions and SIEMs are essential, and because AI is not perfect – it will always produce some false positives – security personnel are still needed to ‘press the red button’ when a security incident occurs.
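To make this workflow concrete, here is a minimal, purely illustrative sketch – not any vendor's actual API – of how an AI detector's alerts might feed a human-in-the-loop response pipeline: low-confidence detections are logged to the SIEM for correlation, high-confidence ones are escalated to an analyst, and containment (modeled here as isolating a micro-segment) happens only after a human confirms. All names, thresholds, and return values are assumptions made up for this example.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_host: str
    score: float  # the AI detector's confidence in [0, 1]

def triage(alert: Alert, threshold: float = 0.8) -> str:
    """Route an AI-generated alert. High-confidence detections are queued
    for a human analyst; everything else goes to the SIEM for correlation."""
    if alert.score >= threshold:
        return "escalate_to_analyst"  # a person still 'presses the red button'
    return "log_to_siem"

def contain(alert: Alert, analyst_confirmed: bool) -> str:
    """Containment via micro-segmentation: isolate the affected host's
    segment only once an analyst has confirmed the incident, which limits
    the blast radius of false positives."""
    if analyst_confirmed:
        return f"isolate-segment:{alert.source_host}"
    return "no-action"
```

The point of the structure is that the model never acts autonomously: its output only changes how quickly a human sees the alert, and enforcement still rests on pre-existing network controls.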
George and Chris further explored this emphasis on having the right security personnel in place, discussing their experiences with hiring data scientists, particularly from disciplines beyond traditional computing and IT. Whilst data scientists are crucial for developing the algorithms which underpin AI and machine learning solutions, security specialists are still integral in ensuring that those data scientists are asking the right questions. Data scientists turn questions into math problems – but security researchers guide those scientists to the right questions. As I pointed out, misquoting Lewis Carroll’s Cheshire Cat, if you don’t know where you need to go, then it doesn’t matter which route you take.
Our discussion concluded with the closing thought that whilst AI and machine learning are dramatically increasing the pace and scale of cyber warfare, and are crucial tools in defending against cyberattacks, they still need to be augmented with traditional security technologies and highly skilled personnel in order to be useful. I described how I teach my cybersecurity students to apply adversarial thinking, and to give security policy the importance and focus it deserves. AI and machine learning are excellent tools for introducing elements like behavioral analytics and automated malware detection into the security mix, but they still need to be underpinned by robust security policy and proper network controls that enforce that policy – and you still need a skilled individual to write that policy.
To learn more about the future of cyber warfare, how it impacts enterprise security and lessons and recommendations to make your organization more resilient, watch the full panel session.