By Shaun Wakefield, Senior Security Consultant at Reliance Cyber
As a penetration tester, I am always keen to understand the applications and technologies disrupting the marketplace. It goes without saying that AI has been a topic of much discussion, particularly in recent months. With tools such as ChatGPT frequently appearing in the news, AI has truly captured the zeitgeist, and in doing so, my attention. The more I have learned about AI, the more astounded I have been at how quickly it has advanced, not simply entering common parlance but meshing into our work environments as a legitimate workplace tool.
This acceleration in capability and adoption has me thinking about the future of cyber security, and whether I should be concerned about the role I play.
What does Artificial Intelligence mean for my job and the future of cyber security?
The ‘natural’ next step was to pose this question to ChatGPT. Here’s what it had to say:
“While AI can be used to enhance cyber security and improve defences against cyberattacks, it can also present certain dangers. Here are some potential risks:
Adversarial attacks;
Privacy concerns;
Autonomous attacks;
Bias and discrimination;
Lack of transparency.
Overall, while AI can be a powerful tool for improving cyber security, it is important to be aware of the potential risks and take steps to mitigate them. This includes rigorous testing and validation of AI systems, monitoring for bias and discrimination, and ensuring transparency in decision-making processes.”
The response suggests that AI doesn’t have any specific plans to usurp my job. But it does have an idea of how the work of security teams may shift in the near future. In human language, we need to be prepared to address:
- Different forms of attacks
- How to use AI as an imperfect tool
- The expertise required to wield it
The big question remains: won’t artificial intelligence develop to deal with these issues itself, making my role as a cyber security professional redundant?
The AI expertise security teams need to develop
I have long thought that AI was at the level where it could reliably drive cars, notice dangers we are too preoccupied to spot, and take action to protect us in response. But while I was pondering the logistics of lorry drivers being quietly overtaken by driverless freight vehicles, AI was quietly making headway in cyber security.
So, will the AI revolution cost cyber security careers, or is it going to free us all in ways we can hardly imagine?
From where I stand, while I can see AI enhancing the role of the security consultant and the penetration tester, it cannot entirely replace them. Consider:
- The number of cyber security job vacancies actually increased by 350% between 2013 and 2021.[1]
- The jobs most likely to be replaced by automated processes are those consisting of ‘routine activities and physical labour’, with 25% of such jobs classed as at high risk.[2]
This will be the case for many other professions, including doctors, teachers, and lawyers. Overall, ONS statistics reveal that the number of UK jobs at ‘high risk’ of automation actually decreased between 2011 and 2017, as professionals learned how to integrate new tools and models into their workflows.[3] In a Statista survey from 2021, 35.9% of global respondents reported using high levels of automation in security operations and event/alert testing. That data is from well before the explosion in new large language models (LLMs) from all the major tech players, including Google Bard, Bing Chat and, soon, Meta’s LLaMA. I foresee a proliferation of fresh AI-derived products and services; however, many companies will stick solidly with the old ways of doing things, and they will likely be left behind.
Suggested reading: Our blog on privacy and AI outlines how you can use AI to manage data and remain compliant with data regulations
The human response: continually developing your team’s expertise
This means that AI-powered professionals are likely to become more specialised, rather than less. I believe the biggest change overall will be the additional ‘AI gap’ between manual processes that can be automated through AI and specialist skills that can be augmented by it. For cyber security, the advent and subsequent evolution of AI mean we will need to develop new capabilities to deal with new threats. Our experience demonstrates that gaining a full overview of your emerging cyber security landscape is essential.
Some of the potential problems ChatGPT outlined, which I referenced at the start of this article, relate to the malicious use of AI, including malware that is:
- better at evading security teams
- better at fooling people
- better at convincing you and me to part with our personal information and money.
Cyber security experts will need to develop good AI to fight the AI that discriminates, attacks autonomously, and breaches our privacy.
So, perhaps the bad news is that AI could infringe on your current job – but it also means that tomorrow’s cyber security jobs will be much more interesting.
Prepare for tomorrow’s work, today
At Reliance Cyber, we are experts in getting ahead of the curve. Talk to one of our team to discover how we can help you create a robust security stance now and for the future.
[1] Cybersecurity Jobs Report: 3.5 Million Openings In 2025
[2] Rise Of Robots – Jobs Lost to Automation Statistics in 2023
[3] Which occupations are at highest risk of being automated? – Office for National Statistics