Ever since OpenAI released ChatGPT in 2022, it seems like you can’t go online without hearing about a new advancement in AI. It hardly matters that, in one form or another, AI technology has been around for years. After being thrust into the mainstream, AI is suddenly everywhere. It’s taken over customer service. It’s conducting job interviews. It’s generating images and writing a seemingly endless share of the content on the internet.
So it’s only natural to wonder how AI is transforming the cybersecurity industry. Adaptation has long been a part of the cybersecurity profession, whether adjusting to the rise of cloud computing or learning how to counter threats like social engineering attacks. However, the speed at which AI is evolving is enough to create worry. Is this finally a technology we won’t be able to get ahead of?
To find out, I talked with one of our own Cybrary mentors, Rob Goelz. Here’s what he said.
AI has actually been a part of cybersecurity for years.
For anyone who’s been paying attention, the only thing that’s new about AI is all the hype. Goelz made the point that this technology has long been an integral part of how cybersecurity is conducted: “It just wasn’t always called AI. It’s pattern recognition, heuristics, whatever you want to call it.” Machine learning is another common term you may know. All of these methods are what we now generally refer to as AI.
Goelz pointed to CrowdStrike as an example. Long before anyone was talking about an AI takeover, the company was using machine learning to analyze its customers’ network traffic. By learning what normal day-to-day traffic looks like in an environment, AI can identify deviations and flag them for review. “They’re looking for things like, why did this person open 30,000 files?” explained Goelz. “That’s not what they do every day.” By training its machine learning model to recognize this pattern, CrowdStrike can automatically mark the event as a likely ransomware attack, all without human intervention.
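To make the idea concrete, here’s a minimal sketch of that baseline-and-deviation approach. It is not CrowdStrike’s actual model, just a toy statistical threshold on per-user file-access counts, with hypothetical names and numbers:

```python
from statistics import mean, stdev

def flag_anomalies(baseline_counts, todays_counts, z_threshold=3.0):
    """Flag users whose file-access count today deviates sharply from
    their historical baseline (a stand-in for a learned model)."""
    flagged = []
    for user, today in todays_counts.items():
        history = baseline_counts.get(user, [])
        if len(history) < 2:
            continue  # not enough history to know what "normal" looks like
        mu, sigma = mean(history), stdev(history)
        sigma = sigma or 1.0  # avoid dividing by zero on a perfectly flat history
        z_score = (today - mu) / sigma
        if z_score > z_threshold:
            flagged.append((user, today, round(z_score, 1)))
    return flagged

# Hypothetical data: a user who normally touches ~200 files suddenly opens 30,000.
baseline = {"alice": [180, 210, 195, 205], "bob": [400, 390, 410, 405]}
today = {"alice": 30_000, "bob": 402}
print(flag_anomalies(baseline, today))  # flags alice for review as a likely ransomware indicator
```

A real product learns far richer features than a single count, but the logic is the same: model “normal,” then surface what isn’t.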
AI can be even better than the rubber duck on your desk.
We’ve all heard of rubber duck debugging, right? (Or my preferred term: rubberducking.) It’s when you use a rubber duck (or whatever inanimate object or even animal of your choosing) to articulate a problem you haven’t been able to solve. The idea is that, by forcing yourself to explain the code and why it isn’t working, you’ll be able to discover a different perspective that will lead you to a solution.
Goelz made the point that AI can now do all of this, with one notable difference: “The nice thing about AI is, unlike the rubber duck on your desk, it will answer you.” And while there are still legitimate doubts about the accuracy of the content that LLMs like ChatGPT generate, AI has proven particularly helpful for analyzing and spotting issues in basic code. Plug in something you’ve been working on and it can tell you if you’re missing a loop, for example, or if you need to add a conditional check before entering a code block.
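For a contrived illustration of the kind of miss Goelz is describing, here is a small Python example (the data and function names are made up): the first version assumes every record is well formed, and the second adds the conditional check an assistant would typically suggest.

```python
# Before: the loop assumes every record has a non-empty "user" value,
# so one malformed record crashes the whole report.
def summarize(records):
    names = []
    for record in records:
        names.append(record["user"].strip().title())
    return names

# After: a guard clause before entering the block, so bad records are
# skipped instead of fatal -- the sort of fix an AI assistant points out.
def summarize_safe(records):
    names = []
    for record in records:
        user = record.get("user")
        if not user:  # the conditional check the first version was missing
            continue
        names.append(user.strip().title())
    return names

print(summarize_safe([{"user": "ada lovelace"}, {"id": 7}, {"user": ""}]))
# ['Ada Lovelace']
```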
This kind of service can be a wonderful boon to cybersecurity beginners and experts alike. For those just starting out, using AI this way can accelerate your learning pathway by pointing out common mistakes and helping you be more proactive about fixing them. As for more seasoned professionals, Goelz reminded me that, “Even if you're a really seasoned programmer and you write like, you know, red green testing, unit tests, all that other stuff, you still miss stuff.” AI can simply be another kind of rubber duck on your desk.
AI will help make us more secure.
Although there has been a lot of fearmongering about how AI could be used to launch more sophisticated attacks and complex threats, Goelz expressed confidence in just the opposite. Building on the points above, he said, “What’s going to happen is we're going to start seeing things get more secure.”
This largely has to do with the concept of shift left, which moves testing and debugging away from production and closer to development. In other words, by employing AI tools to detect issues earlier in the lifecycle, you’re better positioned to avoid many of the vulnerabilities that would otherwise leak through to the final product. “I’ve seen this in the application security world, where we're embedding code scanning inside of the development environment,” said Goelz. “Then people go and load up a bunch of dependencies in their programming and it goes, ‘uh-oh, that dependency is going to get you in trouble.’”
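Here’s a toy sketch of what that kind of shift-left dependency gate might look like: it checks a project’s pinned dependencies against an advisory list and fails the build on a match. The packages and advisories below are hypothetical; real tools such as pip-audit, Dependabot, or Snyk do the same thing against live vulnerability databases, and the same check can live in an IDE plugin, which is the “inside the development environment” part Goelz describes.

```python
import sys

# Hypothetical advisories keyed by (package, version) -- illustration only.
ADVISORIES = {
    ("leftpadx", "1.0.0"): "known malicious release (hypothetical example)",
    ("oldcrypto", "0.9.2"): "weak default cipher (hypothetical example)",
}

def parse_requirements(lines):
    """Yield (name, version) pairs from simple 'name==version' pins."""
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if "==" in line:
            name, version = line.split("==", 1)
            yield name.strip().lower(), version.strip()

def check(lines):
    findings = [(pkg, ver, ADVISORIES[(pkg, ver)])
                for pkg, ver in parse_requirements(lines)
                if (pkg, ver) in ADVISORIES]
    for pkg, ver, note in findings:
        print(f"uh-oh: {pkg}=={ver} -> {note}")
    return 1 if findings else 0  # nonzero exit code fails the pipeline

if __name__ == "__main__":
    sample = ["requests==2.32.0", "leftpadx==1.0.0  # added last week"]
    sys.exit(check(sample))
```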
Sophisticated cyberattacks may get a lot of attention, said Goelz, but the vast majority of what we deal with every day are the most common issues, like buffer overflows and insecure deserialization attacks. AI gives us a relatively straightforward way to prevent these kinds of attacks and, ultimately, strengthen our entire security approach.
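As a quick illustration of how routine these issues are, here is a classic Python deserialization pitfall next to its safer counterpart, the kind of finding automated scanning flags constantly (the function names and payload are made up):

```python
import json
import pickle

def load_profile_unsafe(blob: bytes):
    # Unpickling untrusted bytes can execute arbitrary code on load:
    # a textbook insecure-deserialization finding.
    return pickle.loads(blob)

def load_profile_safe(blob: bytes):
    # JSON only reconstructs plain data (dicts, lists, strings, numbers),
    # so parsing it cannot run attacker-supplied code.
    return json.loads(blob.decode("utf-8"))

print(load_profile_safe(b'{"user": "alice", "role": "analyst"}'))
```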
AI won’t replace cybersecurity professionals. Probably ever.
If you’ve been wondering when the day will come when a computer can do your job, don’t worry. AI may be here to make your job easier, but Goelz said it won’t ever be able to fully take over what you do. The main reason comes down to this: for all the technology’s strengths, Goelz doesn’t see AI as creative.
“There's too much stuff that needs human interaction and intervention,” he said. “The reality of it is you're never going to be replaced by AI because AI is only going to be as good today as the dataset that it's trained on.”
So while AI may be wonderful at pulling out patterns or identifying errors, its lack of creativity makes it far less capable of considering, or even recognizing, edge cases: those much less common but still possible security scenarios that all good professionals need to be able to detect. And if a vulnerability exists in those spaces, someone will eventually find it. Our capacity for creativity makes us much better suited to that task.
There’s also the fact that so much of a cybersecurity job isn't actually done in front of a computer. Human interaction is as much a part of the profession as knowing how to set up a proper firewall or defend against a DDoS attack. Convincing everyone on your network to follow the security protocols you’ve put in place, or to turn on two-factor authentication, is something only an actual human will ever be able to do.
AI is here to stay. And that’s okay.
Without a doubt, we’re living in an exciting time for technology. While much of the talk around AI may be hype, a lot of it has potentially transformative implications. The very fact that so many people are now familiar with AI and actively using it in their daily lives says a lot about how far the technology has come. Outside of the experts and science fiction writers, who could have predicted a decade ago that AI would have advanced as far as it has today?
That said, a healthy dose of reality should accompany the excitement. AI is not something to be feared or avoided. Used as an assistive tool, it can help eliminate repetitive tasks, identify common errors and issues, and strengthen our overall security strategy. And that is truly something you can get excited about.