How AI-driven Threat Detection is Reshaping Threat Intelligence
Aamir Lakhani, the Global Director of Threat Intelligence & Adversarial AI Research at Fortinet, joins Will Baxter for this week’s Future of Threat Intelligence podcast. Fortinet processes telemetry from 50% of the next-generation firewall market, giving Lakhani and his team unique insights into the looming shifts in the cybercrime landscape.
In a wide-ranging conversation, Lakhani explains how AI is changing the cybercrime landscape, where security teams should focus, and what he believes the future of threat intelligence will look like.
How is AI changing the cybercrime landscape?
The cybercrime landscape is broad, ranging from low-sophistication attackers who use pre-packaged tools to advanced persistent threat (APT) groups. This range in sophistication meant that certain groups were significantly more dangerous than others, so organizations could tailor their defensive strategies accordingly.
However, AI acts as a leveler. Even low-skilled attackers can leverage AI to upskill their operations, and this upskilling has changed the way attackers use AI.
“We first started seeing genAI just a few years ago being used just to help with phishing emails,” Lakhani says. “Now we're seeing things like code being rewritten, like old attacks that are being kind of repackaged and rewritten. We're seeing more automation and attacks as well.”
This type of change puts the onus on defenders, as the entire attack surface shifts and grows. For instance, Lakhani notes that a comparatively small number of CVEs is currently used in breaches each year, which allows defenders to stay on top of threats. But AI enables more aggressive, efficient, and fast attacks across a wider range of vulnerabilities, making it more difficult for defenders to keep up.
Additionally, attackers are either using jailbroken large language models (LLMs) like WormGPT or FraudGPT, or they are running and training their own LLMs locally. This shift requires a fair amount of sophistication and investment, but Lakhani believes it is already happening among state-sponsored groups and organized cybercrime groups.
The use of jailbroken or in-house trained LLMs greatly increases attackers’ speed and sophistication, shortening the overall attack life cycle. Lakhani estimates that these tools likely get attackers 60% of the way to a successful attack.
Where should security teams focus?
Both in his book, “Investigating the Cyber Breach: The Digital Forensics Guide for the Network Engineer,” and on the podcast, Lakhani advocates moving away from an overreliance on indicators of compromise (IOCs) toward a more holistic view of the intent of an attack.
“I think sometimes we pay a little too much attention to the things that we can't use. A lot of IOCs that are going in, a lot of bad IP addresses, bad domains, things like that,” says Lakhani. “Not that they're not useful. But a lot of those things we can kind of take care of with automation a lot easier, but it doesn't really help us from an investigation standpoint.”
Instead, Lakhani encourages teams to investigate bigger-picture questions: Did the attackers actually breach the network? What was the intent of the attack? What damage may have been done? What is the best way to recover?
In a perfect world, Lakhani says, teams would use PCAPs for their investigations. In reality, though, most teams don’t have the bandwidth to analyze PCAPs at the level of detail needed to get anything worthwhile out of them. Instead, Lakhani encourages concentrating on items like metadata and NetFlow information, which can be checked against a robust threat intelligence platform to see what the patterns in that data mean.
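As a rough illustration of this approach, the sketch below matches NetFlow-style flow metadata against a threat intelligence set of known-bad IPs. The field names, sample records, and the feed itself are hypothetical, not taken from any specific product or from the podcast.

```python
# Hypothetical sketch: enriching NetFlow-style metadata against a
# threat-intel IP set instead of doing full PCAP analysis.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_out: int

# Toy threat-intel feed; in practice this would come from a
# threat intelligence platform (IPs below are RFC 5737 examples).
THREAT_INTEL_IPS = {"203.0.113.7", "198.51.100.23"}

def flag_suspicious(flows):
    """Return flows whose destination appears in the threat-intel set."""
    return [f for f in flows if f.dst_ip in THREAT_INTEL_IPS]

flows = [
    FlowRecord("10.0.0.5", "203.0.113.7", 443, 120_000),  # matches the feed
    FlowRecord("10.0.0.8", "93.184.216.34", 80, 2_400),   # no match
]

for hit in flag_suspicious(flows):
    print(f"suspicious flow: {hit.src_ip} -> {hit.dst_ip}:{hit.dst_port}")
```

The point is the workflow, not the code: lightweight flow metadata is cheap to collect and cross-reference at scale, which is exactly the trade-off Lakhani describes versus exhaustive packet capture analysis.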
How to lead a technical team
Leading teams in the cybersecurity space is challenging. The stresses are constant, and employees are expected to possess a wide range of hard technical skills as well as softer communication skills. Lakhani often feels as though he’s searching for and recruiting unicorns: people who are technical, passionate about research, and also able to understand business needs and communicate with non-technical colleagues.
Additionally, the nature of the work itself can be a challenge. These roles often involve long hours and weekend work, which can lead to burnout and feelings of not being appreciated by the larger company.
Lakhani has his own strategies for helping employees overcome these challenges. Most importantly, he believes that as a leader he should make a point of valuing people and rewarding their contributions. He also encourages time off, mandating personal time off when necessary, and sets aside time for team-building activities to reduce stress.
Lakhani also has a personal philosophy of staying at the bottom of the leadership ladder and acting as a cheerleader. This way, he can push up the rest of his team to allow them opportunities to shine.
“The more you can have them shine, the more you can get the spotlight on them, the more you can publicize their accomplishments,” says Lakhani, “it will actually [be] better for you in your own personal career growth as well. But at the same time, it's going to make the team better. It's going to make the place you work at much more fun, a lot more exciting, and people are just going to be happy.”
What does the future of threat intelligence look like?
Lakhani believes the future of CTI will be heavily shaped by the use of AI. In preparation, enterprise CTI teams will need to identify and learn their business-specific “pain points”: the critical functions or systems that, if compromised, would force an organization to quickly pay an attacker to resolve the issue.
CTI teams need to learn these pain points because, Lakhani says, attackers will become very good in the next three years at leveraging predictive AI and generative AI to identify these pain points themselves. CTI teams need to be ready to defend these critical systems.
In terms of specific attacks, Lakhani believes agentic AI will pose a threat, as agents can be easily attacked and manipulated. Paired with rogue servers, this could lead to more prevalent “runaway AI” attacks. Attackers will also likely use AI for automation, with AI agents digesting information, creating scripts, and running attacks themselves.
Listen to the full episode of the Future of Threat Intelligence podcast with Aamir Lakhani, Global Director of Threat Intelligence & Adversarial AI Research at Fortinet, HERE.