Episode #118

Trend AI's Robert McArdle on Criminal Business Models Surviving Tech Revolutions

Show notes: 

After 18 years tracking cybercriminal operations at Trend AI, Robert McArdle, Director of Cybercrime Research, has developed a framework for predicting how threat actors adopt new technology: the answer consistently comes down to economics, not capability. He breaks down three rules of thumb his team uses: criminals want an easy life, any new technology must beat the ROI of their current model, and cybercrime is evolutionary rather than revolutionary. Those rules explain why ransomware has actually slowed the adoption of new attack methods, and why the falling technical barrier to entry creates an asymmetric burden on defenders, who must demonstrate value to an employer rather than simply turn a profit.

Robert goes deep on where agentic AI is headed for both offense and defense, including a sobering implication for law enforcement: as criminal operations become increasingly automated, arresting the principals may no longer disrupt the business. His team has already put this to work on the defensive side; their internal agentic system, ACER, has discovered 210 zero-days in a matter of months. He also raises a specific concern that practitioners should take seriously: CTI reports containing detailed reverse-engineering write-ups and code samples are essentially training data for malicious LLM prompting, and the industry should reconsider what level of technical detail is actually necessary to publish alongside IOCs.

Topics discussed:

  • The three-rule framework for predicting criminal adoption of emerging technology
  • How the lowering technical barrier for entry shifts the entire cybercriminal bell curve upward 
  • Why embedding AI directly into malware remains rare (below 1% of observed cases), and the two structural reasons that limit adoption 
  • The shift toward jailbreaking non-Western LLMs, as criminal operators anticipate that law enforcement coordination there is effectively nonexistent
  • How agentic AI transforms criminal business models from linear service stacks to exponentially scalable operations 
  • The emerging law enforcement challenge: when operations are ~75% autonomous, arrests no longer constitute meaningful disruption 
  • Why CTI publishing norms need to evolve, specifically how detailed code samples and reverse-engineering screenshots in APT reports can be fed directly into LLMs to accelerate malware development
  • Practical defensive posture for shadow AI proliferation: treat AI-powered tools as untrusted software under existing vulnerability management frameworks

Key Takeaways: 

  • When assessing whether adversaries will adopt a new technique or tool, evaluate it through three lenses: ease of operation, return on investment versus current methods, and evolutionary fit with existing business models.
  • Before publishing detailed reverse-engineering write-ups, code samples, or pseudocode in APT reports, assess whether that level of detail serves defender use cases or primarily serves as a development accelerant for threat actors. 
  • Audit your organization's shadow AI exposure as a software risk problem, not an AI problem. 
  • Structure specialist agents to handle discrete tasks rather than relying on a single broad LLM. 
  • Pressure-test your law enforcement response playbook against autonomous criminal infrastructure. 
  • Evaluate your AI security tooling for hallucination risk in detection workflows.  
  • Model romance scam and investment fraud at scale in your threat landscape. 
  • Monitor for jailbroken non-Western LLM wrappers in criminal marketplaces. 
  • Factor defender tooling complexity into hiring and onboarding benchmarks. 
  • Track zero-day discovery velocity as a benchmark for agentic security ROI. 

Listen to more episodes: 

Apple 

Spotify 

YouTube

Website