IT leaders are worried about the skyrocketing cost of cybersecurity tools packed with AI features. Interestingly, hackers themselves appear to be steering clear of AI for now, with little chatter on cybercrime forums about how they might use it.
A recent Sophos survey of 400 IT security decision-makers found that 80% expect generative AI to drive up the cost of security tools. Gartner’s projections back this up: it estimates global tech spending will climb almost 10% this year, driven largely by AI infrastructure upgrades. The same survey found that while 99% of organizations want AI capabilities in their cybersecurity platforms, only 20% see those capabilities as the main reason for adopting a product. That gap suggests many buyers are not yet convinced AI is essential to their security.
Three-quarters of those surveyed struggle to quantify the added cost of AI features in their tools; Microsoft, for instance, raised Office 365 prices by 45% after integrating Copilot. Still, 87% are confident that efficiency gains from AI will outweigh the extra cost, which may explain why 65% have already adopted AI-enabled security solutions. The arrival of lower-cost AI models such as DeepSeek R1 has also raised hopes that prices will fall across the industry.
But cost isn’t the only concern. Some 84% of security leaders fear that unrealistic expectations for AI tools could lead to job cuts, and an even larger share, 89%, worry that flaws in those tools could introduce new security risks. Sophos warns that “garbage in, garbage out” aptly describes the cybersecurity dangers posed by poorly built AI systems.
On the flip side, Sophos research indicates that cybercriminals are not embracing AI as quickly as many predicted. A deep dive into underground forums turned up fewer than 150 posts discussing GPTs and large language models over the past year, compared with more than 1,000 posts about cryptocurrency and more than 600 about network access. Most cybercriminals appear uninterested in generative AI, and there is no strong evidence of it being used to devise new exploits or malware.
One Russian-language crime forum has had a dedicated AI section since 2019, yet it holds only about 300 threads, compared with roughly 700 and 1,700 threads on related malware and access topics, respectively. Even so, AI discussion could grow given how recently the technology has taken off.
Some forum users admit to chatting with AI for social interaction rather than malicious intent; others warn that doing so complicates operational security, reflecting a broader skepticism about the technology within the community.
When hackers do bring AI into play, it’s mainly for spamming, intelligence gathering, and social engineering, such as generating phishing emails. Vipre, for instance, reported a 20% rise in business email compromise attacks in the second quarter of 2024, with AI behind two-fifths of them.
Discussions about “jailbreaking” AI models to bypass their built-in safeguards are common, and since 2023 cybercriminals have built malicious chatbots of their own, from WormGPT to newer entries such as GhostGPT. Even so, Sophos found only a handful of low-quality attempts on the forums to create malware or exploit tools with AI. Outside the forums, HP reported intercepting a malware campaign in June that likely relied on generative AI to write its scripts.
Comments on AI-generated code often lean toward sarcasm, with users mocking clumsy attempts, and the chatter suggests many see AI-written malware as a crutch for the less skilled. Some users, though, voice ambitions for more advanced uses, such as automated malware, signaling growing interest in what the technology might eventually enable. For now, most cybercriminals appear content to use AI for simpler tasks.