IT leaders are concerned about the skyrocketing costs of AI-enhanced cyber security tools. Hackers, by contrast, appear to be largely shunning AI, judging by the scarcity of discussions about it on cyber crime forums.
According to a survey of 400 IT security decision-makers conducted by security company Sophos, 80% of respondents think generative AI will significantly increase the cost of security tools. This tracks with a separate Gartner study predicting that global technology spend will rise by about 10% this year, largely due to AI infrastructure upgrades.
Sophos research also found that 99% of organisations include AI capabilities on their list of requirements for cyber security platforms, with the most common reason being to improve protection. However, only 20% of respondents cited this as their primary reason, indicating a lack of consensus on why AI features are essential in security tools.
Three-quarters of the leaders said it is difficult to quantify the extra cost of AI features in their security tools. For instance, Microsoft controversially raised the price of Office 365 by 45% this month due to the inclusion of Copilot.
On the other hand, 87% of respondents believe that the cost savings from AI-related efficiencies will outweigh the extra expense, which may explain why 65% have already adopted security solutions with AI. The release of the low-cost AI model DeepSeek R1 has also raised hopes that the price of AI tools will soon fall across the board.
SEE: HackerOne: 48% of Security Professionals Believe AI Is Risky
But price isn’t the only concern highlighted by Sophos’ experts. A substantial 84% of security leaders worry that high expectations of AI tools’ capabilities will pressure them into reducing headcount on their team. An even larger 89% are concerned that flaws in the tools’ AI capabilities could introduce security threats.
The Sophos researchers warned that “poor quality and poorly implemented AI models can unwittingly introduce significant cybersecurity risk of their own,” and that the adage “garbage in, garbage out” is especially applicable to AI.
Cyber criminals don’t employ AI as frequently as you might think
According to separate research from Sophos, security concerns may be stopping cyber criminals from adopting AI as quickly as feared. Despite researchers’ predictions, the company found that AI is not yet widely used in attacks. To gauge the prevalence of AI usage within the hacking community, Sophos examined posts on underground forums.
The researchers identified fewer than 150 posts about GPTs or large language models over the past year. For comparison, they found more than 1,000 posts about cryptocurrency and more than 600 threads about buying and selling network access.
Most threat actors on the crime forums studied “don’t seem to be particularly enthusiastic or excited about generative AI, and we found no evidence of cyber criminals using it to develop new exploits or malware,” according to Sophos researchers.
One Russian-language crime site has had a dedicated AI section since 2019, but it has only 300 threads, compared with more than 700 and 1,700 threads in the malware and network access sections, respectively. The researchers did point out that this could be considered “relatively fast growth for a topic that has only gained traction in the last two years.”
In one post, however, a user admitted to talking to a GPT for social reasons rather than to stage a cyber attack. Another user replied that doing so is “bad for your opsec [operational security]”, further highlighting the group’s lack of trust in the technology.
Hackers are using AI for spamming, intelligence gathering, and social engineering
The posts and threads that do mention AI apply it to techniques such as spamming, open-source intelligence gathering, and social engineering; the latter includes the use of GPTs to generate phishing emails and spam texts.
Business email compromise attacks increased by 20% in the second quarter of 2024 compared with the same period in 2023, according to security firm Vipre, and AI was responsible for two-fifths of those BEC attacks.
Other posts focus on “jailbreaking”, where models are instructed to bypass their safeguards with a carefully constructed prompt. Malicious AI chatbots built specifically for cyber crime have been widespread since 2023. While models like WormGPT have been in use, newer ones such as GhostGPT are still emerging.
Sophos researchers spotted only a few “primitive and low-quality” attempts to create malware, attack tools, and exploits using AI on the forums. Such incidents are not unheard of elsewhere; in June, HP intercepted an email campaign spreading malware in the wild with a script that “was highly likely to have been written with the aid of GenAI.”
Discussions of AI-generated code were frequently accompanied by sarcasm or criticism. For example, on a post containing allegedly hand-written code, one user responded, “Is this written with ChatGPT or something… this code plainly won’t work.” According to Sophos researchers, the general consensus was that using AI to create malware was for “lazy and/or low-skilled individuals looking for shortcuts.”
Interestingly, some posts mentioned creating AI-enabled malware in an aspirational way, indicating that, once the technology becomes available, the authors would like to use it in attacks. One post touting “the world’s first AI-powered autonomous C2” nevertheless conceded that “this is still just a product of my imagination for now.”
According to the researchers, “some users are also automating routine tasks.” However, it seems that most don’t rely on AI for anything more complex.