It’s no secret that cyberattackers have considerably upped their game by using artificial intelligence to create phishing messages that look more genuine and contain fewer mistakes. But that’s not all they’re doing with AI.
Google recently revealed that state-sponsored advanced persistent threat (APT) groups are using the company’s Gemini AI assistant to get coding help for developing tools and scripts, perform research on publicly disclosed vulnerabilities, search for explanations of technologies, find details on target organizations, and search for methods to maneuver within compromised networks.
Cyberattackers have the lead in using AI effectively … for now
Of course, cyberattackers are also using generative AI to create better phishing messages more quickly. That could explain why the Acronis Threat Research Unit (TRU) found that the number of email-based attacks rose by almost 200% from the second half of 2023 to the second half of 2024. The most common attack vector? Phishing, which accounted for three out of four attacks.
Combine those attacks with deepfake-based threats and even the possible poisoning of AI models, and it’s fair to say that generative AI has thus far been a mixed bag at best from a security standpoint.
Whatever the beneficial effects of generative AI may be, the disadvantages of this rapidly evolving technology are also significant. In fact, scammers currently have an advantage over many security companies in the efficient and effective use of AI. But not for much longer.
Security vendors are using AI to catch up
The security community is rushing to catch up with malicious groups and other threat actors, and the effort is succeeding. Antimalware developers have been using AI for nearly a decade for purposes such as detecting unknown variants of ransomware samples, and more recently, with the advent of generative AI, cybersecurity vendors have found new ways to use AI to fight AI.
One emerging technology is AI-powered chatbots linked to cybersecurity platforms such as endpoint detection and response (EDR). These chatbots can offer clear, easily understandable explanations of security incidents that are free of technical jargon and other barriers to understanding.
With a clear picture of what caused an incident and what kind of impact the incident had, users who aren’t IT experts can better understand how to avoid an attempted breach in the future and take steps to make their systems safer.
And there’s more. Vendors are building capabilities into solutions that assist with such critical tasks as threat hunting and remediation. Analysts can use that information to make better informed decisions more rapidly and devise effective strategies to mitigate and remediate security incidents quickly.
AI can help automate security and management processes by performing a semantic search of past support tickets with descriptions similar to those of new tickets. An AI assistant can then recommend resolutions based on tickets that have already been resolved.
AI can also group similar issues automatically and provide root cause analysis. Furthermore, vendors can use it to identify the main reason for time-consuming issues and recommend a fix.
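The semantic-search idea described above can be illustrated with a minimal sketch. This is not Acronis's implementation: it substitutes a simple bag-of-words cosine similarity for a real embedding model, and the tickets and resolutions are invented example data.

```python
import math
from collections import Counter


def vectorize(text: str) -> Counter:
    """Bag-of-words term counts as a stand-in for a real text embedding."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Hypothetical resolved tickets: (description, resolution). Illustrative only.
resolved = [
    ("VPN client fails to connect after update", "Roll back VPN client to 4.2"),
    ("Ransomware alert on finance workstation", "Isolate host and restore from backup"),
    ("Phishing email reported by several users", "Block sender domain and purge mailboxes"),
]


def recommend(new_ticket: str) -> str:
    """Return the resolution of the most similar past ticket."""
    query = vectorize(new_ticket)
    best = max(resolved, key=lambda t: cosine(query, vectorize(t[0])))
    return best[1]


print(recommend("Users report a phishing email in their inbox"))
# → Block sender domain and purge mailboxes
```

A production assistant would swap the word-count vectors for dense embeddings and cluster the same vectors to group recurring issues, but the retrieval logic is the same shape.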
AI-based script generation could level the cybersecurity playing field
And then there is AI-based script generation, an evolving capability that offers the promise of reducing the need for manual input as well as the need to find skilled engineers. It also has the potential to minimize the chances of human error and accelerate the process of developing scripts.
Organizations will be able to use AI-based scripting to manage users and systems, automate software installation, and perform remediation steps.
AI can also enable organizations to standardize configuration across thousands of client workloads and automate security configuration. The result is cybersecurity capabilities that require less development time and less expensive expertise to put into action.
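As a concrete sketch of the configuration-standardization idea, the snippet below overlays a security baseline onto per-host settings. The baseline keys, host names, and inventory structure are illustrative assumptions, not any vendor's actual schema.

```python
# Hypothetical security baseline that every workload must satisfy.
BASELINE = {
    "firewall_enabled": True,
    "auto_updates": True,
    "password_min_length": 12,
}

# Invented per-host configurations with drift from the baseline.
workloads = {
    "host-001": {"firewall_enabled": False, "auto_updates": True},
    "host-002": {"firewall_enabled": True, "password_min_length": 8},
}


def enforce_baseline(config: dict) -> dict:
    """Overlay the baseline on a host config; baseline values win on conflict."""
    merged = dict(config)
    merged.update(BASELINE)
    return merged


for host, cfg in workloads.items():
    workloads[host] = enforce_baseline(cfg)

# Every workload now meets the baseline.
print(all(cfg["firewall_enabled"] for cfg in workloads.values()))  # → True
```

At scale, the same overlay step would be generated and scheduled by the AI tooling rather than written by hand, which is where the development-time savings come from.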
AI-based script generation is particularly useful because it has the potential to deliver powerful capabilities to users of all technical skill levels. As scripting tools evolve, even relative novices will be able to input requirements and receive ready-to-use scripts.
AI also has the power to enable enhancement of preexisting scripts with additional instructions and in-line comments for script readability. Integration with EDR can facilitate the instant creation of incident remediation scripts in response to security threats.
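To make the EDR integration point concrete, here is a minimal template-based sketch of how an assistant might draft a remediation script when an incident fires. The threat names, template commands, and fields are all hypothetical; a real system would generate richer scripts from the incident context.

```python
# Illustrative remediation templates keyed by threat type (not a real product API).
TEMPLATES = {
    "ransomware": "isolate-host --id {host}\nrestore-backup --id {host} --point latest",
    "phishing": "quarantine-message --mailbox {mailbox} --subject '{subject}'",
}


def draft_remediation(threat: str, **fields) -> str:
    """Fill the matching template, or fail loudly for unknown threat types."""
    if threat not in TEMPLATES:
        raise ValueError(f"No remediation template for threat: {threat}")
    return TEMPLATES[threat].format(**fields)


print(draft_remediation("ransomware", host="fin-ws-07"))
# → isolate-host --id fin-ws-07
#   restore-backup --id fin-ws-07 --point latest
```

Keeping the generated script as text before execution also leaves room for the analyst review and in-line commenting mentioned above.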
The AI cybersecurity battle will continue
The advent of generative AI has introduced new elements of risk to all users of technology. Even AI platforms themselves are not immune.
But with rapid innovation, response to user feedback and a keen understanding of how AI can serve as a tool for protection, security vendors will continue seeking to gain the upper hand on cyberattackers who rely on AI for their criminal activities.
About TRU
The Acronis Threat Research Unit (TRU) is a team of cybersecurity experts specializing in threat intelligence, AI and risk management. The TRU team researches emerging threats, provides security insights, and supports IT teams with guidelines, incident response and educational workshops.
Sponsored and written by .