An attack on a UK-based energy firm used AI to imitate the CEO's voice and tricked a worker into transferring $243,000 to a fraudulent account in 2019. A cyberespionage campaign in 2021 targeted international telecom firms with AI-generated phishing messages. And last year, hackers using AI injected fake video streams into the biometric verification process of crypto exchange Bitfinex, ultimately netting themselves $150 million worth of digital assets.
With every passing day, AI-enabled cyberattacks are just getting more sophisticated and deceptive.
The good news? The power of AI cuts both ways, and an increasing number of businesses are exploring opportunities to build AI (and its branch, generative AI) into their own cyber defenses – fighting fire with fire, you might say.
Today, more than two-thirds (69%) of enterprises believe AI is necessary for cybersecurity because threats are rising to levels beyond the capacity of human cyber analysts.
While the technology is still in its relative infancy, the potential benefits are clear: AI has the ability to process vast amounts of data, recognize patterns quickly, and make informed decisions, helping organizations identify vulnerabilities and threats, minimize or eliminate them, and respond more quickly, says Maria Schwenger, co-chair of AI Governance and Compliance Initiatives at the Cloud Security Alliance (CSA). "AI – and GenAI – are not just helping us defend against cyberattacks," she says. "They're helping us build a new generation of adaptive, resilient systems."
Experts believe the following areas hold great promise as organizations begin to incorporate AI into their security programs.
1. How AI security tools can enhance vulnerability assessment
Software engineers strive to write secure code, but mistakes do occur. They might inadvertently introduce vulnerabilities through improper error handling or a failure to validate user inputs; complex systems might make it difficult to anticipate every possible security flaw; or tight deadlines to deliver new features might lead to shortcuts and compromises in code quality and security.
Software engineers are notoriously weak at security and at identifying vulnerabilities in the code they write, according to Nick Merrill, research scientist and director of the Daylight Lab at the UC Berkeley Center for Long-Term Cybersecurity.
Typically, when vulnerabilities are reported in the wild, developers are responsible for finding the bugs and patching them. They must navigate numerous files and modules to locate the root cause of a bug, or recreate the specific conditions or scenarios needed to understand it and devise a fix. This can be difficult and tedious.
Organizations could, however, use AI to increase the efficiency and speed with which they identify and fix potential code vulnerabilities, creating a more secure environment, according to Merrill.
For instance, AI-powered tools could scan codebases for potential vulnerabilities, examining patterns that signal common risks like cross-site scripting and SQL injection. Models could also be trained on large datasets of known vulnerabilities to flag similar patterns in new code, surfacing previously undiscovered flaws or zero-day exploits.
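To make the pattern-matching idea concrete, here is a minimal, hypothetical sketch in Python. Real AI-assisted scanners learn their signatures from large labeled vulnerability datasets; the two hand-written rules below merely stand in for that learned model, illustrating the kinds of patterns (string-built SQL queries, unescaped DOM writes) such a tool would flag.

```python
# Minimal sketch of pattern-based vulnerability scanning, for illustration only.
# Real AI-assisted scanners learn signatures from labeled vulnerability data;
# these two hand-written rules stand in for that learned model.
import re
from pathlib import Path

RULES = {
    # Building a query string with + or f-string interpolation inside a DB
    # call often signals SQL injection risk.
    "possible SQL injection": re.compile(r"""execute\s*\(\s*f?["'].*(\+|\{)"""),
    # Assigning anything other than a literal to innerHTML often signals XSS risk.
    "possible cross-site scripting": re.compile(r"""\.innerHTML\s*=\s*(?!["'])"""),
}

def scan_file(path: Path) -> list[tuple[int, str, str]]:
    """Return (line number, rule name, offending line) for each match."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for label, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, label, line.strip()))
    return findings

if __name__ == "__main__":
    for path in Path(".").rglob("*"):
        if path.suffix in {".py", ".js"}:
            for lineno, label, snippet in scan_file(path):
                print(f"{path}:{lineno}: {label}: {snippet}")
```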
This saves time and effort because security teams don't have to wait until a vulnerability has been reported in the wild to find and fix the bug, he says. "Empowering developers to solve security issues would be a huge win these days," Merrill says.
2. How AI cybersecurity tools can aid in threat detection
Identifying potential security threats at an early stage helps prevent unauthorized access to intellectual property and protects the valuable assets that make up an organization's "crown jewels." Early detection helps organizations avoid costly data breaches, financial losses, and reputational damage.
At many organizations, security analysts are responsible for manually monitoring system logs, network traffic logs, and application logs for suspicious activity that might indicate a security breach. This process can be time-consuming and taxing, CSA's Schwenger says. "It can be challenging to spot threats quickly, especially sophisticated ones that are easy for the human eye to miss," she says. "A person can process only so much data, and it's easy to miss certain patterns. AI, however, is extremely adept at finding patterns that we might overlook."
Because AI can analyze vast amounts of data, it can be used to establish a baseline of normal behavior for systems, networks, and users. By detecting deviations or anomalies, AI can help to identify potential security threats, such as unauthorized access attempts, unusual network traffic, or abnormal user behavior, Schwenger says.
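A small sketch of that baseline-and-deviation idea, assuming scikit-learn is available and that raw logs have already been reduced to numeric features per session (the specific features and numbers below are invented for illustration):

```python
# Minimal sketch of anomaly detection against a learned baseline of "normal"
# behavior. Feature columns (requests/min, bytes out, distinct ports) and the
# synthetic data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic baseline: 1,000 normal sessions.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[100, 5000, 3], scale=[10, 500, 1], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)  # learn what normal sessions look like

# Two new sessions: the second exfiltrates far more data over far more ports.
new_sessions = np.array([[98, 5100, 3], [95, 90000, 40]])
for features, verdict in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALOUS" if verdict == -1 else "normal"
    print(features, "->", status)
```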
Threat detection improves significantly, she says, because AI can surface previously unknown patterns and hidden anomalies in the data more quickly, findings that might otherwise be overlooked. "This gives you scalability because you're getting real-time information that you can pass to your security engineers, which helps you work faster and be more agile."
3. How AI cybersecurity tools can contain and remediate threats more effectively
Once a security incident or threat has been detected, moving quickly to remediate it is critical. "It's all about speed when it comes to threats, compromises, and breaches," says Adam Levin, author of Swiped: How to Protect Yourself in a World Full of Scammers, Phishers, and Identity Thieves, and co-host of the What the Hack with Adam Levin podcast. "You have to be able to move as quickly as possible to plug the hole and stop the problem so you can start working on the fix. The faster you can contain the threat, the better off you are."
Traditional approaches to containing and remediating threats rely heavily on manual intervention. When a security incident occurs, for example, analysts must manually identify the affected systems, isolate compromised assets, and implement containment measures. They assess the scope of the incident, deal with compromised credentials, and remediate systems. These procedures take time, and they also introduce the potential for human error, which can delay resolution.
With AI, however, algorithms can automatically assess the severity and impact of the threat, identify which assets are affected, and even orchestrate response actions, Schwenger says. This includes a number of automated tasks that support better endpoint security, such as isolating infected endpoints, blocking malicious traffic, or shutting down compromised services.
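As a rough sketch of what such orchestration logic might look like: the severity score, the thresholds, and the EDR/firewall functions below are all hypothetical placeholders for real endpoint and network APIs, not any vendor's actual interface.

```python
# Minimal sketch of AI-driven containment orchestration. The scoring model,
# thresholds, and integration functions are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    source_ip: str
    severity: float  # 0.0-1.0, e.g., produced by an ML threat-scoring model

def contain(alert: Alert) -> None:
    """Map a model-scored alert to graduated response actions."""
    if alert.severity >= 0.9:
        isolate_endpoint(alert.host)    # cut the host off the network
        block_traffic(alert.source_ip)  # drop traffic at the firewall
    elif alert.severity >= 0.6:
        block_traffic(alert.source_ip)
    else:
        flag_for_review(alert)          # low confidence: keep a human in the loop

# Hypothetical integration points, stubbed out for illustration.
def isolate_endpoint(host: str) -> None:
    print(f"[EDR] isolating {host}")

def block_traffic(ip: str) -> None:
    print(f"[firewall] blocking {ip}")

def flag_for_review(alert: Alert) -> None:
    print(f"[SOC queue] review {alert.host} (score {alert.severity:.2f})")

contain(Alert(host="laptop-042", source_ip="203.0.113.7", severity=0.95))
```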
Because AI can provide these insights and make recommendations, she says, "it really helps support your security teams to make informed decisions and act faster." "And in the future, there's generative AI, too, which could be used to generate reports and summaries after an incident, draft responses, and help keep stakeholders informed."
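For that generative piece, a post-incident summary could be as simple as handing the incident timeline to a language model. A hypothetical sketch using the openai Python client, where the model name, prompt, and log format are placeholders:

```python
# Illustrative sketch of GenAI-assisted incident reporting using the official
# openai client; model name, prompt, and log content are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

incident_log = """
2024-03-02 14:07 UTC  Anomaly model flagged session u1492 (40 ports, 90 MB out)
2024-03-02 14:08 UTC  Endpoint laptop-042 isolated; firewall blocked 203.0.113.7
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Summarize security incidents for non-technical stakeholders."},
        {"role": "user", "content": incident_log},
    ],
)
print(response.choices[0].message.content)
```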
Despite AI's significant potential in security, Schwenger is quick to point out the enduring need for, and value of, humans in any security program. "AI is only as good as the data it's built on, trained on, and analyzing. Nothing can replace human expertise and oversight, which will always be needed," she says.
This article was written by Kristin Burnham and first appeared in Focal Point magazine.