Can AI Early Warning Systems Reboot the Threat Intel Industry?


Due to stale data, limited data sharing, and the high costs associated with conventional detection and response tools, the cyber threat intelligence industry has struggled to become a major market category.

However, artificial intelligence (AI) may be about to change that. Tech giants are quietly transforming into early warning systems, using AI to track malicious actors, perhaps down to the individual level, before they launch malware campaigns.

By monitoring attempts to misuse their platforms, these companies are surfacing clean, actionable intelligence in real time, offering a glimpse of how AI-driven platforms might finally deliver the fast, cost-effective threat detection the cybersecurity industry has been chasing for years.

Google Threat Intelligence Group (GTIG) recently shared details on how it caught nation-state hackers linked to Russia, Iran, China, and North Korea attempting to use its Gemini gen-AI tool for tasks such as drafting malicious scripts intended to bypass corporate security measures.

According to Google, hackers working for the Iranian government were among the heaviest Gemini users, researching vulnerabilities and testing phishing strategies to target both government and defense organizations. Chinese groups, including many PRC-backed APTs, also leveraged the AI model for scripting tasks, Active Directory maneuvers, and covert lateral movement within target networks. North Korean hackers used it to research free hosting providers, write malicious code, and draft cover letters and job descriptions as part of efforts to place clandestine IT workers inside Western businesses.

By monitoring Gemini’s queries, Google boasts that it can anticipate an attacker’s next moves, an edge that effectively turns the platform into an early-warning system for cyber campaigns. It also places AI companies in a new position: policing how and why their technology is used, with legal and ethical questions still unanswered.

Like Google, software giant Microsoft is promoting its ability to monitor abuse of OpenAI’s ChatGPT for targeted reconnaissance, malware creation, and malicious vulnerability research.

In one instance, Redmond’s threat hunters observed the Russian APT known as Forest Blizzard (also tracked as APT28 or Fancy Bear) using LLMs to research various satellite and radar technologies that may be relevant to conventional military operations in Ukraine, as well as to conduct general research in support of its cyber operations.


In another incident, Microsoft claimed to have caught the notorious North Korean APT Emerald Sleet (also known as Kimsuky) using LLMs to generate content likely destined for spear-phishing campaigns. The Pyongyang attackers were also caught using LLMs to understand publicly known vulnerabilities, troubleshoot technical issues, and get help with using various web technologies.

OpenAI, too, has publicly shared stories of catching Iranian APTs researching ICS attacks and of disrupting more than 20 cyber and covert nation-state influence operations.

With these early success stories, there’s a general sense that ‘Big AI’ might be the game-changer for threat intelligence. If skilled hackers are developing a new phishing scheme through ChatGPT or Google’s Gemini, the platform operator can flag those queries in real time and help set traps for the resulting malware campaigns.
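To make that idea concrete, here is a minimal sketch of what real-time flagging of suspicious prompts could look like. Everything in it is hypothetical: the `AbuseSignal` structure, the keyword heuristics, and the categories are illustrative stand-ins, not how Google, Microsoft, or OpenAI actually classify abuse.

```python
# Hypothetical sketch: flag suspicious prompts in real time.
# The patterns and names below are illustrative only; real platforms
# use far more sophisticated, model-based abuse classifiers.
from dataclasses import dataclass
from datetime import datetime, timezone

SUSPICIOUS_PATTERNS = {
    "phishing_lure": ["urgent password reset email", "spoofed login page"],
    "evasion_script": ["bypass edr", "disable windows defender silently"],
    "recon": ["enumerate active directory", "lateral movement powershell"],
}

@dataclass
class AbuseSignal:
    category: str
    prompt_excerpt: str
    seen_at: str

def flag_prompt(prompt: str) -> list[AbuseSignal]:
    """Return abuse signals for a single prompt, if any heuristic matches."""
    text = prompt.lower()
    signals = []
    for category, phrases in SUSPICIOUS_PATTERNS.items():
        if any(phrase in text for phrase in phrases):
            signals.append(AbuseSignal(
                category=category,
                prompt_excerpt=prompt[:80],
                seen_at=datetime.now(timezone.utc).isoformat(),
            ))
    return signals

if __name__ == "__main__":
    demo = "Write a lateral movement PowerShell one-liner and enumerate Active Directory users."
    for signal in flag_prompt(demo):
        print(f"[ALERT] {signal.category}: {signal.prompt_excerpt!r} at {signal.seen_at}")
```

In practice the "flag" step would feed downstream enrichment and attribution pipelines rather than a simple print, but the shape of the signal is the point: a categorized, timestamped indicator generated the moment the attacker asks the question.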

There’s also an element of real-time espionage at play: AI platforms learn how multiple campaigns connect, which malicious tools get repeated, and how often threat actors pivot to new malicious infrastructure and domains. When the data is available in real time, that kind of cross-campaign insight is invaluable for defenders.
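A toy example of that cross-campaign correlation: if the same tool or domain shows up in prompts tied to different campaigns, the overlap itself is intelligence. The data and field names below are invented for illustration.

```python
# Illustrative sketch with hypothetical data: link campaigns that reuse
# the same tools or infrastructure indicators.
from collections import defaultdict

# Each record: campaign id -> indicators observed (domains, tool names, etc.)
observations = {
    "campaign-a": {"phish-login[.]example", "tool:ad_enum.ps1"},
    "campaign-b": {"tool:ad_enum.ps1", "dropper[.]example"},
    "campaign-c": {"unrelated[.]example"},
}

def shared_indicators(obs: dict[str, set[str]]) -> dict[str, set[str]]:
    """Map each indicator to the campaigns using it; overlaps imply links."""
    index = defaultdict(set)
    for campaign, indicators in obs.items():
        for indicator in indicators:
            index[indicator].add(campaign)
    return {ind: camps for ind, camps in index.items() if len(camps) > 1}

if __name__ == "__main__":
    for indicator, campaigns in shared_indicators(observations).items():
        print(f"{indicator} links {sorted(campaigns)}")
```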

Of course, adversaries won’t line up to feed their best secrets to OpenAI, Microsoft or Google AI platforms. Some hacker groups prefer open-source models, hosting them on private servers where there’s zero chance of being monitored. Criminals can test or refine their attacks as these open-source models advance, but the lure of advanced online models with potent capabilities will be difficult to ignore.

Despite security experts’ enthusiasm for AI’s potential to rescue threat intelligence, there are conflicting interests at play. Some warn that attackers can poison AI systems, manipulate data to produce false negatives, or exploit generative models to write their own malicious scripts.

Right now, though, the big AI platforms are already seeing more malicious signals per day than any single cybersecurity provider. That scale is exactly what’s been missing from threat intelligence. For all the talk about “community sharing” and open exchanges, it’s always been a tangled mess. But if these AI powerhouses act as near-instant radars and push actionable intelligence to defenders, we could see a leap forward where attacks are intercepted immediately, and for less money than defenders are used to spending on legacy detection tools.

It’s never wise to suggest that one technology can save an entire market category, but this AI-driven early warning approach has the potential to resurrect threat intelligence. Will industry and government get behind it, or will threat actors simply adapt more quickly?

Many in the industry are watching closely to see whether AI can finally deliver on that long-awaited promise.

Related: Mastercard to Acquire Threat Intelligence Firm Recorded Future for $2.6 Billion
