Security experts discuss the YouTube CEO deepfake


The likeness of YouTube CEO Neal Mohan has been leveraged in a phishing campaign that deploys AI-generated deepfake video of the CEO to targeted content creators. These deepfakes are sent as private videos to targets in an effort to install malware, steal credentials or carry out a scam.

Targets are sent an email, appearing to originate from an official YouTube address, prompting them to view a private video that has been shared with them. The video features a deepfake of Mohan, which accurately impersonates his voice, appearance and even mannerisms. Targets are prompted to click a link and enter their credentials to confirm updated YouTube Partner Program (YPP) terms so they can continue to access all features and monetize their content. This allows the malicious actors to harvest the users' credentials.

Below, security experts share their perspectives on this scam.

Security leaders weigh in

Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace:

The ability for attackers to use generative AI to produce deepfake audio, imagery and video is a growing concern, as attackers are increasingly using deepfakes to launch sophisticated attacks. While the use of AI for deepfake technology is relatively new, the threat of image and media manipulation is not. The problem now is that AI can be used to lower the skill barrier to entry and scale production at higher quality. As deepfakes get harder to detect, it is important to turn to AI-augmented tools for detection, as humans alone cannot be the last line of defense.

As threat actors adopt new techniques, traditional approaches to cybersecurity fall short. To combat emerging challenges from AI-driven attacks, organizations must leverage AI-powered tools that can provide granular real-time environment visibility and alerting to augment security teams. Where appropriate, organizations should get ahead of new threats by integrating machine-driven response, either in autonomous or human-in-the-loop modes, to accelerate security team response. Through this approach, the adoption of AI technologies — such as solutions with anomaly-based detection capabilities that can detect and respond to never-before-seen threats — can be instrumental in keeping organizations secure.
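
For illustration, here is a minimal sketch of the kind of anomaly-based detection Carignan points to, using scikit-learn's IsolationForest over hypothetical per-message features. The feature set, training data and sample point are assumptions made for the example, not any vendor's actual model.

```python
# A minimal sketch of anomaly-based detection over inbound email telemetry.
# The feature set and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per inbound email:
# [links_in_body, sender_domain_age_days, prior_threads_with_sender, hour_sent]
baseline = np.array([
    [1, 900, 3, 10],
    [0, 1200, 1, 14],
    [2, 700, 2, 9],
    [1, 1500, 4, 16],
    [0, 2000, 5, 11],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# A "private video" email from a day-old lookalike domain, sent at 3 a.m.
suspicious = np.array([[1, 1, 0, 3]])
print(model.predict(suspicious))  # -1 means flagged as anomalous, 1 means normal
```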

J Stephen Kowski, Field CTO at SlashNext Email Security+:

Generative AI and LLMs are enabling attackers to create more convincing phishing emails, deepfakes and automated attack scripts at scale. These technologies allow cybercriminals to personalize social engineering attempts and rapidly adapt their tactics, making traditional defenses less effective. What used to be zero-day attacks are now zero-hour attacks, at best. Human defenders alone won’t be able to keep up.

To counter AI-generated attacks, organizations should deploy security solutions that leverage generative AI and use machine learning to detect anomalies in email content, sender behavior and communication patterns. Implementing advanced anti-phishing technology that can identify and block sophisticated impersonation attempts in real time is crucial for defending against these evolving threats. They should implement advanced email security with AI-powered threat detection, enable multi-factor authentication (MFA) and conduct regular security awareness training for employees. Leveraging real-time phishing protection that analyzes URLs and attachments can also significantly reduce the risk from deepfakes and other email-based threats.
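
As a concrete illustration of the real-time URL analysis Kowski mentions, the sketch below applies a few common phishing heuristics to a link. The trusted-domain list, the heuristics and the sample URL are illustrative assumptions, not SlashNext's detection logic.

```python
# A minimal sketch of heuristic URL screening for phishing signals.
from urllib.parse import urlparse

TRUSTED = {"youtube.com", "www.youtube.com"}

def url_risk_signals(url: str) -> list[str]:
    """Return human-readable phishing signals found in a URL."""
    signals = []
    host = (urlparse(url).hostname or "").lower()
    if host.startswith("xn--"):
        signals.append("punycode host (possible homoglyph domain)")
    if "youtube" in host and host not in TRUSTED:
        signals.append(f"lookalike domain: {host}")
    if host.replace(".", "").isdigit():
        signals.append("raw IP address instead of a domain")
    return signals

# A hypothetical lure link styled after the YPP-terms pretext in this campaign
print(url_risk_signals("https://youtube-partner-update.com/ypp/confirm"))
```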

James Scobey, Chief Information Security Officer at Keeper Security:

Traditional identity threats to human users continue to evolve. Phishing attacks are becoming increasingly targeted, using highly personalized tactics driven by social engineering and AI-enhanced data scraping. Cybercriminals are not only relying on stolen credentials, but also on social manipulation, to breach identity protections. Deepfakes are a particular concern in this area, as AI models make these attack methods faster, cheaper and more convincing. As attackers grow more sophisticated, the need for stronger, more dynamic identity verification methods — such as MFA — will be critical to defend against these increasingly nuanced threats.

Generative AI will play a dual role in the identity threat landscape this year. On one side, it will empower attackers to create more sophisticated deepfakes — whether through text, voice or visual manipulation — that can convincingly mimic real individuals. These AI-driven impersonations are poised to undermine traditional security measures, such as voice biometrics or facial recognition, which have long been staples in identity verification. Employees will, more and more frequently, get video and voice calls from senior leaders in their organization, telling them to grant access to protected resources rapidly. As these deepfakes become harder to distinguish from reality, they will be used to bypass even the most advanced security systems.
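
As one concrete example of the stronger verification Scobey calls for, here is a minimal TOTP-based MFA sketch using the pyotp library. The enrolment flow, account name and secret handling are simplified assumptions for illustration, not a production design.

```python
# A minimal sketch of TOTP-based MFA verification with pyotp.
import pyotp

secret = pyotp.random_base32()   # provisioned once, stored server-side

totp = pyotp.TOTP(secret)
# URI the user would enrol in an authenticator app (hypothetical account/issuer)
print(totp.provisioning_uri(name="creator@example.com", issuer_name="ExampleApp"))

code = totp.now()                # in practice, the user reads this from their app
# valid_window=1 tolerates one 30-second step of clock drift between devices
print("accepted:", totp.verify(code, valid_window=1))
```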

Gabrielle Hempel, Security Operations Strategist at Exabeam:

A lot of the early deepfakes we have seen involved audio impersonation only or manipulated footage that already existed. This is a worrying development because it involves a fabricated video that is quite convincing, and it really shows the lengths to which people are going to make phishing more effective.

Looking for inconsistencies in quality still seems to be the most effective way to spot deepfakes, although this is becoming harder as the technology improves. Unnatural facial movements, words not matching the mouth and background glitches are usually tell-tale signs.

The barrier to accessing tools that enable sophisticated attacks like these is becoming extremely low. It is both easy and affordable, which puts these attacks within reach of virtually anyone. Detection is really struggling to keep up. There’s no great solution that can do it without human eyes on the footage, and even that is becoming less reliable.
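
As a rough illustration of the quality-inconsistency heuristic Hempel describes, the sketch below flags frames whose change from the previous frame is a statistical outlier, one crude proxy for background glitches in generated video. It uses OpenCV and NumPy; the threshold and the outlier-equals-glitch assumption are illustrative, not a production deepfake detector.

```python
# A crude sketch: flag frames with outlier inter-frame change as possible glitches.
import cv2
import numpy as np

def glitch_frames(path: str, z_thresh: float = 3.0) -> list[int]:
    """Return indices of frames whose inter-frame difference is a >z_thresh-sigma outlier."""
    cap = cv2.VideoCapture(path)
    diffs, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diffs.append(cv2.absdiff(gray, prev).mean())  # mean pixel change vs. previous frame
        prev = gray
    cap.release()
    diffs = np.array(diffs)
    z = (diffs - diffs.mean()) / (diffs.std() + 1e-9)
    return [i + 1 for i in np.where(z > z_thresh)[0]]

# Hypothetical input file for the example
print(glitch_frames("suspect_video.mp4"))
```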

