The Hague — A global operation has led to at least 25 arrests over child sexual abuse material generated by artificial intelligence and distributed online, Europol said Friday.
"Operation Cumberland has been one of the first cases involving AI-generated child sexual abuse material, making it exceptionally difficult for investigators due to the lack of national legislation addressing these crimes," the Hague-based European Union police agency said in a statement.
The majority of the arrests were made Wednesday during the worldwide operation led by Danish authorities, which also involved law enforcement agencies from the EU, Australia, Britain, Canada and New Zealand. U.S. law enforcement agencies did not take part in the operation, according to Europol.
It followed the arrest last November of the main suspect in the case, a Danish national who ran an online platform where he distributed the AI-generated material he produced.
After a "symbolic online payment, users from around the world were able to get a password to access the platform and watch children being abused," Europol said.
Online child sexual abuse remains one of the most disturbing manifestations of crime in the European Union, the agency warned.
It "continues to be one of the top priorities for law enforcement agencies, which are dealing with an ever-growing volume of illegal content," it said, adding that more arrests were expected as the investigation continued.
While Europol said Operation Cumberland targeted a platform and people sharing content created entirely with AI, there has also been a worrying proliferation of AI-manipulated "deepfake" images online, which often use photos of real people, including children, and can have devastating impacts on their lives.
According to a report by CBS News' Jim Axelrod in December that focused on one girl who had been targeted for such abuse by a classmate, there were more than 21,000 deepfake pornographic pictures or videos online during 2023, an increase of more than 460% over the year prior. The content has proliferated on the internet as lawmakers in the U.S. and elsewhere race to catch up with new legislation to address the problem.
Just weeks ago the Senate passed a bipartisan bill that, if signed into law, would criminalize the "publication of non-consensual intimate imagery (NCII), including AI-generated NCII (or 'deepfake revenge pornography')," and require social media and similar websites to implement procedures to remove such content within 48 hours of notice from a victim, according to a summary on the U.S. Senate website.
As it stands, some social media platforms have appeared unable or unwilling to crack down on the spread of sexualized, AI-generated deepfake content, including fake images depicting celebrities. In mid-February, Facebook and Instagram owner Meta said it had removed over a dozen fake sexualized images of famous female actors and athletes after a report found a high prevalence of AI-manipulated deepfake images on Facebook.
"This is an industry-wide challenge, and we're continually working to improve our detection and enforcement technology," Meta spokesperson Erin Logan told CBS News in a statement sent by email at the time.