China’s DeepSeek’s service in Italy has been by Italy’s data protection watchdog due to a lack of information regarding how users ‘ personal data is used.
The Garante, the power, made the announcement a day after DeepSeek received a number of inquiries regarding its data management practices and the location of its training data.
In particular, it wanted to know what personal data is collected by its web platform and mobile app, from which resources, for what reasons, on what legal basis, and whether it is stored in China.
The Garante claimed in a statement released on January 30, 2025, that it made the decision based on information it claimed was” fully insufficient” provided by DeepSeek.
The companies behind the service, Hangzhou DeepSeek Artificial Intelligence, and Beijing DeepSeek Artificial Intelligence, have “declared that they do not operate in Italy and that Western policy does not apply to them”, it added.
As a result, the regulator said it’s blocking exposure to DeepSeek with quick result, and that it’s together opening a spacecraft.
The data security expert also temporarily outlawed OpenAI’s ChatGPT in 2023, but it was as a result of the AI company’s intervention to address the concerns raised about data protection. Consequently, OpenAI was over how it handled personal information.
DeepSeek’s restrictions comes as the company has been experiencing a surge of popularity this year, with millions of users signing up for the service and putting its mobile applications at the top of the download charts.
Besides becoming the target of “large-scale harmful attacks”, it has drawn the attention of lawmakers and regulars for its protection plan, China-aligned repression, advertising, and the national security concerns it may cause. As of January 31 the business has started a resolve to combat the providers attacks.
Adding to the challenges, DeepSeek’s large language models ( LLM) have been found to be to like Crescendo, Bad Likert Judge, Deceptive Delight, Do Anything Now ( DAN), and EvilBOT, thereby allowing bad actors to generate malicious or prohibited content.
According to Palo Alto Networks Unit 42,” they elicited a range of hazardous outcomes, from detailed instructions for creating dangerous items like Molotov cocktails to generating malicious code for attacks like SQL injection and lateral motion,” according to a Thursday statement.
” While DeepSeek’s initial responses often appeared benign, in many cases, carefully crafted follow-up prompts often exposed the weakness of these initial safeguards. The LLM’s highly detailed malicious instructions were readily available, demonstrating the possibility for these ostensibly innocent models to be used for evil purposes.
Further evaluation of DeepSeek’s reasoning model, DeepSeek-R1, by AI security company HiddenLayer, has that it’s not only vulnerable to prompt injections but also that its Chain-of-Thought ( CoT ) reasoning can lead to inadvertent information leakage.
In an intriguing twist, the business claimed that the model “reported numerous instances where OpenAI data was incorporated, raising questions about data sourcing and model originality.”
The disclosure comes in the wake of the discovery of a jailbreak vulnerability in OpenAI ChatGPT-4o dubbed Time Bandit that allows an attacker to circumvent the LLM’s safety guardrails by asking the chatbot questions in a way that causes it to lose its temporal awareness. OpenAI has since mitigated the problem.
The CERT Coordination Center ( CERT/CC ) claims that an attacker can exploit the vulnerability by initiating a session with ChatGPT and asking it to respond directly to a specific historical event, historical time period, or by pretending to be a user’s agent for a particular historical event.
” Once this has been established, the user can pivot the received responses to various illicit topics through subsequent prompts.”
Similar jailbreak vulnerabilities have been found in Git Hub’s Copilot coding assistant and , both of which allow threat actors to bypass security restrictions and write harmful code without the need for words like” sure” in the prompt.
According to Apex researcher Oren Saban,” Starting queries with affirmative words like” Sure” or other forms of confirmation acts as a trigger, shifting Copilot into a more compliant and risk-prone mode” by using affirmative words like” Sure” or other forms of confirmation.” Just a small adjustment, according to the author, will suffice to get responses that range from unethical to dangerous advice.
Apex claimed it also discovered a new vulnerability in Copilot’s proxy configuration that could be exploited to completely circumvent access restrictions without paying for usage or even tamper with the Copilot system prompt, which serves as the model’s fundamental instructions.
However, the attack relies on capturing an authentication token associated with a functioning Copilot license, which prompts GitHub to declare it an abuse issue following responsible disclosure.
The GitHub Copilot proxy bypass and positive affirmation jailbreak serve as excellent examples of how even the most powerful AI tools can be abused without adequate safeguards, Saban continued.
Found this article interesting? Follow us on and Twitter to access more exclusive content.