Italy’s data protection watchdog has blocked Chinese artificial intelligence (AI) firm DeepSeek’s service within the country, citing a lack of information on its use of users’ personal data.
The development comes days after the authority, the Garante, sent a series of questions to DeepSeek, asking about its data handling practices and where it obtained its training data.
In particular, it wanted to know what personal data is collected by its web platform and mobile app, from which sources, for what purposes, on what legal basis, and whether it is stored in China.
In a statement issued January 30, 2025, the Garante said it arrived at the decision after DeepSeek provided information that it said was “completely insufficient.”
The entities behind the service, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, have "declared that they do not operate in Italy and that European legislation does not apply to them," it added.
As a result, the watchdog said it’s blocking access to DeepSeek with immediate effect, and that it’s simultaneously opening a probe.
In 2023, the data protection authority also issued a temporary ban on OpenAI's ChatGPT, a restriction that was lifted in late April of that year after the AI company addressed the data privacy concerns that had been raised. OpenAI was subsequently fined €15 million over how it handled personal data.
News of DeepSeek's ban comes as the company has been riding a wave of popularity this week, with millions of people flocking to the service and sending its mobile apps to the top of the download charts.
Besides becoming the target of "large-scale malicious attacks," it has drawn the attention of lawmakers and regulators over its privacy policy, China-aligned censorship, propaganda, and the national security concerns it may pose. The company said it implemented a fix on January 31 to address the attacks on its services.
Adding to the challenges, DeepSeek's large language models (LLMs) have been found to be susceptible to jailbreak techniques like Crescendo, Bad Likert Judge, Deceptive Delight, Do Anything Now (DAN), and EvilBOT, thereby allowing bad actors to generate malicious or prohibited content.
“They elicited a range of harmful outputs, from detailed instructions for creating dangerous items like Molotov cocktails to generating malicious code for attacks like SQL injection and lateral movement,” Palo Alto Networks Unit 42 said in a Thursday report.
“While DeepSeek’s initial responses often appeared benign, in many cases, carefully crafted follow-up prompts often exposed the weakness of these initial safeguards. The LLM readily provided highly detailed malicious instructions, demonstrating the potential for these seemingly innocuous models to be weaponized for malicious purposes.”
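Multi-turn techniques such as Crescendo escalate a conversation gradually rather than asking for prohibited content outright, which is why the follow-up prompts, not the opener, are what trip the safeguards. As a rough illustration of how that resilience can be checked, here is a minimal red-teaming sketch that replays a scripted conversation and flags replies lacking obvious refusal language; the DeepSeek API endpoint, model name, refusal markers, and placeholder prompts are assumptions made for the example, not details drawn from the Unit 42 report.

```python
# Minimal multi-turn jailbreak-resilience sketch (illustrative only).
# Assumptions: DeepSeek's OpenAI-compatible endpoint and the "deepseek-chat"
# model name; the turns below are placeholders for a real test corpus.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.deepseek.com")

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "not able to help")

turns = [
    "<benign opener from your test corpus>",
    "<escalating follow-up 1>",
    "<escalating follow-up 2>",
]

messages = []
for turn in turns:
    messages.append({"role": "user", "content": turn})
    reply = client.chat.completions.create(
        model="deepseek-chat",
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})

    # Coarse check: does the reply contain any obvious refusal language?
    refused = any(marker in reply.lower() for marker in REFUSAL_MARKERS)
    print(f"{turn[:40]!r} -> {'refused' if refused else 'POSSIBLE BYPASS'}")
```

A string-matching check like this is only a coarse first pass; flagged transcripts still need human review before drawing any conclusions about a model's safeguards.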
Further evaluation of DeepSeek's reasoning model, DeepSeek-R1, by AI security company HiddenLayer has uncovered that it is vulnerable not only to prompt injections but also to inadvertent information leakage through its Chain-of-Thought (CoT) reasoning.
In an interesting twist, the company said the model also “surfaced multiple instances suggesting that OpenAI data was incorporated, raising ethical and legal concerns about data sourcing and model originality.”
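The CoT leakage risk stems from R1 surfacing its intermediate reasoning alongside the final answer, and that reasoning can echo system prompt instructions or other context an operator never intended to expose. A minimal mitigation sketch, assuming the common convention of wrapping the reasoning in <think>...</think> tags, is to strip that span before the text reaches an end user:

```python
import re

# Assumes DeepSeek-R1-style output where intermediate reasoning is wrapped
# in <think>...</think> tags; other models may use different delimiters.
THINK_SPAN = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_reasoning(raw_output: str) -> str:
    """Drop chain-of-thought spans so internal context isn't shown to users."""
    return THINK_SPAN.sub("", raw_output).strip()

raw = "<think>The system prompt says to withhold pricing details...</think>Here is the answer."
print(strip_reasoning(raw))  # -> "Here is the answer."
```

Filtering the displayed output does not stop the model from reasoning over sensitive context in the first place, so it complements rather than replaces prompt-injection defenses.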
The disclosure also follows the discovery of a jailbreak vulnerability in OpenAI's ChatGPT-4o dubbed Time Bandit that makes it possible for an attacker to get around the LLM's safety guardrails by prompting the chatbot with questions in a way that causes it to lose its temporal awareness. OpenAI has since mitigated the problem.
“An attacker can exploit the vulnerability by beginning a session with ChatGPT and prompting it directly about a specific historical event, historical time period, or by instructing it to pretend it is assisting the user in a specific historical event,” the CERT Coordination Center (CERT/CC) said.
“Once this has been established, the user can pivot the received responses to various illicit topics through subsequent prompts.”
Similar jailbreak flaws have also been identified in Alibaba's Qwen 2.5-VL model and GitHub's Copilot coding assistant, the latter of which grants threat actors the ability to sidestep security restrictions and produce harmful code simply by including words like "sure" in the prompt.
“Starting queries with affirmative words like ‘Sure’ or other forms of confirmation acts as a trigger, shifting Copilot into a more compliant and risk-prone mode,” Apex researcher Oren Saban said. “This small tweak is all it takes to unlock responses that range from unethical suggestions to outright dangerous advice.”
Apex said it also found a separate vulnerability in Copilot's proxy configuration that could be exploited to fully circumvent access limitations without paying for usage, and even tamper with the Copilot system prompt, which serves as the foundational instructions that dictate the model's behavior.
The attack, however, hinges on capturing an authentication token associated with an active Copilot license, prompting GitHub to classify it as an abuse issue following responsible disclosure.
“The proxy bypass and the positive affirmation jailbreak in GitHub Copilot are a perfect example of how even the most powerful AI tools can be abused without adequate safeguards,” Saban added.
https://thehackernews.com/2025/01/italy-bans-chinese-deepseek-ai-over.html