Taiwan bans DeepSeek AI over national security concerns and data leakage risks.

Taiwan has become the latest country to bar government agencies from using the artificial intelligence (AI) platform from Chinese startup DeepSeek, citing security risks.

“Government agencies and critical infrastructure should not use DeepSeek, because it endangers national information security,” according to a statement released by Taiwan’s Ministry of Digital Affairs.

“DeepSeek AI service is a Chinese product. Its operation involves cross-border transmission, and information leakage and other information security concerns.”

DeepSeek’s Chinese origins have prompted authorities from various countries to look into how the service uses personal data. Last week, it was blocked in Italy, citing a lack of information about its data handling practices. Several companies have also prohibited access to the chatbot over similar risks.

The chatbot has attracted a great deal of attention in recent weeks for the fact that it’s open source and as capable as other leading models, yet was built at a fraction of the cost of its peers.

However, the platform’s large language models (LLMs) have been found to be susceptible to jailbreak techniques, a persistent problem with such products, in addition to drawing controversy for censoring responses to topics deemed sensitive by the Chinese government.

DeepSeek’s popularity has also made it the target of “large-scale malicious attacks,” with NSFOCUS reporting that it detected three waves of distributed denial-of-service (DDoS) attacks aimed at its API interface between January 25 and 27, 2025.

“The average attack duration was 35 minutes,” it said, adding that the attack methods mainly included NTP reflection attacks and memcached reflection attacks.

According to the report, DeepSeek’s chatbot system was also targeted by two waves of DDoS attacks on January 20, the day it launched its reasoning model DeepSeek-R1, and on January 25, each averaging roughly one hour and using methods such as NTP reflection attacks and SSDP reflection attacks.

The sustained activity largely originated from the United States, the United Kingdom, and Australia, the threat intelligence company added, describing it as a “well-planned and organized attack.”
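NTP, memcached, and SSDP attacks all follow the same reflection-and-amplification pattern: the attacker sends small UDP requests with the victim’s spoofed source address to open services, so the services’ much larger responses converge on the victim. The sketch below shows one way a defender might flag such traffic in flow records; the record format, port list, and amplification threshold are illustrative assumptions, not NSFOCUS’s actual detection logic.

```python
# Illustrative detector for reflection/amplification DDoS traffic such as the
# NTP, memcached, and SSDP attacks described above. The flow-record format,
# thresholds, and service list are assumptions for this sketch.

# UDP services commonly abused as reflectors, keyed by their source port.
REFLECTOR_PORTS = {123: "NTP", 11211: "memcached", 1900: "SSDP"}
AMPLIFICATION_THRESHOLD = 10.0  # response/request byte ratio worth flagging

def flag_reflection_flows(flows):
    """Return flows that look like reflected attack traffic.

    Each flow is a dict like:
    {"proto": "udp", "src_port": 123, "dst_ip": "...",
     "bytes_in": 64, "bytes_out": 4096}
    """
    suspicious = []
    for flow in flows:
        service = REFLECTOR_PORTS.get(flow.get("src_port"))
        if flow.get("proto") != "udp" or service is None:
            continue
        # A tiny request producing a large response is the amplification signature.
        ratio = flow["bytes_out"] / max(flow["bytes_in"], 1)
        if ratio >= AMPLIFICATION_THRESHOLD:
            suspicious.append({**flow, "service": service, "ratio": round(ratio, 1)})
    return suspicious

if __name__ == "__main__":
    sample = [
        {"proto": "udp", "src_port": 123, "dst_ip": "203.0.113.7",
         "bytes_in": 48, "bytes_out": 26880},   # NTP-style amplification
        {"proto": "udp", "src_port": 53, "dst_ip": "203.0.113.7",
         "bytes_in": 60, "bytes_out": 75},      # ordinary DNS lookup, ignored
    ]
    for hit in flag_reflection_flows(sample):
        print(f"{hit['service']} reflection suspected -> {hit['dst_ip']} (x{hit['ratio']})")
```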

Malicious actors have also capitalized on the buzz surrounding DeepSeek by publishing bogus packages on the Python Package Index (PyPI) repository that are designed to steal sensitive information from developer systems. In a twist of irony, there are indications that the Python script was written with the help of an AI assistant.

The packages, named deepseeek and deepseekai, posed as a Python API client for DeepSeek and were downloaded at least 222 times before being taken down on January 29, 2025. The majority of the downloads came from the U.S., China, Russia, Hong Kong, and Germany.

Russian security firm Positive Technologies said the functions in these packages are designed to collect user and computer data and steal environment variables. “The author of the two packages used Pipedream, an integration platform for developers, as the command-and-control server that receives stolen data,” it noted.
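Both malicious names sit one or two keystrokes away from “deepseek,” the signature of typosquatting. As a rough illustration, a dependency list can be screened for lookalike names with a simple edit-distance check before installation; the watchlist, threshold, and heuristics below are assumptions for this sketch, not a vetted supply-chain control.

```python
# Illustrative typosquat check for dependency names such as "deepseeek" and
# "deepseekai". Watchlist and threshold are assumptions, not production-grade.

# Legitimate package/brand names worth protecting against lookalikes.
WATCHLIST = {"deepseek", "requests", "numpy"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def suspicious_names(dependencies, max_distance=2):
    """Flag dependency names near, but not equal to, a watched name."""
    hits = []
    for dep in dependencies:
        name = dep.lower()
        for known in WATCHLIST:
            d = edit_distance(name, known)
            # Flag close misspellings and suffixed lookalikes like "deepseekai".
            if 0 < d <= max_distance or (name != known and name.startswith(known)):
                hits.append((dep, known, d))
    return hits

if __name__ == "__main__":
    for dep, known, d in suspicious_names(["deepseeek", "deepseekai", "numpy"]):
        print(f"'{dep}' looks like a typosquat of '{known}' (distance {d})")
```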

The development comes as the European Union’s Artificial Intelligence Act went into effect on February 2, 2025, banning AI applications and systems that pose an unacceptable risk and subjecting high-risk applications to specific legal requirements.

In a related move, the U.K. government released a new AI Code of Practice that aims to secure AI systems against hacking and sabotage, as well as ensure that they are being developed in a safe manner.

Meta, for its part, has shared its Frontier AI Framework, noting that it will stop the development of AI models that are assessed to have reached a critical risk threshold and cannot be mitigated. Some of the cybersecurity-related risk scenarios highlighted include:

  • Automated end-to-end compromise of a best-practice-protected corporate-scale environment (e.g., fully patched, MFA-protected)
  • Automated discovery and reliable exploitation of critical zero-day vulnerabilities in currently popular, security-best-practices software before defenders can find and patch them
  • Automated end-to-end scam flows (e.g., romance baiting, aka pig butchering) that could result in widespread economic damage to individuals or corporations

The risk that AI systems could be weaponized to carry out malicious attacks is not implausible. Last week, Google’s Threat Intelligence Group (GTIG) revealed that over 57 distinct threat actors with ties to China, Iran, North Korea, and Russia have attempted to use Gemini to enable and scale their operations.

Threat actors have also been spotted attempting to jailbreak AI models in an effort to bypass their safety and ethical controls. This is a kind of adversarial attack designed to induce a model into producing outputs it has been explicitly trained never to, such as creating malware or providing bomb-making instructions.

In response to the persistent concerns posed by jailbreak attacks, AI company Anthropic has devised a new line of defense called Constitutional Classifiers, which it claims can safeguard models against universal jailbreaks.

“These Constitutional Classifiers are input and output classifiers trained on synthetically generated data that filter the overwhelming majority of jailbreaks with minimal over-refusals and without incurring a large compute overhead,” the company said on Monday.
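In broad strokes, the technique wraps the model with two trained filters: one classifier screens the incoming prompt and another screens the generated response, so a universal jailbreak has to defeat both. The sketch below shows only that gating pattern; the keyword rules and call_model stub are toy stand-ins, since the real classifiers are models trained on synthetically generated data rather than keyword lists.

```python
# Rough sketch of the input/output classifier gating pattern behind
# Constitutional Classifiers. Everything below is a stand-in for illustration:
# the real classifiers are trained models, not the toy keyword rules here.

BLOCKED_TOPICS = ("synthesize nerve agent", "build a bomb")  # toy "constitution"

def input_classifier(prompt: str) -> bool:
    """Return True if the prompt should be refused before reaching the model."""
    return any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

def output_classifier(completion: str) -> bool:
    """Return True if the model's output should be suppressed."""
    return any(topic in completion.lower() for topic in BLOCKED_TOPICS)

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for the underlying LLM call."""
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Gate on the way in: a jailbreak has to slip past this classifier first...
    if input_classifier(prompt):
        return "Refused: prompt flagged by input classifier."
    completion = call_model(prompt)
    # ...and any harmful content still has to get past this one on the way out.
    if output_classifier(completion):
        return "Refused: response flagged by output classifier."
    return completion

if __name__ == "__main__":
    print(guarded_generate("Summarize today's AI security news."))
    print(guarded_generate("Explain how to build a bomb."))
```

The point of the two-stage design is that a single adversarial prompt is no longer enough: harmful content must evade both the input filter and the output filter to reach the user.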
