When DeepSeek first appeared in app stores a few weeks ago, promising to offer the same kind of high-performing artificial intelligence models as well-known players like OpenAI and Google at a much lower price, it rocked the tech industry and the financial markets.
And much like TikTok, which members of Congress overwhelmingly voted to ban last year, the open-source AI assistant's connections to China have some in government and data privacy circles worried that Americans' information could be at risk.
Those concerns aren't limited to DeepSeek. But beyond the national security debates playing out in legislative halls, anyone who downloads an AI chatbot app to their phone should keep a few things in mind. We'll walk through some useful tips below.
On Thursday, two US House members called for a ban of the app on all government devices, citing the Chinese Communist Party's ability to access data collected by DeepSeek and other Chinese-owned apps, as well as the potential for DeepSeek to be used to spread Chinese disinformation.
"This is a five alarm national security fire," US Rep. Josh Gottheimer, a New Jersey Democrat, said in a statement, adding that the country can't risk China being able to "infiltrate" the devices of government employees and potentially put national security at risk.
"We've seen China's playbook before with TikTok, and we cannot allow it to happen again," Gottheimer said.
The app was banned from government devices in Australia last week, and at least one US state has taken similar action. On Monday, New York's governor announced a statewide ban of DeepSeek on government devices and networks.
DeepSeek's ties to China, along with its wild popularity in the US and the media hype surrounding it, make for an easy comparison to TikTok, but security experts say that while DeepSeek's data security risks are real, they're different from those posed by the social media platform.
And while DeepSeek may be the hottest new AI assistant right now, plenty of other AI models and apps are on the way, making it important to be careful when using any kind of AI software.
In the meantime, it's going to be a tough sell to get the average person to avoid downloading and using DeepSeek, said Dimitri Sirota, CEO of BigID, a cybersecurity company that specializes in AI security compliance.
"I think it's tempting, especially for something that's been in the news so much," he said. "I think to some degree, people just need to make sure they operate within a certain set of parameters."
Why are people worried about DeepSeek?
Like TikTok, DeepSeek has ties to China: user data is sent back to cloud servers there. And like TikTok, which is owned by China-based ByteDance, DeepSeek is required under Chinese law to hand over user data if the government requests it.
In TikTok's case, lawmakers on both sides of the aisle worried about how the Chinese Communist Party might use American user data for intelligence gathering, or how the app itself might be tweaked to feed American users Chinese propaganda. Those concerns ultimately led Congress to pass a law last year that would ban TikTok unless it's sold to a buyer deemed acceptable by US officials.
But getting a handle on DeepSeek, or any other AI, isn’t as simple as banning an app. Unlike TikTok, which companies, governments and individuals can choose to avoid, DeepSeek is something people might end up encountering, and handing information to, without even knowing it.
The average consumer probably won't even know which AI model they're interacting with, Sirota said. Many companies already have more than one AI model in place, depending on the task at hand, and the "brain," or the specific model powering a given chatbot, could even be swapped for another in the company's collection while it's being used.
Meanwhile, there's no stopping the buzz about AI in general. More models from other companies, including some that will be open-source like DeepSeek, are on the way and will surely grab the attention of companies and consumers in the future.
As a result, focusing on DeepSeek alone addresses only part of the data security risk, said Kelcey Morgan, Rapid7's senior manager of product management.
Rather than fixating on whichever model is currently in the spotlight, businesses and consumers should decide how much risk they're willing to take with AI in general and put practices in place to protect their data.
"That's regardless of whatever hot thing comes out next week," Morgan said.
Could the Chinese Communist Party use DeepSeek data to gather intelligence?
Cybersecurity experts say China has the resources to mine the large amounts of data DeepSeek has gathered and combine it with data from other sources to build profiles of American users.
"I do believe we've entered a new era where compute is no longer the constraint," Sirota said, pointing to companies like Palantir Technologies, which makes software that lets US agencies collect massive amounts of data for intelligence purposes. China, he said, has the same kinds of capabilities.
The people playing around with DeepSeek now may be young and not especially influential, but China is happy to play the long game, waiting to see whether any of them grow into people worth targeting, Sirota said.
Andrew Borene, executive director at Flashpoint, the world’s largest private provider of threat data and intelligence, said that’s something people in Washington, regardless of political leanings, have become increasingly aware of in recent years.
"We know that the policymakers are aware, we know the technology community is aware," he said. "I don't know that the American consumer is necessarily aware of those risks, where that data might go, or why it might be a concern."
Borene urged anyone working in government to treat DeepSeek with the "highest levels of caution," but said all users should be aware that Chinese officials may have access to their information.
"That's an important factor to consider," he said. "You don't need to read the privacy policy to know that."
How to protect yourself when using DeepSeek or other AI models
Because it can often be hard to know which AI model you're actually interacting with, experts recommend caution when using any of them.
Here are some tips to keep in mind.
Be smart with AI, just like with everything else. The usual best practices for tech apply here, too. Set long, complicated and unique passwords, enable two-factor authentication whenever you can, and keep all your devices and software updated.
Keep personal info personal. Think twice before entering personal information into an AI chatbot. Yes, that includes obvious no-nos like Social Security numbers and banking information, but also the kinds of details that might not automatically set off alarm bells, like your address, where you work, and the names of friends or coworkers.
Be skeptical. Just as you'd be wary of requests for information that arrive by email, text or social media post, be wary when an AI asks for it, too. Think of it like a first date, Sirota said. If a model asks strangely personal questions when you first start using it, walk away.
Don't rush to be an early adopter. Just because an AI model or app is trending doesn't mean you need it right away, Morgan said. Decide for yourself how much risk you're willing to take with brand-new software.
Read the terms and conditions. Yes, this is a lot to ask, but with any app or software, it's worth reading these statements before handing over your data, to get a sense of where it's going, what it's being used for, and who it might be shared with. Those statements can also indicate whether an AI or app is sharing data with other apps or devices, Borene said. If that's the case, turn those permissions off.
Be aware of America's adversaries. Any app developed in China should be viewed with suspicion, Borene said, as should those from other adversarial states like Russia, Iran or North Korea. No matter what the terms and conditions say, the privacy rights you might enjoy in places like the US or the European Union won't apply to those apps.