R Street: Tech companies’ approach to DeepSeek offers tempting alternative to restrictions

In the News

Major tech companies including Microsoft and Amazon have taken steps to quickly — and securely — integrate the DeepSeek-R1 artificial intelligence model into their services, according to the free-market-oriented R Street Institute, which cites the industry developments as an alternative to “all or nothing” debates over open-source models and export controls to counter the Chinese firm’s advances on AI.

DeepSeek’s January announcement of a high-performance AI model prompted a swirl of policy proposals, from former Federal Communications Commission Chair Tom Wheeler’s call for tougher measures to promote domestic tech-sector competition to competing calls for and against AI export controls targeting China.

But R Street’s Haiman Wong writes, “Reactionary policies that restrict AI development under the guise of security could ultimately do more harm than good, stifling domestic innovation while failing to curb global AI risks.”

“Instead,” she writes, “the United States should use this moment to be a leader in AI and emerging technologies by investing in AI security research and development, establishing risk-based AI governance frameworks, and expanding data centers and energy grids to support demands and efforts to scale.”

Wong writes, “By leveraging the strengths and strategies that have historically propelled its technological progress, the United States will remain a leader in AI and emerging technologies for decades to come.”

On the cybersecurity shortcomings in DeepSeek-R1, Wong says, “Yet, despite myriad concerns, leading technology companies including Microsoft, Amazon Web Services, and Cerebras were busy finding secure ways to integrate DeepSeek-R1 into their platforms, services, and ecosystems.”

“Within days,” she says, “each of these firms had successfully done so. At first glance, this might seem like a contradiction — Microsoft, in particular, is the largest cybersecurity company in the United States. However, this jump to embrace DeepSeek-R1 highlights a reality that has been overlooked: Securing leadership in artificial intelligence (AI) is not just about whose models are the most accessible — it is also about who is the most secure, reliable, and trustworthy.”

She cites three key policy implications emanating from the DeepSeek development:

First, if China and DeepSeek’s claims about the R1 model’s performance capabilities hold true, the AI arms race between China and the United States may be closer than many have anticipated. This is a critical moment for us to define what it means to “win” and establish clear measures of AI competitiveness.

Second, the line between open-source and closed-source AI models is blurring quickly, complicating traditional cybersecurity frameworks that assume greater control equals greater resilience. This also means open-source AI is here to stay, holding the potential to surpass even proprietary AI models in the future.

Finally, policymakers should recognize that AI development and deployment do not operate in complete isolation. Unlike traditional software or hardware, AI models rely heavily on shared datasets, distributed infrastructure, and iterative improvements from a broad ecosystem of users, researchers, and developers. This makes controlling or restricting AI more complex than simply banning a mobile application like TikTok or regulating hardware imports. Once an open-weight model is released, it remains widely accessible — meaning any restrictions within the United States would have little impact on global use.

Wong says, “Microsoft’s decision to incorporate DeepSeek-R1 into Azure AI Foundry offers a blueprint for secure deployment. Rather than blocking it, Microsoft quickly brought R1 into a controlled, monitored environment, allowing users to work with the model without exposing sensitive data or expanding attack surfaces.”

She writes, “A similar approach — running open-source AI models locally on air-gapped servers and enforcing encryption and access controls — can allow independent developers to study, customize, and leverage open-weight AI securely. Policymakers are right to be vigilant, but they should also recognize that overly restrictive bans on open-source AI models could hinder the United States’ ability to compete effectively in the ongoing AI arms race.”
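The access-control pattern Wong describes — gating a locally hosted open-weight model behind authentication rather than blocking it outright — can be sketched in miniature. This is a hypothetical illustration, not any vendor's actual deployment: the `LocalModelGateway` class and its stubbed `_run_local_model` call are assumptions standing in for a real on-premises inference runtime.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch of a thin access-control layer in front of a locally
# hosted open-weight model. The inference call is stubbed; in a real
# air-gapped deployment it would invoke an on-premises runtime.

class LocalModelGateway:
    """Gates queries to a local model behind per-user API tokens."""

    def __init__(self):
        self._tokens = {}  # user -> SHA-256 hash of issued token

    def issue_token(self, user: str) -> str:
        # Generate a random token and store only its hash.
        token = secrets.token_hex(16)
        self._tokens[user] = hashlib.sha256(token.encode()).hexdigest()
        return token

    def _authorized(self, user: str, token: str) -> bool:
        expected = self._tokens.get(user)
        if expected is None:
            return False
        provided = hashlib.sha256(token.encode()).hexdigest()
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(expected, provided)

    def query(self, user: str, token: str, prompt: str) -> str:
        if not self._authorized(user, token):
            raise PermissionError("access denied")
        return self._run_local_model(prompt)

    def _run_local_model(self, prompt: str) -> str:
        # Stub standing in for a local inference call.
        return f"[local model output for: {prompt}]"
```

The point of the sketch is the separation of concerns: the model never sees a request that has not passed authentication, mirroring the “controlled, monitored environment” approach the article attributes to Microsoft.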

“For this reason,” Wong says, “the United States should focus on building security frameworks and best practices that allow for safe use, just as Microsoft and other leading technology companies have started to do.”
