There has been both tremendous excitement and significant controversy surrounding the Chinese AI chatbot ‘DeepSeek,’ developed by the Hangzhou-based company of the same name. It serves as a concerning example of how AI can be influenced by the political environment in which it operates.
Becoming an overnight sensation, the Chinese AI chatbot DeepSeek has made waves in Silicon Valley, stunning investors and industry insiders with its ability to match the skills of its Western competitors at a fraction of the cost. The DeepSeek app has surged to the top of Apple’s App Store, dethroning OpenAI’s ChatGPT, and people in the industry have praised its performance and reasoning capabilities. However, its swift global rise has also brought scrutiny and privacy concerns. Reports suggest that DeepSeek is programmed to align with the Chinese Communist Party’s stance on politically sensitive topics. Simply put, DeepSeek is “programmed” to provide answers that toe the government line.
DeepSeek responds to queries about politically sensitive topics with answers that indicate a clear commitment to CCP narratives. For instance, when asked about the 1989 crackdown on pro-democracy protesters in and around Tiananmen Square in Beijing, DeepSeek said it “cannot answer that question.” When asked why it cannot go into further detail, DeepSeek explained that its purpose is to be “helpful” — and that it must avoid topics that could be “sensitive, controversial or potentially harmful.”
Similarly, when asked about allegations of human rights abuses by Beijing in the northwestern Xinjiang region, the DeepSeek app accurately listed many of the claims detailed by rights groups — from forced labour to “mass internment and indoctrination.” However, after a couple of seconds that answer disappeared, replaced with the insistence that the question was “beyond my current scope.” “Let’s talk about something else,” it said.
When asked if the Chinese government looked at TikTok user data, DeepSeek responded: “The server is busy. Please try again later.” The chatbot repeated this answer multiple times.
When asked to detail what it knew about US President Donald Trump, DeepSeek went into great detail about the “mercurial magnate’s populist policies” and criticised his attempts to “undermine democratic norms.” However, when asked the same question about Chinese President Xi Jinping, it replied, “Talk about something else.”
When asked about Taiwan, the app said that “many people” on the island consider it a sovereign nation. However, that answer was quickly scrubbed and replaced with the usual entreaty to “talk about something else,” as was a question about whether Taiwan was part of China. When asked whether the two would be reunified, DeepSeek said: “Taiwan is an inalienable part of China.” It added that Beijing was committed to the “great cause” of returning Taiwan under China’s control and that independence efforts were “doomed to fail.”
The fact that the company is based in China is causing concern, as China’s National Intelligence Law states that companies must “support, assist and cooperate” with state intelligence agencies. This means that any data shared on its mobile and web apps can be accessed by Chinese intelligence agencies. A recent cybersecurity incident involving DeepSeek AI reportedly exposed over a million log entries, including sensitive user interactions, authentication keys, and backend configurations. This misconfigured database highlights deficiencies in DeepSeek AI’s data protection measures, further amplifying concerns about user privacy and enterprise security. In 2023, the Chinese government issued a directive mandating that the country’s scores of domestic companies developing AI teach their models to stick to “core socialist values” — the euphemism for supporting the ruling party’s narratives.
According to DeepSeek’s privacy policy, it collects large amounts of personal information from users, which is then stored “in secure servers” in China. This may include the email address, phone number and date of birth entered when creating an account; any user input, including text and audio; as well as chat histories and “technical information” – ranging from the phone’s model and operating system to the user’s IP address and “keystroke patterns”.
US-based AI security and compliance company Enkrypt AI claims to have found that DeepSeek-R1 is 11 times more likely to generate harmful output than OpenAI’s o1 model. “Our research findings reveal major security and safety gaps that cannot be ignored,” Enkrypt AI CEO Sahil Agarwal said in a statement. According to the firm, DeepSeek-R1 is susceptible to generating harmful, toxic, biased, and insecure content. For instance, in 45 per cent of harmful-content tests, DeepSeek-R1 was found to bypass safety protocols and generate criminal planning guides, illegal weapons information, and extremist propaganda. In one concrete example, DeepSeek-R1 drafted a recruitment blog for terrorist organisations.
Following DeepSeek’s release of its powerful new reasoning AI model called R1, which rivals technology from OpenAI, the US Navy instructed its members to avoid using artificial intelligence technology from China’s DeepSeek. In a warning issued by email, the US Navy said that DeepSeek’s AI was not to be used “in any capacity” due to “potential security and ethical concerns associated with the model’s origin and usage.” US Congressional offices, too, have been warned against utilising DeepSeek.
As the Chinese chatbot continues to cause turmoil in the markets and in the tech industry, Italy’s Data Protection Authority, known as Garante, has imposed an immediate ban on the Chinese AI application DeepSeek, citing concerns over user data protection. Australia’s science minister, Ed Husic, has become the first member of a Western government to raise privacy concerns about DeepSeek.
DeepSeek AI’s compliance posture has been questioned by legal analysts and regulatory bodies because of insufficient disclosures regarding how user data is processed, stored, and shared. The model’s data retention policies may conflict with extraterritorial regulations, prompting legal scrutiny in global markets. While DeepSeek-R1 delivers advancements in AI efficiency and accessibility, its deployment requires a comprehensive security strategy.