Just under two weeks after the launch of DeepSeek’s open-source AI model, the Chinese startup continues to dominate the conversation about the future of artificial intelligence. The model appears to match or outperform its U.S. competitors on math and reasoning, but it also significantly censors its own replies. Ask DeepSeek-R1 about topics like Taiwan or Tiananmen, and the model is unlikely to give an answer.
To examine the technical mechanics behind this censorship, WIRED tested DeepSeek-R1 on its own app, on a copy of the model hosted on the third-party platform Together AI, and on another copy hosted on a WIRED computer, using the application Ollama.
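For readers who want to reproduce the locally hosted part of that setup, the sketch below shows one way to query a local copy of the model through Ollama's HTTP API. It assumes Ollama is running on its default port and that a DeepSeek-R1 build has already been pulled; the model tag and prompt are illustrative, not the exact configuration WIRED used.

```python
# Minimal sketch: send a prompt to a locally hosted DeepSeek-R1 model via Ollama.
# Assumes Ollama is running on its default port (11434) and the model was pulled
# beforehand, e.g. with `ollama pull deepseek-r1:7b`. Tag and prompt are illustrative.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "deepseek-r1:7b") -> str:
    """Send a single prompt to the local model and return its full reply."""
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    # A politically sensitive question of the kind used in the tests described here.
    print(ask_local_model("What happened in Tiananmen Square in 1989?"))
```

Because the model runs entirely on the local machine, any refusal it produces here comes from the model weights themselves rather than from a filter applied by a hosting platform.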
The investigation found that while the most straightforward censorship can be avoided by not using DeepSeek’s app, other kinds of bias are baked into the model during the training process. Those biases can be removed too, but the procedure is much more complicated.
These findings have major implications for DeepSeek and Chinese AI companies more broadly. If the censorship filters on large language models can be easily removed, open-source LLMs from China are likely to gain substantial traction, as researchers can modify the models to their liking. If the filters are hard to get around, however, the models will inevitably prove less useful and less competitive in the global market. DeepSeek did not respond to WIRED’s request for comment.
Application-Level Censorship
After R1 surged in popularity in the U.S., users who accessed it through DeepSeek’s website, app, or API quickly noticed that the model would decline to generate answers on topics deemed sensitive by the Chinese government. These refusals are triggered at the application level, so they appear only when a user interacts with R1 through a DeepSeek-controlled channel.
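One way to observe that application-level behavior is to send the same sensitive question to DeepSeek’s hosted API and compare the reply with the output of a locally hosted copy. The snippet below is a hedged sketch: the base URL and model name follow DeepSeek’s published OpenAI-compatible API conventions at the time of writing, and the API key is a placeholder.

```python
# Sketch: query DeepSeek's hosted API (a DeepSeek-controlled platform) with the same
# sensitive prompt used against the local model. Assumes an OpenAI-compatible endpoint
# at api.deepseek.com and a valid API key; the model name "deepseek-reasoner" is an
# assumption based on DeepSeek's public docs and may change.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",  # placeholder, not a real key
    base_url="https://api.deepseek.com",
)

completion = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[
        {"role": "user", "content": "What happened in Tiananmen Square in 1989?"}
    ],
)

# On DeepSeek-controlled platforms, prompts like this are typically refused; the same
# question posed to a locally hosted copy of the model may be answered.
print(completion.choices[0].message.content)
```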