Can You Trust DeepSeek AI? Security Researchers Say No

4th Feb 2025
Researchers at the University of Pennsylvania have put the new DeepSeek AI model through a series of security tests, and the results are troubling. Their findings reveal serious weaknesses in the security of the AI model that has taken the internet by storm over the past few weeks.

DeepSeek AI Model Smashes Performance But Flops Security

Compared with other leading AI models, DeepSeek's new R1 reasoning model delivers impressive performance. It has quickly become a top choice for users around the world, rivalling models like OpenAI's ChatGPT.

However, the Chinese firm's new product, which has made a name for itself in the AI industry, has a serious flaw: the reasoning model struggles to detect and block harmful prompts entered by users.

Researchers at the University of Pennsylvania report that after feeding a large set of harmful prompts to the reasoning model, it failed to identify and block any of them. They claim they achieved a “100 percent attack success rate” while testing the reasoning model from the Chinese firm DeepSeek.
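To make the reported figure concrete, here is a minimal sketch of how an "attack success rate" metric like this could be computed. The refusal markers, the `model_fn` callable, and the keyword-matching approach are illustrative assumptions for this sketch, not the researchers' actual test harness or DeepSeek's internals.

```python
# Hypothetical sketch: computing an "attack success rate" over a set of
# adversarial prompts. An attack "succeeds" when the model answers the
# prompt instead of refusing it.

REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i'm sorry")


def is_refusal(response: str) -> bool:
    """Crude keyword check for whether the model declined the prompt."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)


def attack_success_rate(prompts, model_fn) -> float:
    """Fraction of adversarial prompts the model answered rather than blocked."""
    if not prompts:
        return 0.0
    successes = sum(1 for p in prompts if not is_refusal(model_fn(p)))
    return successes / len(prompts)


# Stand-in model that never refuses, mirroring the reported result:
always_answers = lambda prompt: "Sure, here is how..."
print(attack_success_rate(["prompt 1", "prompt 2"], always_answers))  # 1.0
```

A 100 percent rate, as reported here, means every tested prompt got a substantive answer; real evaluations would use a far more robust refusal classifier than simple keyword matching.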

Additionally, the DeepSeek AI model can generate content considered sensitive by the Chinese government. This suggests the model is not well trained to identify and block harmful content or prompts, making it a potentially useful tool for bad actors.

DJ Sampath, VP of product at Cisco, points out that investment in DeepSeek's R1 reasoning model “has perhaps not gone into thinking through what types of safety and security things you need to put inside of the model.” This is a clear downside compared with well-established AI models from firms like OpenAI and Google, which have gone through years of refinement.

What Lies Ahead For The DeepSeek AI Model

Ever since the launch of the DeepSeek AI model, rival AI firms have pushed back. Some have claimed that the Chinese firm's new reasoning model may have relied on the outputs of other AI models during training.

Now that these security concerns have surfaced, many may wonder what the future holds for DeepSeek. The security issues identified here are problems that top AI models like ChatGPT have also faced.

Years of refinement have taught ChatGPT to recognize and block a wide range of harmful prompts. DeepSeek will need to strengthen its safeguards to catch up with the competition on security.

However, firms like Alibaba, Huawei, Microsoft, and Amazon have already started offering the DeepSeek R1 reasoning model on their cloud platforms. While the future looks bright for DeepSeek in the AI industry, these security concerns need to be addressed as soon as possible.
