
Chinese AI startup DeepSeek is facing a series of ongoing cybersecurity challenges.

DeepSeek has recently gained massive popularity, with its reasoning model, R1, hailed as a breakthrough in efficiency and performance. However, its rapid growth has also made it a prime target for cyber threats.

Security researchers at cloud security firm Wiz discovered that DeepSeek had left a ClickHouse database unprotected on the internet. This exposed more than a million log entries containing chat history, secret keys, backend details, API secrets, and other operational metadata.

The database, accessible without authentication, could have allowed attackers to execute arbitrary SQL queries and gain complete control over DeepSeek’s internal systems.
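To illustrate why this is so dangerous: ClickHouse ships with an HTTP interface (port 8123 by default) that, if left without authentication, will answer arbitrary queries to anyone who can reach it. The sketch below, using a hypothetical host name, shows how trivially such an endpoint can be probed; it is an illustration of the exposure class Wiz described, not of their actual methodology.

```python
# Sketch: probing an unauthenticated ClickHouse HTTP interface.
# "example.internal" is a hypothetical host; never scan systems you
# do not own or have permission to test.
import urllib.parse
import urllib.request


def build_probe_url(host: str, query: str, port: int = 8123) -> str:
    """Build a URL for ClickHouse's HTTP interface (default port 8123)."""
    return f"http://{host}:{port}/?query={urllib.parse.quote(query)}"


def probe(host: str) -> str:
    # A read-only query like SHOW DATABASES succeeds without credentials
    # only when authentication is disabled -- exactly the misconfiguration
    # Wiz reported.
    url = build_probe_url(host, "SHOW DATABASES")
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read().decode()


if __name__ == "__main__":
    # print(probe("example.internal"))  # would list every database if exposed
    print(build_probe_url("example.internal", "SHOW DATABASES"))
```

Because the same interface accepts `INSERT`, `ALTER`, and other write statements, an open port of this kind hands an attacker far more than read access to logs.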

Following Wiz’s report, DeepSeek acted quickly to secure the database and close the vulnerability. However, it is unknown whether any malicious actors accessed or downloaded the exposed data before the issue was patched.

In addition to the security breach, DeepSeek has faced significant disruptions on its DeepSeek-V3 chat platform. While the issue appears to be linked to a surge in demand, it remains unclear whether the platform is experiencing a Distributed Denial-of-Service (DDoS) attack or simply struggling with capacity issues. Existing users can still log in, but new registrations have been temporarily suspended to ensure stability.

Cybersecurity firm KELA has also reported that DeepSeek's AI model is vulnerable to jailbreak techniques, which coaxed it into producing malicious outputs, including ransomware code and detailed instructions for creating harmful substances.

These issues come at a time when DeepSeek’s AI Assistant app has overtaken ChatGPT as the most downloaded app on the Apple App Store.

Beyond immediate security concerns, DeepSeek is also facing scrutiny over its data privacy practices. Authorities in Italy have raised concerns about how the company collects and manages user data, and DeepSeek's apps were subsequently removed from Italian app stores after the country's data protection regulator opened an inquiry.

Further complicating matters, DeepSeek is under investigation for potentially using OpenAI’s technology without permission to train its models. Reports indicate that OpenAI and Microsoft are examining whether DeepSeek engaged in a practice known as distillation, which involves replicating AI models by training on their outputs.
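In essence, distillation means training a smaller "student" model on a larger "teacher" model's outputs rather than on original labelled data. The toy sketch below makes the idea concrete with a numeric stand-in: the student never sees the teacher's training data, only its answers, yet ends up reproducing its behaviour. (In the LLM case the student would instead be trained on the teacher's generated text or token probabilities; everything here is an illustrative assumption, not DeepSeek's or OpenAI's actual pipeline.)

```python
# Minimal sketch of distillation: fit a "student" to a "teacher" model's
# outputs alone. A linear function stands in for the large model.

def teacher(x: float) -> float:
    # Stand-in for the large model's output.
    return 2.0 * x + 1.0

# 1. Query the teacher to build a synthetic training set.
inputs = [0.0, 1.0, 2.0, 3.0]
dataset = [(x, teacher(x)) for x in inputs]

# 2. Fit the student (here, ordinary least-squares linear regression)
#    on the teacher's answers -- no access to original training data.
n = len(dataset)
sx = sum(x for x, _ in dataset)
sy = sum(y for _, y in dataset)
sxx = sum(x * x for x, _ in dataset)
sxy = sum(x * y for x, y in dataset)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

print(slope, intercept)  # prints 2.0 1.0 -- the student recovers the teacher
```

The controversy is not about the technique itself, which is standard in machine learning, but about whether using a commercial model's outputs this way violates its provider's terms of service.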

The surge of media attention has put DeepSeek under closer scrutiny, surfacing further vulnerabilities in the platform. While the company has patched the exposed database and is working to strengthen its defences, its cybersecurity troubles are far from resolved.

These incidents underscore how important it is for young companies, especially in the fast-moving AI sector, to build robust cybersecurity practices in from the outset in order to safeguard their systems and maintain user trust.
