
A server belonging to Vyro AI, a generative AI company, was recently found exposed, leaking sensitive user data.

Vyro AI is behind a suite of popular AI applications, including Imagine AI Art Generator, Imagine API Developer Platform, Chatly, and Chatbotx. The Pakistan-based company reports over 150 million app downloads across its portfolio.

According to Cybernews researchers, the exposed server contained around 116GB of live logs, covering both production and development environments. The logs stored about 2–7 days' worth of data. They were first indexed by IoT search engines in mid-February, meaning attackers may have had access for months.

The leaked information included:

  • User prompts
  • Authentication tokens
  • User agents

[Image: Leaked user prompt. Source: Cybernews.com]

With Vyro AI's massive user base, this incident is particularly concerning. Attackers could use the exposed data for account takeovers, locking users out of their accounts. Once inside, they could access chat histories, generated images, and even make unauthorized AI credit purchases. Since prompts often contain private or sensitive information, this also raises serious privacy risks.

The key takeaway is that while building cutting-edge AI products, companies cannot afford to neglect basic cybersecurity practices. Protecting the infrastructure behind these tools is just as important as the innovation itself.
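One basic practice that would have reduced the impact here is redacting secrets before they ever reach log files. The sketch below is a minimal, hypothetical illustration (the patterns and field names are assumptions, not details from Vyro AI's stack) of masking authentication tokens and API keys in log lines:

```python
import re

# Hypothetical patterns for secrets that should never reach log storage.
# These regexes are illustrative, not taken from any real Vyro AI logs.
TOKEN_PATTERN = re.compile(r"(?i)(authorization:\s*bearer\s+)[A-Za-z0-9._\-]+")
API_KEY_PATTERN = re.compile(r"(?i)(api[_-]?key=)[A-Za-z0-9]+")

def redact(line: str) -> str:
    """Mask secret values in a log line, keeping the field name for debugging."""
    line = TOKEN_PATTERN.sub(r"\1[REDACTED]", line)
    line = API_KEY_PATTERN.sub(r"\1[REDACTED]", line)
    return line

print(redact("Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.abc.def"))
# Authorization: Bearer [REDACTED]
print(redact("GET /v1/generate?api_key=abc123"))
# GET /v1/generate?api_key=[REDACTED]
```

Filtering like this at the logging layer means that even if a log server is accidentally exposed, the most dangerous values (session tokens, keys) are not usable by an attacker.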

At the same time, users should be cautious about the information they share in AI prompts, as breaches like this may expose their private data.
