A new attack involving two malicious Python packages, gptplus and claudeai-eng, has been uncovered, delivering the JarkaStealer malware to unsuspecting developers. These packages, which masquerade as free tools for accessing the APIs of OpenAI's ChatGPT and Anthropic's Claude platforms, have raised alarms about the vulnerabilities inherent in open-source ecosystems. If you've ever searched for a quick and free way to integrate AI APIs into your projects, you could easily fall victim to these fake ChatGPT and Claude API packages.

The GenAI Gold Rush

Generative AI tools like ChatGPT and Claude have revolutionized how we approach coding, content creation, and problem-solving. By offering unprecedented capabilities in natural language understanding and generation, they've attracted millions of developers seeking to integrate their functionalities into applications.

However, access to premium features of these platforms often requires payment, creating an opportunity for malicious actors. Attackers prey on developers who are eager to bypass these paywalls, offering seemingly legitimate free alternatives.

The fake ChatGPT and Claude API packages, gptplus and claudeai-eng, exemplify this strategy. Claiming to provide API access to GPT-4 Turbo and Claude, these packages promised developers a shortcut to harnessing GenAI's power. Instead, they delivered malware hidden in plain sight.

The Mechanics of Deception

What makes the gptplus and claudeai-eng attack notable is its subtlety. Both packages were designed to look and feel legitimate:

  • Functionality Mimicry: Once installed, the packages provided limited interaction with a demo version of ChatGPT, giving the appearance of working as advertised.
  • Illusion of Legitimacy: The attackers put extra effort into making the packages look functional, reducing suspicion among users.

However, beneath this façade lay a darker reality. The packages were programmed to deploy a Java archive (JAR) file containing JarkaStealer, a malicious infostealer.
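A payload like this can sometimes be caught before installation by inspecting the package archive itself. The script below is a minimal, hypothetical pre-install check, not a tool used in the reported investigation: fetch the archive locally first (for example, with `pip download <package> --no-deps -d ./pkgs`), then list any embedded Java archives or other binaries. The suffix watchlist is an illustrative assumption, and a payload downloaded only at runtime would evade this scan.

```python
# Hedged sketch: scan a locally downloaded PyPI archive (sdist .tar.gz or
# wheel .whl) for embedded Java archives and other binaries before installing.
import sys
import tarfile
import zipfile
from pathlib import Path

# Illustrative watchlist, not an exhaustive list of dangerous file types.
SUSPICIOUS_SUFFIXES = (".jar", ".exe", ".dll")

def list_members(archive: Path) -> list[str]:
    """Return member file names for a wheel (zip) or sdist (tar)."""
    if zipfile.is_zipfile(archive):
        with zipfile.ZipFile(archive) as zf:
            return zf.namelist()
    if tarfile.is_tarfile(archive):
        with tarfile.open(archive) as tf:
            return tf.getnames()
    raise ValueError(f"{archive} is not a recognized package archive")

def suspicious_members(archive: Path) -> list[str]:
    """Return archive members whose names end in a watched suffix."""
    return [m for m in list_members(archive)
            if m.lower().endswith(SUSPICIOUS_SUFFIXES)]

if __name__ == "__main__":
    for path in map(Path, sys.argv[1:]):
        hits = suspicious_members(path)
        if hits:
            print(f"[!] {path.name}: unexpected binaries {hits}")
        else:
            print(f"[ok] {path.name}: no embedded JARs or executables found")
```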

What is JarkaStealer?

JarkaStealer is a low-cost but effective piece of malware sold on the Russian-language dark web for as little as $20, with custom modifications available for an additional $3 to $10. Although its source code is freely available on GitHub, many attackers opt to buy pre-configured versions instead.

Capabilities of JarkaStealer:

  1. Data Exfiltration: Extracts sensitive information, including saved passwords, cookies, and autofill data from browsers.
  2. Session Hijacking: Captures tokens from widely used apps such as Telegram, Discord, and Steam.
  3. Surveillance: Takes screenshots of the victim's desktop to gather additional context about their activities.
  4. Cross-Platform Impact: Targets both Windows and Linux systems, increasing its reach.

While JarkaStealer isn't the most sophisticated infostealer, its affordability and availability make it a popular choice for cybercriminals. In this case, the attackers leveraged the hype around GenAI to distribute the malware to unsuspecting developers.

A Year of Stealthy Deception

The fake ChatGPT and Claude API packages remained undetected on PyPI, the official repository for Python packages, for an entire year. During this time, they were downloaded over 1,700 times across more than 30 countries, including the United States, exposing their users to potential data theft.

Interestingly, download analytics from the PyPI tracking tool ClickPy reveal some peculiar trends:

  • The gptplus package experienced a surge in downloads on its first day, likely due to artificial inflation by the attackers. This tactic aimed to create a false sense of legitimacy, as developers often equate high download numbers with trustworthiness.
  • The claudeai-eng package saw more organic growth, particularly during February and March, indicating genuine interest from developers.

These patterns highlight how attackers strategically manipulate perceptions to exploit trust in open-source ecosystems.

This incident underscores critical challenges in securing open-source platforms. As organizations increasingly rely on open-source software, attackers are exploiting this trust to distribute malware. Cybersecurity professionals must take proactive measures to mitigate such risks.

Protecting Yourself from Fake ChatGPT and Claude API Packages

  1. Verify package legitimacy: Always check open-source packages before downloading. Look for reputable authors, a history of updates, and community reviews; suspicious packages often lack these markers (see the vetting sketch after this list).
  2. Scan your dependencies: Tools like Sonatype Nexus, Snyk, and Dependabot can detect malicious or vulnerable dependencies, reducing the risk of compromise.
  3. Train your developers: Developers are often the first line of defense. Conduct regular training on recognizing red flags such as unusual package names, vague descriptions, or newly published packages with limited documentation.
  4. Secure the supply chain: Adopt a zero-trust approach to supply chain security. Continuously monitor software dependencies and maintain a software bill of materials (SBOM) for every project.
  5. Monitor security advisories: Encourage teams to stay current on advisories for PyPI and other repositories. PyPI's RSS feeds and community forums can provide timely updates on emerging threats.
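As a minimal sketch of the first recommendation, the script below queries PyPI's public JSON API (https://pypi.org/pypi/<package>/json) for a package's metadata and surfaces common warning signs. The specific heuristics and thresholds are illustrative assumptions, not an official vetting policy, and a clean report is no guarantee of safety.

```python
# Hedged sketch: flag common red flags in a PyPI package's public metadata.
# Thresholds below (release count, project age) are illustrative assumptions.
import json
import sys
from datetime import datetime, timezone
from urllib.request import urlopen

def fetch_metadata(package: str) -> dict:
    """Fetch a project's metadata from PyPI's public JSON API."""
    with urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        return json.load(resp)

def red_flags(meta: dict) -> list[str]:
    """Return a list of human-readable warning signs, empty if none found."""
    info, releases = meta["info"], meta["releases"]
    flags = []
    if not info.get("author") and not info.get("author_email"):
        flags.append("no author listed")
    if not info.get("project_urls"):
        flags.append("no homepage or repository links")
    if len(releases) <= 1:
        flags.append("only one release ever published")
    # Estimate project age from the earliest file upload timestamp.
    uploads = [f["upload_time_iso_8601"]
               for files in releases.values() for f in files]
    if uploads:
        first = datetime.fromisoformat(min(uploads).replace("Z", "+00:00"))
        age_days = (datetime.now(timezone.utc) - first).days
        if age_days < 30:
            flags.append(f"first upload only {age_days} days ago")
    return flags

if __name__ == "__main__":
    for name in sys.argv[1:]:
        flags = red_flags(fetch_metadata(name))
        status = "; ".join(flags) if flags else "no obvious red flags"
        print(f"{name}: {status}")
```

Heuristics like these complement, rather than replace, the scanning tools in the second recommendation: attackers can and do fake individual trust signals, as the artificially inflated download counts above show.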

Conclusion

As George Apostolopoulos from Endor Labs aptly puts it, "AI is very hot, but many services require you to pay. People that don't know better will fall for this."

The fake ChatGPT and Claude API packages, gptplus and claudeai-eng, are just the tip of the iceberg in a growing wave of attacks targeting GenAI users. For developers, organizations, and cybersecurity professionals alike, the takeaway is clear: proactive vigilance is the best defense.

By fostering a culture of security awareness, implementing robust safeguards, and staying informed about emerging threats, we can ensure that the promise of generative AI is not overshadowed by exploitation. The next time you encounter a "too-good-to-be-true" tool, think twice, because free might just cost you more than you can afford.
