Cybersecurity adversaries who rely on online tutorials and publicly available exploits have long been a known issue. Now, a more sophisticated breed of these actors is emerging.

HP Wolf Security’s latest Threat Insights Report reveals how cybercriminals are increasingly using Generative AI (GenAI) to develop malicious code.

Their researchers uncovered a campaign targeting French users in which the attackers deployed malicious VBScript and JavaScript believed to have been written with the help of GenAI. The campaign used HTML smuggling, a technique in which attackers embed malicious code within HTML files, to deliver a ZIP archive containing the scripts.
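To make the delivery mechanism concrete, here is a minimal, benign sketch of HTML smuggling, invented for illustration rather than taken from the campaign: the payload travels inside the page as a Base64 string and is reassembled and saved entirely by client-side JavaScript, so no separate file download appears in the HTTP traffic. The payload here is a harmless text file standing in for the ZIP archive.

```html
<!-- Minimal HTML smuggling sketch with a harmless payload.
     The "file" is carried inside the page as Base64 and is
     reconstructed entirely in the browser. -->
<!DOCTYPE html>
<html>
<body>
<script>
  // Base64 of the text "hello" stands in for the real payload
  // (in the campaign described above, a ZIP archive of scripts).
  const b64 = "aGVsbG8=";
  // Decode the Base64 string into raw bytes.
  const bytes = Uint8Array.from(atob(b64), c => c.charCodeAt(0));
  // Wrap the bytes in a Blob so the browser treats them as a file.
  const blob = new Blob([bytes], { type: "application/octet-stream" });
  // Create and click a link so the browser saves the file locally.
  const a = document.createElement("a");
  a.href = URL.createObjectURL(blob);
  a.download = "payload.txt";
  document.body.appendChild(a);
  a.click();
</script>
</body>
</html>
```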

The scripts were designed to establish persistence by creating scheduled tasks and modifying the Windows Registry, setting the stage for further exploitation. The attack culminated in the execution of AsyncRAT, a remote access trojan capable of logging keystrokes and giving attackers control of victims’ systems.
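For defenders, those persistence points are straightforward to inspect. The following sketch, a defensive illustration not drawn from the HP report, assumes Node.js on a Windows machine and uses the standard reg and schtasks utilities to list the registry Run keys and scheduled tasks that such scripts commonly abuse.

```javascript
// Node.js sketch: enumerate Windows locations commonly abused for
// persistence. Assumes Node.js on Windows; reg.exe and schtasks.exe
// are built-in Windows command-line tools.
const { execSync } = require("child_process");

// Run keys launch their entries at every logon, which makes them a
// popular persistence spot for malicious scripts.
const runKeys = [
  "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run",
  "HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run",
];

for (const key of runKeys) {
  try {
    // reg query prints each value registered under the key.
    console.log(execSync(`reg query "${key}"`, { encoding: "utf8" }));
  } catch (err) {
    console.error(`Could not query ${key}: ${err.message}`);
  }
}

// Scheduled tasks are the other persistence mechanism used in the
// campaign; unfamiliar entries here deserve a closer look.
console.log(execSync("schtasks /query /fo LIST", { encoding: "utf8" }));
```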

The researchers noted distinct markers suggesting AI was used to craft the malicious scripts: the structure of the code, detailed comments explaining each line, and the use of the native language for function names.

A Snippet of the Malicious Code (Credit: HP Wolf Security)

Based on these markers, the researchers concluded that GenAI was used to create the malicious code, as such an extensive level of documentation is uncommon in human-written malware. It remains unclear which specific AI model was used.
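For a sense of what those markers look like, here is a hypothetical, harmless snippet written in the style the researchers describe; it is invented for illustration and does not come from the report. Note the French function and variable names and the comment explaining every line.

```javascript
// Hypothetical illustration of the style described above: French
// identifiers and a comment on every line, a pattern typical of
// AI-assisted code but rare in hand-written malware.
function calculerSomme(nombres) {
  // Initialise l'accumulateur à zéro
  let somme = 0;
  // Parcourt chaque nombre du tableau
  for (const nombre of nombres) {
    // Ajoute le nombre courant à la somme
    somme += nombre;
  }
  // Retourne la somme calculée
  return somme;
}
```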

This finding confirms suspicions that AI is being used to facilitate malicious coding. As Patrick Schläpfer, Principal Threat Researcher at the HP Security Lab, stated, “Speculation about AI being used by attackers is rife, but evidence has been scarce, so this finding is significant. Typically, attackers like to obscure their intentions to avoid revealing their methods, so this behavior indicates an AI assistant was used to help write their code. Such capabilities further lower the barrier to entry for threat actors, allowing novices without coding skills to write scripts, develop infection chains, and launch more damaging attacks.”

AI manipulation, also known as jailbreaking, continues to be a growing concern. Recently, the popular GenAI chatbot ChatGPT was manipulated into providing instructions for creating a bomb.

These incidents highlight that AI algorithms currently lack the sophisticated reasoning needed to judge the intent behind requests accurately. Although these systems are generally programmed to refuse queries that suggest malicious intent, deeper probing can sometimes bypass these safeguards. Addressing these vulnerabilities will require extensive AI training and refinement to improve both security and effectiveness.
