
Imperva researchers have discovered a pattern of attackers using AI to facilitate attacks on e-commerce websites.

With the end-of-year holidays approaching, the shopping season is a critical time for retailers. Attackers are exploiting this high-pressure period, when availability is essential and retailers cannot afford downtime, to launch a range of AI-facilitated attacks.

Data from a six-month Imperva Threat Research study, conducted from April to September, indicates that e-commerce websites collectively experience an average of 569,884 AI-driven attacks each day. These attacks originate from popular generative AI tools like ChatGPT, Claude, and Gemini, as well as specialized bots designed to scrape websites for Large Language Model (LLM) training data.

The analysis revealed that attackers primarily use AI to facilitate the following types of attacks:

1. Business Logic Abuse (30.7%):

Business Logic Abuse accounted for the highest percentage of attacks. This type of attack exploits the legitimate functionality of a web application. Unlike traditional attacks that target vulnerabilities in the code or infrastructure, it manipulates the intended processes of an application to perform unauthorised actions, such as price manipulation during online checkout to pay less. AI enables attackers to automate these exploits at scale, making them harder to detect.
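
As a rough illustration of the defensive principle, the sketch below recomputes an order total from a server-side catalog instead of trusting client-submitted prices. The catalog, SKUs, and order format are hypothetical examples, not taken from Imperva's research.

```python
# Minimal sketch of a server-side check against checkout price tampering.
# The catalog and order format below are illustrative assumptions.

CATALOG = {               # authoritative server-side prices (in cents)
    "sku-123": 4999,
    "sku-456": 1250,
}

def validate_order(order: dict) -> int:
    """Recompute the total from the server-side catalog and reject any
    client-supplied price that does not match."""
    total = 0
    for item in order["items"]:
        sku = item["sku"]
        if sku not in CATALOG:
            raise ValueError(f"Unknown SKU: {sku}")
        if item.get("unit_price") != CATALOG[sku]:
            # Client tried to submit its own price -- classic logic abuse.
            raise ValueError(f"Price mismatch for {sku}")
        total += CATALOG[sku] * int(item["quantity"])
    return total

# Example: an order where the client lowered the unit price is rejected.
tampered = {"items": [{"sku": "sku-123", "unit_price": 1, "quantity": 1}]}
try:
    validate_order(tampered)
except ValueError as err:
    print("Rejected:", err)
```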

2. DDoS Attacks (30.6%):

A Distributed Denial of Service (DDoS) attack occurs when multiple computers, usually part of a botnet, flood a target system with an overwhelming number of requests. The flood consumes the system's resources, causing it to slow down or crash and denying access to legitimate users. Cybercriminals are now using AI to coordinate large botnets more efficiently, enhancing the effectiveness of these attacks.
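
A heavily simplified detection idea is sketched below: flagging a source address whose request rate within a short sliding window far exceeds a normal baseline. The window size and threshold are placeholder values, and real DDoS mitigation operates at far larger scale, typically at the network edge rather than in application code.

```python
# Illustrative only: flag a source IP whose request rate inside a short
# sliding window far exceeds the site's normal baseline.
from collections import defaultdict, deque

WINDOW_SECONDS = 10
THRESHOLD = 200          # requests per source per window treated as a flood

_recent = defaultdict(deque)     # ip -> timestamps of recent requests

def record_and_check(ip: str, now: float) -> bool:
    """Record one request from `ip` at time `now` (e.g. time.time());
    return True if the source now looks like flood traffic."""
    window = _recent[ip]
    window.append(now)
    # Evict timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > THRESHOLD

# Example: 500 requests from one address within one second trip the check.
print(any(record_and_check("203.0.113.7", 1000.0 + i / 500) for i in range(500)))  # True
```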

3. Bad Bot Attacks (20.8%):

Bad bot attacks involve malicious automated scripts that perform actions such as scraping pricing data, credential stuffing, and inventory hoarding (scalping). The notorious Grinch bot is well-known for hoarding inventory during the holiday shopping season, making it harder for consumers to buy high-demand products. With AI, bot operators can now create bots that mimic human behaviour, allowing them to evade traditional security measures.
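
The sketch below illustrates the behavioural-analytics idea in heavily simplified form: scoring a session on signals that scripted clients often exhibit, such as machine-like request timing. The signals, weights, and thresholds are illustrative assumptions, not a description of any vendor's bot-management product.

```python
# Heuristic sketch only: score a session on a few behavioural signals that
# scripted clients often exhibit. Weights and thresholds are made up.
from statistics import pstdev

def bot_score(intervals_ms: list, has_js_cookie: bool, pages_per_minute: float) -> float:
    """Return a 0..1 score; higher means more bot-like."""
    score = 0.0
    # Machine-like timing: near-constant gaps between requests.
    if len(intervals_ms) >= 5 and pstdev(intervals_ms) < 20:
        score += 0.4
    # Headless clients frequently fail to execute the JS that sets this cookie.
    if not has_js_cookie:
        score += 0.3
    # Browsing far faster than a human shopper plausibly could.
    if pages_per_minute > 60:
        score += 0.3
    return min(score, 1.0)

# A scraper hitting one page per second with no JS cookie scores high.
print(bot_score([1000.0] * 10, has_js_cookie=False, pages_per_minute=60.5))  # 1.0
```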

4. API Violations (16.1%):

As websites increasingly rely on APIs to exchange data and enhance user experiences, attacks targeting them are on the rise. Cybercriminals exploit vulnerabilities in APIs to gain unauthorised access to sensitive data or functionality, and AI tools help them identify weaknesses in API implementations faster, making these attacks harder to anticipate and mitigate.
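
The following sketch illustrates two basic API hardening checks: authenticating the caller and verifying that the caller actually owns the object it requests (guarding against broken object-level authorization). The keys, object names, and error handling are invented for the example.

```python
# Illustrative sketch of two common API hardening checks. Data is made up.

API_KEYS = {"key-abc": "user-1"}                 # api key -> user id
ORDER_OWNERS = {"order-42": "user-1", "order-99": "user-2"}

def get_order(api_key: str, order_id: str) -> str:
    user = API_KEYS.get(api_key)
    if user is None:
        raise PermissionError("invalid or missing API key")       # 401-style
    if ORDER_OWNERS.get(order_id) != user:
        raise PermissionError(f"{user} may not read {order_id}")  # 403-style
    return f"order data for {order_id}"

print(get_order("key-abc", "order-42"))   # allowed: caller owns the order
try:
    get_order("key-abc", "order-99")      # another customer's order
except PermissionError as err:
    print("Blocked:", err)
```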

Retailers are not the only ones affected; customers are also at risk. Successful attacks can result in the disclosure of sensitive information, such as credentials and credit card details, which may then lead to identity theft, financial loss, and phishing attacks.

Researchers warn that retailers must prepare for AI-driven threats during this holiday season.

To prevent these attacks, the researchers recommend the following:

  1. Implement strict validation on all user inputs.
  2. Employ anomaly detection systems to identify unusual activities.
  3. Regularly audit business processes to identify functionalities that could be abused.
  4. Invest in a DDoS protection solution that uses machine learning to detect and mitigate malicious traffic in real-time.
  5. Implement bot management solutions that apply behavioural analytics to distinguish between genuine users and sophisticated bots.
  6. Enforce strict authentication and authorisation protocols.
  7. Implement rate limiting to prevent abuse (see the token-bucket sketch after this list).
  8. Regularly conduct comprehensive security assessments and penetration testing.
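
For recommendation 7, a common enforcement mechanism is a token bucket: each client gets a budget of tokens that refills at a steady rate, and requests that find the bucket empty are throttled. The sketch below is a minimal single-process illustration; the capacity and refill rate are placeholder values, and production rate limiting is usually enforced at a gateway or CDN rather than in application code.

```python
# Minimal token-bucket rate limiter sketch; capacity and refill rate are
# placeholder values, not tuned guidance.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise the request is throttled."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_second)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Example: a burst of 25 requests from one client; only about 10 are allowed.
bucket = TokenBucket(capacity=10, refill_per_second=1.0)
print(sum(bucket.allow() for _ in range(25)))
```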

By implementing these recommendations, retailers can better safeguard their operations and customer data.
