
Artificial Intelligence (AI) is transforming businesses at a rapid pace, but it also introduces new security challenges. Wallarm's 2025 API ThreatStats Report highlights a critical issue: AI security is closely tied to API security.

Researchers discovered that in the past year, AI-related vulnerabilities increased by 1,025%, with almost all of them linked to insecure APIs. This means companies using AI must reassess their security strategies to avoid potential breaches.

According to the report, by the end of 2024 more than half of enterprises had integrated AI into their operations. However, many of these AI systems depend on APIs that lack proper security controls.

This growing reliance on AI has introduced new security vulnerabilities, with hackers exploiting weak AI-driven APIs in various ways:

  • Injection Attacks & Misconfigurations: Poor API configurations allow attackers to manipulate AI models or access sensitive data.
  • Memory Corruption & Overflows: AI systems using high-performance computing are vulnerable to buffer overflow attacks.
  • AI Model Poisoning & Data Theft: Attackers exploit security gaps in AI tools like PaddlePaddle and MLflow to tamper with models and steal data.
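As a minimal illustration of the injection risk above (this example is not from the report; the table and values are invented for the demo), the sketch below shows how an API handler that concatenates user input into a SQL query leaks data, and how a parameterized query treats the same input strictly as data:

```python
import sqlite3

def setup_db():
    # Invented demo schema: one table holding per-user secrets
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
    return conn

def lookup_unsafe(conn, name):
    # Vulnerable: user input is concatenated straight into the query text
    return conn.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(conn, name):
    # Parameterized: the driver binds the input as a value, not as SQL
    return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

conn = setup_db()
payload = "x' OR '1'='1"
print(lookup_unsafe(conn, payload))  # the injected OR clause matches every row
print(lookup_safe(conn, payload))    # no user is literally named "x' OR '1'='1"
```

The same principle, validating and binding input rather than splicing it into queries, commands, or model prompts, applies to any API parameter an attacker controls.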

Key findings from the report include the following:

  • 57% of AI-powered APIs were accessible externally, increasing their exposure to attacks.
  • 89% had weak authentication methods, such as static keys that attackers can exploit.
  • Only 11% used strong security controls, like temporary access tokens.
  • More than 50% of security breaches in 2024 were API-related, up from 20% in 2023.
  • One-third of attacks targeted modern APIs, such as REST and GraphQL, while nearly 19% affected legacy systems.
  • API-related breaches tripled in 2024, impacting major companies like Dell (49 million records leaked) and Twilio (33.4 million phone numbers exposed).

To mitigate risks, the researchers recommended the following security measures:

  • Discover API endpoints: Maintain visibility over all APIs, including undocumented shadow APIs.
  • Strengthen authentication: Replace static keys with OAuth 2.0 and JWTs that carry expiration times, and implement multi-factor authentication.
  • Enforce rate limiting: Set dynamic usage limits to prevent API abuse.
  • Leverage AI for defense: Use AI-driven security tools to monitor and respond to threats in real time.
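The authentication recommendation above can be sketched with a short-lived, signed access token. This is a deliberately simplified toy, a stdlib HMAC signature with a hard-coded demo key standing in for a real OAuth 2.0 / JWT library, and the function names are illustrative, not the report's implementation:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # placeholder; a real service uses a securely stored key

def issue_token(subject: str, ttl_seconds: int = 300) -> str:
    """Mint a token that expires after ttl_seconds: payload + HMAC signature."""
    payload = json.dumps({"sub": subject, "exp": time.time() + ttl_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str) -> bool:
    """Reject tokens that are malformed, tampered with, or past their expiry."""
    try:
        payload_b64, sig_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        sig = base64.urlsafe_b64decode(sig_b64)
    except ValueError:
        return False
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(payload)["exp"] > time.time()

tok = issue_token("client-42", ttl_seconds=60)
print(verify_token(tok))              # valid while within the TTL
print(verify_token(tok[:-2] + "xx"))  # tampered signature is rejected
```

Unlike a static API key, a stolen token of this kind is only useful until its expiry, which is the property the report's "temporary access tokens" finding points at.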

As AI reshapes industries, securing AI-driven APIs must remain a top priority. The growing number of AI-related vulnerabilities, particularly in insecure APIs, reinforces the urgency for stronger security measures. By enforcing robust authentication, applying rate limiting, and leveraging AI-driven security tools, organizations can significantly reduce their risk of breaches.
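Rate limiting, one of the measures above, is commonly implemented as a token bucket. The report does not prescribe an algorithm, and the rate and capacity values below are illustrative, so treat this as a minimal sketch of the idea:

```python
import time

class TokenBucket:
    """Per-client token bucket: refills at `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)   # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to the time elapsed since the last call
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1            # spend one token for this request
            return True
        return False                    # bucket empty: throttle the request

bucket = TokenBucket(rate=0.1, capacity=3)
print([bucket.allow() for _ in range(5)])  # burst of 5: only the first 3 pass
```

In practice a gateway keeps one bucket per API key or client IP, and the "dynamic usage limits" the report mentions amount to adjusting `rate` and `capacity` per client or per endpoint.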

About the author: