AI’s Glaring Problem: API Security

DATE POSTED: July 11, 2024

AI is revolutionizing every aspect of development, including APIs. Developers are using AI for everything from generating code to testing and API documentation. Even more importantly, APIs are an AI’s nervous system. Large language models (LLMs) use APIs to perform many of their functions. Most LLMs are also available as APIs. Unfortunately, that also means that AI is susceptible to common API security concerns.

Cybersecurity firm Wallarm recently issued the 2024 API ThreatStats Report, an exhaustive overview of API vulnerabilities. According to Wallarm’s report, the sudden surge in LLMs and LLM-driven applications is causing a massive spike in API threats.

What are some of the API security risks caused by AI, though? We’ve delved deep into Wallarm’s ThreatStats Report and some of the most recent thought leadership on AI and API security to help you make sure your AI and AI-driven products are as secure as possible.

How The Rise of AI Creates API Security Vulnerabilities

The explosion of interest in AI is creating an unprecedented API security risk in numerous ways. AIs use APIs to both send and receive data, for one thing. Meanwhile, more APIs are being created than ever before. An increasing number of those APIs are public-facing, giving cybercriminals more surface area to attack.

As Tim Erlin, VP of Product at Wallarm, puts it, “APIs are already a blind spot for most organizations, and the reality is that generative AI relies heavily on APIs. With the current pressure for organizations to roll out AI/LLM-enabled platforms quickly, security best practices often take a back seat to velocity. The rapid velocity of generative AI adoption is driving a dramatic increase in the API attack surface, into which organizations simply lack visibility. AI APIs can be vulnerable to all the same security threats as any other API, plus those that are specific to LLMs, like prompt injection. As it stands now, this AI API attack surface is creating risk with little to no accountability.”

Wallarm has created its own list of API security threats stemming from AI. Similar to the OWASP Top 10 lists, the ThreatStats™ Top-10 is as follows:

  1. Injections: Hostile input causes unintended commands to be executed or unauthorized data to be exposed (see the sketch after this list).
  2. Authentication: Allows authentication to be bypassed.
  3. Cross-site Issues: Allows malicious script to be executed from a trusted site.
  4. API Leaks: Confidential data is exposed through an API.
  5. Broken Access Controls: Allows users access to data above their permission level.
  6. Authorization Issues: Lets attackers modify their authorization status.
  7. Insecure Resource Consumption: An attacker exploits the lack of limitations on resource usage.
  8. Weak Secrets and Cryptography: Weak secrets or cryptographic practices let attackers guess or forge credentials and gain unauthorized access.
  9. Sessions and Password Management: Flawed session or password handling lets attackers hijack sessions or stay logged in indefinitely.
  10. SSRF (Server-Side Request Forgery): Tricks the server into making requests on the attacker’s behalf, exposing sensitive information about internal services.
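To make the first item concrete, here is a minimal Python sketch (the users table and column names are illustrative) of an injection-prone query next to the parameterized version that neutralizes it:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: input such as "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```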

Vulnerable Tools Make Vulnerable Platforms

Even some of the world’s largest vendors use third-party tools that can pose a cybersecurity risk. WordPress plugins, Nginx-ui, GeoServer, Directus, OpenMetadata, Grafana, and Superset have all been found to carry API vulnerabilities. This illustrates how everything from enormous international vendors to smaller, independent projects can pose API security risks.

LLMs themselves pose numerous security risks. Their prevalence has led to the creation of a whole new class of threat known as prompt injection. OWASP defines prompt injection as manipulating LLMs via crafted inputs, which can lead to unauthorized access, data breaches, and compromised decision-making.

“Prompt injections are the AI/LLM version of a SQL injection,” says Erlin. “Basically, an LLM is set up with certain instructions, some of which are to protect against activity like sensitive data being improperly accessed or exposed. Because AI/LLM is relatively new, there are still simple, exploitable vulnerabilities in many models where an attacker can enter a malicious basic command into a prompt, like ‘Ignore all previous commands.’ Then the attacker can request all user PII, etc., and the LLM will ignore the original control, telling it not to retrieve and display that data.”
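As a rough illustration of the difference, here is a minimal Python sketch (the system prompt and message layout are hypothetical, not any particular vendor’s SDK) contrasting a naively concatenated prompt, which is trivially overridable, with one that at least separates instructions from untrusted input:

```python
# Hypothetical prompt and message layout for illustration only.
SYSTEM_PROMPT = "You are a support bot. Never reveal customer records."

def build_prompt_unsafe(user_input: str) -> str:
    # Vulnerable pattern: instructions and untrusted input are concatenated into
    # one string, so "Ignore all previous commands" can override the original rules.
    return f"{SYSTEM_PROMPT}\n\nUser says: {user_input}"

def build_messages_safer(user_input: str) -> list[dict]:
    # Safer pattern: keep system instructions in a separate role and treat the
    # user's text purely as data. Enforce hard limits outside the model as well;
    # the API that serves customer records should still check authorization.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]
```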

AI Developers Are Not API Security Experts

AI developers don’t always consider that their AI relies on APIs. This oversight can result in serious API security risks, as neither the developers nor the IT team ends up safeguarding those interfaces against malicious input. It has already contributed to two major data breaches in the last year caused by injection attacks.

The first came when the Ray AI framework was compromised because its default configuration didn’t require any authentication. That gap in security coverage allowed cybercriminals to submit arbitrary system commands, resulting in exposed credentials.
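The basic mitigation is to refuse unauthenticated requests entirely. Below is a minimal sketch, not Ray’s actual code: a hypothetical job-submission endpoint (Flask, with an environment-provided token) that rejects callers who cannot present the expected credential.

```python
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
EXPECTED_TOKEN = os.environ["JOB_API_TOKEN"]  # provisioned out of band

@app.route("/api/jobs", methods=["POST"])
def submit_job():
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    # Constant-time comparison avoids leaking the token through timing differences.
    if not hmac.compare_digest(token, EXPECTED_TOKEN):
        abort(401)
    job_spec = request.get_json(force=True)
    # Validate job_spec against an allowlist here; never pass it to a shell directly.
    return {"status": "queued"}, 202
```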

A similar error occurred in the NVIDIA Triton Inference Server for Linux and Windows breach from January 2024. That vulnerability allowed cybercriminals to use the model load API to execute code, escalate privileges, or conduct denial of service attacks.

As Erlin puts it, “Visibility and communication can go a long way to helping bridge the gap between security and development. When the security team can provide easy access to relevant data about which AI APIs are vulnerable to exploit while also providing risk-based prioritization with remediation guidance, it can go a long way toward streamlining collaboration.”

“But collaboration can only go so far,” says Erlin. “Developers need to consider the delivery of secure code as part of their jobs. ‘I’m a developer, not security’ should be a request for help, not an excuse.”

Examples of AI API Security Risks

APIs used by AIs are not always as secure as they should be, and in some cases authorization can be bypassed entirely. In February 2024, for example, it came to light that an OAuth2 client using client_secret_jwt as its authentication method could skip the authorization flow altogether. Other breaches stemmed from access control issues that let malicious actors reach data they shouldn’t.
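As a hedged sketch (not the affected vendor’s patch), the snippet below shows what properly verifying a client_secret_jwt assertion at a token endpoint can look like with PyJWT; the endpoint URL is illustrative.

```python
import jwt  # PyJWT

TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"  # illustrative URL

def verify_client_assertion(assertion: str, client_id: str, client_secret: str) -> bool:
    try:
        claims = jwt.decode(
            assertion,
            client_secret,            # client_secret_jwt signs with the shared secret
            algorithms=["HS256"],     # reject "none" and unexpected algorithms
            audience=TOKEN_ENDPOINT,  # the assertion must be addressed to this server
            options={"require": ["exp", "iss", "sub"]},
        )
    except jwt.InvalidTokenError:
        return False
    # The issuer and subject of the assertion must both be the client itself.
    return claims.get("iss") == client_id and claims.get("sub") == client_id
```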

Wallarm’s API ThreatStats Report concludes with three examples of data breaches involving AI from Q1 2024. The fact that millions of records have been exposed from enormous multinational corporations like Goodyear and Airbus should tell you all you need to know about the severity of the API security issue for AI.

ZenML Takeover

The ZenML Takeover has been one of the year’s most significant data breaches. It’s also one of the most alarming, given the attack’s simplicity and the severity of its consequences. In the ZenML Takeover, attackers targeted an endpoint intended for AI/ML operations that could be exploited simply by sending an existing username along with a new password. It’s a prime example of how cybercriminals are targeting resources meant for AI and machine learning, and of the risks that can result.
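A minimal sketch of the defensive pattern (not ZenML’s actual code, and using a toy in-memory user store) is a password-change endpoint that operates only on the already-authenticated caller and re-verifies the current password, rather than trusting a username supplied in the request body:

```python
import hashlib

from flask import Flask, abort, request

app = Flask(__name__)

# Toy in-memory stores for illustration only.
USERS = {"alice": hashlib.sha256(b"old-password").hexdigest()}
SESSIONS = {"token-123": "alice"}  # session token -> username

def _hash(password: str) -> str:
    # Illustrative only; a real service should use a salted KDF such as bcrypt or argon2.
    return hashlib.sha256(password.encode()).hexdigest()

@app.route("/api/users/password", methods=["PUT"])
def change_password():
    # Resolve the caller from the session token; never trust a username in the body.
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    caller = SESSIONS.get(token)
    if caller is None:
        abort(401)
    body = request.get_json(force=True)
    # Re-verify the current password before allowing a change.
    if _hash(body.get("current_password", "")) != USERS[caller]:
        abort(403)
    USERS[caller] = _hash(body["new_password"])
    return {"status": "updated"}
```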

NVIDIA AI Platform Vulnerability

NVIDIA technology is a widespread component of distributed AI services, and the second AI-related API breach on Wallarm’s list is a warning of what can happen when an AI tool gets compromised. The NVIDIA AI platform vulnerability allowed cybercriminals remote access to files, with potentially disastrous consequences. This security flaw, known as path traversal, could have resulted in code execution, denial of service, escalation of privileges, information disclosure, and data tampering. It’s a perfect example of why we need to ensure AI tools are secure going forward.
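The standard defense against path traversal is to resolve the requested path and confirm it stays inside the intended directory before touching it. Here is a minimal sketch (not NVIDIA’s fix; the model directory is illustrative):

```python
from pathlib import Path

MODEL_ROOT = Path("/srv/models").resolve()  # illustrative base directory

def open_model_file(relative_name: str) -> bytes:
    candidate = (MODEL_ROOT / relative_name).resolve()
    # Reject anything that escapes MODEL_ROOT once symlinks and ".." are resolved,
    # which blocks traversal sequences such as "../../etc/passwd".
    if not candidate.is_relative_to(MODEL_ROOT):
        raise PermissionError(f"path escapes model root: {relative_name}")
    return candidate.read_bytes()
```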

Hail OAuth2 Bypass

The third and final AI data breach mentioned by Wallarm involved an authentication error. Hail, a popular Python-based tool for data analysis, allowed attackers to illicitly create accounts and then match them to the target organization’s domain. This vulnerability could potentially have led to the creation of Azure admin accounts. The Hail OAuth2 Bypass illustrates the need for good validation and verification processes. It’s also a warning of what can happen when a popular tool becomes compromised.
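A hedged sketch of the underlying validation step (not Hail’s actual patch; the domain allowlist is illustrative) is to provision accounts only when the identity provider reports a verified email on a domain the organization explicitly controls:

```python
ALLOWED_DOMAINS = {"example.org"}  # illustrative allowlist of organization domains

def may_provision_account(claims: dict) -> bool:
    email = claims.get("email", "")
    domain = email.rsplit("@", 1)[-1].lower()
    # Require both a verified email claim and an explicitly allowed domain;
    # never infer organization membership from the domain string alone.
    return bool(claims.get("email_verified")) and domain in ALLOWED_DOMAINS
```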

How To Improve API Security For Your AI

Vulnerable APIs can put every level of an organization at risk of attack. A multi-tiered approach to cybersecurity helps to neutralize this threat. Wallarm recommends using continuous vulnerability management, adopting proactive defense mechanisms, and implementing a company-wide culture of security awareness.

That said, certain cybersecurity management tools, like FortiSIEM and Grafana, have themselves been susceptible to security issues. Vulnerabilities in Fortinet FortiSIEM allowed attackers to conduct remote code execution, for example. Grafana has been struggling with security issues for years, ranging from cross-site scripting to broken access control. This creates what Wallarm calls the vulnerability loop, where the cybersecurity tools used for API security are themselves security risks. Be careful, therefore, about which cybersecurity tools you rely on.

Some API calls can do a lot of damage, such as deleting an entire inbox. To help ensure your AI is as secure as possible, start by listing all APIs that could put your organization at risk. Then put those APIs into an API security workflow, for example by triggering two-factor authentication for certain risky API calls. Be careful not to overdo the extra authorization steps, though, as you don’t want to annoy your users.
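As a minimal sketch of that workflow (the operation names and callables are hypothetical, not a specific framework’s API), a registry of risky operations can gate execution behind a step-up challenge:

```python
# Hypothetical registry of operations that require a second factor before running.
RISKY_OPERATIONS = {"mailbox.delete_all", "user.export_pii", "billing.change_account"}

def dispatch(operation: str, user: dict, perform, request_second_factor) -> str:
    """Run `perform()` only after step-up auth for operations on the risk list.

    `perform` and `request_second_factor` are callables supplied by the caller;
    the names here are illustrative.
    """
    if operation in RISKY_OPERATIONS and not request_second_factor(user):
        return "denied: second factor required"
    perform()
    return "ok"

# Example usage with stub callables:
result = dispatch(
    "mailbox.delete_all",
    user={"id": 42},
    perform=lambda: print("inbox deleted"),
    request_second_factor=lambda user: False,  # pretend the 2FA challenge failed
)
print(result)  # -> denied: second factor required
```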

Final Thoughts on AI and API Security

AI isn’t going anywhere, which means APIs are only going to become more prevalent. This could result in a major cybersecurity cataclysm if we’re not careful. There have already been more data breaches due to lapses in API security since Wallarm published its report, says Erlin. The Ticketmaster data breach exposed over 560 million accounts, the PandaBuy data breach exposed 1.3 million accounts, and the Dell data breach exposed 49 million users’ personal information.

Thankfully, there are techniques developers can use to avoid some of these risks going forward. “Discovery, visibility, active protection, and rapid response capabilities are all important,” says Erlin. “But shifting left and implementing API security testing in production and CI/CD environments can help promote a more collaborative approach to API and application security into the software development lifecycle.” In addition to securing AI APIs, understanding the OWASP Top 10 for Large Language Model Applications is a good starting point, he adds.
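One way that shift-left testing can look in practice is a pipeline check that fails the build when a sensitive endpoint answers anonymous requests. The sketch below is a hedged example using pytest and requests; the base URL and endpoint paths are placeholders, not any specific product’s API.

```python
import pytest
import requests

BASE_URL = "https://staging.example.com"  # placeholder staging environment
PROTECTED_ENDPOINTS = ["/api/v1/models", "/api/v1/users", "/api/v1/admin/config"]

@pytest.mark.parametrize("path", PROTECTED_ENDPOINTS)
def test_endpoint_requires_authentication(path):
    response = requests.get(BASE_URL + path, timeout=10)
    # Anything other than 401/403 means the endpoint is reachable anonymously.
    assert response.status_code in (401, 403), f"{path} is exposed without auth"
```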