Why Generative AI is a Threat to API Security

AI is making waves across the entire tech industry. Generative AI, like large language models (LLMs), is set to empower programmers with all sorts of new capabilities to quickly develop positive end-user experiences. But this power has a dark underbelly: in the hands of malicious actors, next-generation AI can be used to amplify nefarious deeds against web applications.

The problem is a pressing concern for application programming interfaces (APIs), especially public-facing interfaces that share data with external partners. For example, API attackers might leverage AI to disguise their traffic as legitimate, helping them remain below the radar of traditional monitoring systems as they hunt for vulnerabilities like broken authorization or business logic flaws. And as defenders level up their own AI-driven security systems, some opine we are edging toward an arms race of ‘good AI’ versus ‘bad AI.’


I recently met with Tom McNamara, CEO of cybersecurity technology company Hopr.co, to consider the ramifications of new AI advancements. Below, we’ll examine why AI is a threat to APIs. We’ll explore how attackers could weaponize generative AI and consider what defenders can do to respond.

Why AI Poses a Threat to APIs

AI can be used for good or evil purposes, said McNamara. For good, it can certainly help developers train on proper coding patterns and catch errors. Used maliciously, it can run through attack scenarios very quickly. Its ability to rapidly test patterns, pull in data and chain tactics together will likely expose gaps in API endpoints and the underlying logic. “The advantages are more for them than us,” he explained. “It’s like death from a thousand cuts.”

Specifically, AI could be used to abuse APIs and applications. This poses a serious threat simply because so many APIs are in production these days, explained McNamara. A large organization might run hundreds of containerized workloads in the cloud, exposing thousands of API endpoints with varying permission levels. The attack surface is large, and generative AI could sift through it very quickly.

And if an API is exploited, the consequences could be serious for businesses and end users alike. For example, finance is a sector that commonly uses APIs under the covers to extract and share data with partners. If a shadow endpoint behind an insurance API is breached, for instance, it could leak sensitive data, undermining user trust and incurring hefty fines for exposing personally identifiable information (PII).

How Attackers Could Use AI for Nefarious Purposes

Attackers are well aware of defensive techniques. Thus, they could train generative AI to work around things they know a defensive system is looking for, explained McNamara. This might involve changing the IP address and making many smaller-sized requests. Or, as Dana Epp put it, API attackers could “embed AI capabilities inside of malware itself to obfuscate and evade traditional security controls.”

Also, generative AI could be used to compromise credentials in many ways. First, it could guess human-generated passwords by detecting patterns and using common combinations. Or, it could be used for credential stuffing, making requests at a slow enough rate to stay hidden. This is actually quite common—Salt Labs recently found that 80% of attacks happen over authenticated APIs.
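On the defensive side, monitoring systems can catch this kind of low-and-slow credential stuffing by counting login failures over a long window rather than a short burst window. The sketch below is hypothetical; the six-hour window and failure threshold are assumed values for illustration, not recommendations:

```python
from collections import defaultdict, deque

# Assumed tuning values -- real thresholds depend on the API's traffic profile.
WINDOW_SECONDS = 6 * 60 * 60   # look back six hours, not six seconds
MAX_FAILURES = 20              # failures tolerated per source within the window

_failures = defaultdict(deque)  # source identifier -> timestamps of failed logins

def record_failed_login(source: str, now: float) -> bool:
    """Record a failed login; return True if the source should be flagged."""
    events = _failures[source]
    events.append(now)
    # Drop events that have aged out of the long window.
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()
    return len(events) > MAX_FAILURES
```

The point of the long window is that a burst-oriented rate limiter never fires on one request every ten minutes, while the cumulative count here eventually does.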

Some attacks are as simple as stealing API keys through phishing tactics and spoofing identities. Of course, alphanumeric API keys are typically randomly generated strings that are hard to guess, but generative AI could be tasked with collecting leaked keys from areas like public repositories.
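Defenders can run that same search before attackers do. Below is a minimal, hypothetical sketch of regex-based secret scanning; the two patterns are illustrative only, and production scanners such as gitleaks or truffleHog use far larger rule sets plus entropy analysis:

```python
import re

# Illustrative patterns only: one well-known key format and one generic
# "api_key = ..." assignment. Real rule sets cover hundreds of formats.
KEY_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"]?[A-Za-z0-9]{32,}"),
]

def find_leaked_keys(text: str) -> list:
    """Return substrings of `text` that match known API-key patterns."""
    hits = []
    for pattern in KEY_PATTERNS:
        for match in pattern.finditer(text):
            hits.append(match.group(0))
    return hits
```

Running a check like this in a pre-commit hook keeps keys out of the repository in the first place, which is cheaper than rotating them after a leak.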

How API Defenders Should Respond

So, how should API owners respond? A first priority is to plug gaps so that logical flows can’t be abused. “The logic of the API is where generative AI could really have a stronger negative impact,” said McNamara, “by very quickly breaking down the logic a business is using with their API and how the data is structured and what the requests look like.”

The OWASP API Security Top 10 now mentions Unrestricted Access to Sensitive Business Flows as a top API risk. And generative AI could help attackers introspect schemas to find these gaps. Thus, organizations need to embrace a zero-trust mindset when sharing information about APIs with internal developers or partner consumers.

Responding to new generative AI-fueled API threats is going to be a learning experience, and it’s hard to predict how this will unfold, said McNamara. Regardless, he suggested some additional measures to consider:

  • Secure the transport layer with TLS.
  • Shift security left to introduce better coding hygiene and close holes for AI threats earlier.
  • Self-test APIs using white hat hacking techniques to simulate attacks before going into production.
  • Use contract testing to inform drift detection in real time.

Level Up Your AI Game

Much of today’s perimeter security is based only on detecting malformed requests or spotting malware in messages. Moreover, you typically don’t have much foreknowledge of when or where an attack will hit. Instead, you have to wait for events to occur and then learn from them, explained McNamara.

What we do know is that the two sides will likely continue to escalate their use of AI. If the defender’s AI/ML isn’t considering new datasets, it won’t be as effective at preventing novel attack strategies. Thus, API providers will need more advanced countermeasures to detect and defeat AI-based adversaries quickly at runtime.


Bill Doerrfeld

Bill Doerrfeld is a tech journalist and analyst based in Seattle. His beat is cloud technologies, specifically the web API economy. He began researching APIs as an Associate Editor at ProgrammableWeb, and since 2015 has been the Editor at Nordic APIs, a high-impact blog on API strategy for providers. He loves discovering new trends, researching new technology, and writing on topics like DevOps, REST design, GraphQL, SaaS marketing, IoT, AI, and more. He also gets out into the world to speak occasionally.
