IRONSCALES Brings Generative AI to Email Security

IRONSCALES this week made available in beta a tool that leverages OpenAI’s generative pre-trained transformer (GPT) technology to make it simpler for end users to identify suspicious emails.

IRONSCALES CEO Eyal Benishti said Themis Co-pilot for Microsoft Outlook is based on PhishLLM, a large language model (LLM) that the company hosts on behalf of customers. That capability provides end users with a personal AI coach to help them combat phishing attacks, he added.
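
IRONSCALES has not published the PhishLLM API, but the general pattern of an LLM-backed phishing coach is easy to illustrate. The sketch below is a minimal, hypothetical example that uses OpenAI's public chat completions API as a stand-in for a privately hosted model; the prompt, model name and output format are assumptions for illustration, not the vendor's implementation.

```python
# Minimal sketch of an LLM-backed phishing coach.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; a product such as Themis
# Co-pilot would instead call a privately hosted model like PhishLLM.
from openai import OpenAI

client = OpenAI()

def explain_email_risk(sender: str, subject: str, body: str) -> str:
    """Ask the model for a plain-language phishing assessment."""
    prompt = (
        "You are an email security coach. Assess whether the email below "
        "is likely a phishing attempt and explain the warning signs in "
        "plain language an end user can act on.\n\n"
        f"From: {sender}\nSubject: {subject}\n\n{body}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(explain_email_risk(
    "ceo@acme-corp.co",  # note the look-alike domain
    "Urgent wire transfer",
    "Please wire $48,000 to the account below before noon. "
    "Keep this confidential.",
))
```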

That approach also eliminates any concern that corporate data might inadvertently be shared via a publicly available LLM platform such as ChatGPT, noted Benishti.

IRONSCALES also plans to add the same capability to the Gmail version of its tool, he said.

Overall, the goal is to augment end users, who are the first line of defense against phishing and other forms of business email compromise (BEC) attacks. The challenge organizations have always faced is that not every end user is equally adept at identifying cyberattacks, especially now that attackers, thanks to ChatGPT and other generative AI platforms, are crafting emails that are more challenging to detect.

In effect, IRONSCALES is making a case for using AI to combat attacks that themselves may leverage AI to better mimic internal workflows.

Previously, IRONSCALES developed Themis AI, which aggregates data from millions of security events from users, devices and threat intelligence signals to augment cybersecurity analysts. Generative AI technologies are now making it possible to augment end users, as well.

Historically, many BEC attacks were easy to detect because they were created by someone who didn't natively speak the language used by the people who work for the targeted organization. Now that generative AI can produce fluent text, the only way to reliably thwart these attacks is to rely more on artificial intelligence (AI) to detect anomalies, such as bank accounts that have never previously been used to transfer funds.
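
As a concrete illustration, an anomaly check of that kind can be as simple as flagging payment instructions that reference a bank account never before associated with the requesting party. The sketch below is a minimal, hypothetical example of the idea; the field names and history format are assumptions, not a description of IRONSCALES' detection engine.

```python
# Minimal sketch of a payment-anomaly check: flag any wire request
# that names a bank account not previously seen for that counterparty.
from collections import defaultdict

# Historical (counterparty -> known accounts) map, e.g. built from
# past cleared transactions.
known_accounts: dict[str, set[str]] = defaultdict(set)
known_accounts["Acme Supplies"].add("DE44-5001-0517-5407-3249-31")

def is_anomalous_transfer(counterparty: str, account: str) -> bool:
    """True if this account has never been used by this counterparty."""
    return account not in known_accounts[counterparty]

request = {"counterparty": "Acme Supplies",
           "account": "GB29-NWBK-6016-1331-9268-19"}  # new, unseen account

if is_anomalous_transfer(request["counterparty"], request["account"]):
    print("Flag for review: first-time bank account for this counterparty.")
```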

Less clear is the degree to which generative AI platforms might one day reduce the need to spend as much on end-user cybersecurity training. As AI becomes more adept at identifying BEC attacks, for example, the need to train end users to recognize them should become less acute.

In addition, the overall level of fatigue experienced by cybersecurity teams should be reduced, along with the number of attacks that need to be investigated, noted Benishti.

Of course, roles and responsibilities within cybersecurity teams will also evolve as it becomes more apparent that low-level attacks can be thwarted using AI. That should free up members of the cybersecurity team to focus on more sophisticated targeted attacks that often go undetected. Many of these attacks slip through the cracks simply because too much time is spent on routine attacks that could easily be handled by an AI engine.

It’s still early days as far as the application of generative AI to cybersecurity is concerned, but it’s apparent the nature of the battle is about to fundamentally change. In the meantime, cybersecurity teams would be well-advised to consider not just the state of AI today but also what may be possible tomorrow.

Michael Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
