Akto launches GenAI proactive security testing solution

PRESS RELEASE

San Francisco, California – February 13, 2024

From 77% of organizations have adopted or are exploring AI in some way, pushing for a more efficient and automated workflow. With the growing reliance on GenAI models and language learning models (LLMs) like ChatGPT, the need for robust security measures has become paramount.

Akto, a leading API security company, is proud to announce the launch of its revolutionary GenAI security testing solution. This cutting-edge technology marks a significant milestone in the field of AI security, making Akto the world’s first proactive GenAI security testing platform.

“Akto has a new ability to scan APIs that leverage AI technology and this is critical to the future of application security. I have invested in building security training and applications for AI early on, and I’m excited to see other security companies do the same for security assessment of AI technologies.” – Jim Manico, former OWASP Global Board Member, Secure Coding trainer.

On average, an organization uses 10 GenAI models. Often most LLMs in production will receive data indirectly via API. This means tons and tons of sensitive data is processed by LLM APIs. Ensuring the security of these APIs will be critical to protecting user privacy and preventing data leaks. There are several ways in which LLMs can be abused today, leading to leaks of sensitive data.

  1. Rapid Injection Vulnerabilities – The risk of unauthorized timely injections, where malicious inputs can manipulate the output of LLM, has become a major concern.

  2. Denial of Service (DoS) threats – LLMs are also susceptible to DoS attacks, where the system is overloaded with requests, resulting in service disruptions. There has been an increase in DoS incidents reported against LLM APIs over the past year.

  3. Overreliance on LLM outputs – Over-reliance on LLMs without adequate verification mechanisms has led to cases of inaccuracies and data leaks. Organizations are encouraged to implement robust validation processes, as the industry sees an increase in data leak incidents due to over-reliance on LLMs.

“Protecting GenAI systems requires a multifaceted approach with the need to protect not only the AI ​​from external inputs but also external systems that depend on their outputs. “- OWASP Top 10 for LLM AI Applications Core team member.

On March 20, 2023, a disruption with OpenAI’s AI tool, ChatGPT. The outage was caused by a vulnerability in an open source library, which may have exposed some customers’ payment information. Very recently, on January 25, 2024, a critical vulnerability was discovered in Nothing LLM (8,000 Github Stars) that turns any document or content into a context that any LLM can use while chatting. A Unauthenticated API path (file export) may allow attackers to crash the server by causing a denial of service attack. These are just a few examples of security incidents related to the use of LLM models.

Akto’s GenAI Security Testing solution addresses these challenges head-on. Leveraging advanced testing methodologies and cutting-edge algorithms, Akto provides comprehensive security assessments for GenAI models, including LLM. The solution incorporates a wide range of innovative features, including over 60 meticulously designed test cases covering various aspects of GenAI vulnerabilities such as timely injection, over-reliance on specific data sources, and more. These test cases were developed by Akto’s team of GenAI security experts, ensuring the highest level of protection for organizations implementing GenAI models. Akto, a San Francisco-based company, is a leading API security company specializing in providing cutting-edge solutions to protect APIs from security vulnerabilities. With a team of AI security experts and a passion for innovation, Akto is committed to enabling organizations to protect their applications from attacks and ensure safe use of GenAI APIs. Find out more about Akto Here.

Currently, security teams manually test all LLM APIs for flaws before release. Due to the time sensitivity of product releases, teams can only test certain vulnerabilities. As hackers continue to find more creative ways to exploit LLMs, security teams must find an automated way to protect LLMs at scale.

“Often the input for an LLM comes from an end user or the output is shown to the end user or both. The tests attempt to exploit LLM vulnerabilities through different encoding methods, separators and markers. This particularly detects weak security practices where developers hardcode the input or place special markings around the input. “- Ankush Jain, CTO at Akto.io

AI security testing also finds weak security measures against sanitizing LLM output. It is intended to detect attempts to inject malicious code for remote execution, cross-site scripting (XSS), and other attacks that could allow attackers to extract session tokens and system information. Additionally, Akto also checks whether LLMs are likely to generate false or irrelevant reports.

“From Prompt Injection (LLM:01) to Overreliance (LLM09) and new vulnerabilities and breaches every day and we build systems that are secure by default; It is critical to timely test systems for these evolving threats. I’m excited to see what Akto has in store for my LLM projects” – OWASP Top 10 for LLM AI Applications Core team member.

To further highlight the importance of GenAI security, a recent September 2023 survey by Gartner revealed that 34% of organizations are already using or implementing artificial intelligence (AI) application security tools to mitigate risks associated with generative artificial intelligence (GenAI). Over half (56%) of respondents said they are exploring such solutions as well, highlighting the critical need for robust security testing solutions like those from Akto.

To showcase the capabilities and importance of Akto’s GenAI Security Testing solution, Akto founder and CEO, Ankita, will present at the prestigious Austin API Summit 2024. The session, titled “LLM API Security,” will delve deeper into the problem statement, highlight real-world examples, and demonstrate how solutions like Akto’s provide a robust defense against AI-related vulnerabilities.

As organizations strive to harness the power of AI, Act is at the forefront of ensuring the security and integrity of these transformative technologies. The launch of the GenAI Security Testing solution reinforces their commitment to innovation and their dedication to enabling organizations to embrace GenAI with confidence.

About Akto

Akto, a San Francisco-based company, is a leading API security company specializing in providing cutting-edge solutions to protect APIs from security vulnerabilities. With a team of AI security experts and a passion for innovation, Akto is committed to enabling organizations to protect their applications from attacks and ensure safe use of GenAI APIs. Find out more about Akto Here.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *