Lessons for CISOs from OWASP’s Top 10 for LLMs


OWASP recently released its Top 10 list for Large Language Model (LLM) applications, in an effort to educate the industry on potential security threats to be aware of when implementing and managing LLMs. This release represents a notable step in the right direction for the security community, as developers, designers, architects and managers now have 10 clearly defined areas to focus on.

Much like the National Institute of Standards and Technology (NIST) framework and the Cybersecurity and Infrastructure Security Agency (CISA) guidelines intended for the security industry, the OWASP list creates an opportunity for better alignment within organizations. With this knowledge, Chief Information Security Officers (CISOs) and security leaders can ensure that the best security precautions are taken around LLM technologies that are rapidly evolving. LLMs are, at bottom, just code. We need to apply what we’ve learned about code authentication and authorization to prevent abuse and compromise. This is why identity provides the kill switch for AI: the ability to authenticate and authorize each model and its actions, and to terminate it in the event of misuse, compromise, or error.

Adversaries are exploiting gaps in organizations

As security professionals, we have been talking for a long time about what adversaries are doing: data poisoning, supply chain vulnerabilities, excessive agency, model theft, and more. The OWASP Top 10 for LLMs is evidence that the industry is recognizing where the risks lie. To protect our organizations, we must correct course quickly and be proactive.

Generative artificial intelligence (GenAI) is shining a spotlight on a new wave of software risks rooted in the same capabilities that make it powerful in the first place. Every time a user asks an LLM a question, it scans countless web locations in an attempt to provide an answer or AI-generated output. While every new technology brings new risks, LLMs are particularly concerning because they are so different from the tools we are used to.

Nearly all of the top 10 LLM threats center on compromising the authentication of identities used in and around models. The attack methods cover a wide range, affecting not only the identities of model inputs but also the identities of the models themselves, as well as their outputs and actions. This has a knock-on effect: it requires authentication in the code-signing and build processes to stop the vulnerability at the source, as in the sketch below.
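To make that concrete, here is a minimal sketch of how a deployment pipeline might refuse to load a model artifact that was not signed by the build system. It assumes the build pipeline publishes a detached Ed25519 signature alongside the model file; the file names, key handling, and the TRUSTED_BUILD_KEY constant are illustrative assumptions, not a prescribed implementation.

# Minimal sketch: verify a signed model artifact before loading it.
# Assumes the build pipeline publishes an Ed25519 signature next to the
# model file; file names and key distribution here are hypothetical.
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_model_artifact(model_path: str, sig_path: str, pubkey_bytes: bytes) -> bool:
    """Return True only if the artifact matches the detached signature."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    artifact = Path(model_path).read_bytes()
    signature = Path(sig_path).read_bytes()
    try:
        public_key.verify(signature, artifact)
        return True
    except InvalidSignature:
        return False


# Example gate in a deployment script: refuse to serve an unverified model.
# TRUSTED_BUILD_KEY would be distributed out of band by the build system.
# if not verify_model_artifact("model.bin", "model.bin.sig", TRUSTED_BUILD_KEY):
#     raise RuntimeError("Model artifact failed signature verification")

The point of a gate like this is that a poisoned or swapped artifact fails closed: the model simply never loads, rather than silently serving traffic.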

Authenticating training data and models to prevent poisoning and abuse

With more machines communicating with each other than ever before, it is necessary to define and authenticate the identities used to send information and data from one machine to another. The model must authenticate its code so that the same authentication can be mirrored on other machines. Models are vulnerable and worth keeping an eye on: if there is a problem with the input or the initial model, a domino effect will occur. Models and their inputs must therefore be authenticated. Otherwise, security team members will be left wondering whether this is really the model they trained and whether it uses the plugins they approved. When models can use APIs and authentication from other models, authorization must be well defined and managed, and each model must be authenticated with its own unique identity, as illustrated in the sketch below.
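As a minimal sketch of that idea, assume each model instance is provisioned with its own secret at deployment time and signs its requests to other models or plugins; the model names, secrets, and action labels here are hypothetical.

# Minimal sketch of per-model identity and authorization for model-to-model calls.
import hashlib
import hmac

# Which downstream actions each model identity is authorized to perform.
MODEL_AUTHORIZATIONS = {
    "support-chat-v2": {"search-docs", "create-ticket"},
    "summarizer-v1": {"search-docs"},
}

# Per-model secrets, provisioned by the platform and never shared between models.
MODEL_SECRETS = {
    "support-chat-v2": b"example-secret-a",
    "summarizer-v1": b"example-secret-b",
}


def is_call_allowed(model_id: str, action: str, payload: bytes, signature: str) -> bool:
    """Authenticate the calling model, then check it is authorized for the action."""
    secret = MODEL_SECRETS.get(model_id)
    if secret is None:
        return False  # unknown model identity
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # request was not signed with that model's secret
    return action in MODEL_AUTHORIZATIONS.get(model_id, set())

The design choice that matters here is the separation: authentication proves which model is calling, and authorization decides what that model is allowed to do, so a compromised summarizer cannot quietly start creating tickets.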

We saw this play out recently with the AT&T outage, which was attributed to a “software misconfiguration” and left thousands of people without cell phone service during their morning commute. The same week, Google encountered a very different but equally troubling bug: its Gemini image generator misrepresented historical images, raising concerns about diversity and bias in AI. In both cases, the root of the problem was the data used to train the GenAI models and LLMs, as well as the lack of guardrails around them. To prevent problems like this in the future, AI companies need to spend more time and money properly training models and curating the data that informs them.

To build a secure and resilient system, CISOs and security leaders should design architectures in which a model works alongside other models. That way, an adversary stealing one model does not collapse the entire system, and a kill-switch approach becomes possible: you can turn off a compromised model, keep operating, and protect the company’s intellectual property. This puts security teams in a much stronger position and prevents further damage. A kill-switch check might look like the sketch below.
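For illustration, here is a minimal sketch of a kill-switch check, assuming the security team maintains a revocation list of model identities and a failover map; the names and the storage mechanism are assumptions for the example.

# Minimal sketch of a kill-switch check before routing traffic to a model.
REVOKED_MODELS = {"support-chat-v1"}  # e.g. populated by an incident-response workflow

MODEL_FAILOVER = {
    "support-chat-v2": "support-chat-fallback",
}


def select_model(requested_model: str) -> str:
    """Refuse revoked models and fail over so the business keeps running."""
    if requested_model in REVOKED_MODELS:
        fallback = MODEL_FAILOVER.get(requested_model)
        if fallback is None or fallback in REVOKED_MODELS:
            raise RuntimeError(
                f"Model {requested_model} is revoked and no fallback is available"
            )
        return fallback
    return requested_model

Because the check runs on every request, revoking an identity takes effect immediately without redeploying anything else in the system.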

Act on the lessons in the list

For security leaders, I recommend following OWASP’s guidance and asking your CISO or other C-level executives how your organization scores against these vulnerabilities overall. This framework makes us all more accountable for providing market-level security insights and solutions. It is encouraging to have something to show our CEO and board of directors to illustrate how prepared we are for these risks.

As we continue to see risks arise with LLMs and AI customer service tools, as we just saw with Air Canada’s chatbot offering a refund to a traveler, companies will be held responsible for errors. It’s time to start regulating LLMs to ensure they are properly trained and ready to handle business transactions that could impact profits.

In conclusion, this list serves as a useful framework for the growing set of vulnerabilities and risks we need to watch out for when using LLMs. While more than half of the top 10 risks can be substantially mitigated with an AI kill switch, companies will still need to evaluate their options when implementing new LLMs. If the right tools are in place to authenticate inputs and models, as well as model actions, companies will be better equipped to apply the AI kill switch and prevent further damage. While this may seem daunting, there are ways to protect your organization as AI and LLMs make their way into your network.


