US Department of Defense urges hackers to hack ‘artificial intelligence’


The limits of current artificial intelligence must be tested before its results can be relied on


Dr. Craig Martell, Chief Digital and Artificial Intelligence Officer at the US Department of Defense, appealed to the audience at DEF CON 31 in Las Vegas to come and hack large language models (LLMs). It’s not often you hear a government official call for action like this. So why did he make such a challenge?

LLMs as a trending topic

During Black Hat 2023 and DEF CON 31, artificial intelligence (AI) and the use of LLMs have been trending topics, and given the hype since the release of ChatGPT just nine months ago, that is not surprising. Dr. Martell, who is also a university professor, offered an interesting explanation and a thought-provoking perspective, one that certainly engaged the audience.

First, he presented the concept that an LLM is, at its core, next-word prediction: given a dataset, the model’s job is to predict what the next word should be. For example, in LLMs used for translation, once you take the preceding words into account there are only limited options – perhaps a maximum of five – that are semantically similar, so it becomes a matter of choosing the most likely one given the words that came before. We are used to seeing predictions on the internet, so this is nothing new: when you shop on Amazon or watch a movie on Netflix, both systems will offer you a prediction of the next product to consider or what to watch next.
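To make the next-word idea concrete, here is a minimal sketch in Python that predicts the next word from simple bigram counts. The tiny corpus, function name, and frequency-table approach are illustrative assumptions; real LLMs use neural networks trained on vastly larger datasets, but the underlying task is the same: pick the most likely continuation given what came before.

```python
# Toy next-word prediction based on bigram counts (illustrative only).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    bigrams[prev_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`, if any."""
    candidates = bigrams.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))   # 'cat' – seen twice after 'the' in this corpus
print(predict_next("cat"))   # 'sat' – ties are broken by first occurrence
```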

If you put this in the context of generating computer code, the task becomes easier: code must follow a strict format, so the output is likely to be more accurate than when producing normal conversational language.

AI hallucinations

The biggest problem with LLMs is hallucinations. For those less familiar with the term in relation to AI and LLMs, a hallucination is when the model confidently returns something that is false or fabricated.

Dr. Martell gave a good example involving himself: he asked ChatGPT “who is Craig Martell”, and it returned a response stating that Craig Martell was the character played by Stephen Baldwin in The Usual Suspects. This is incorrect, as a few moments with a non-AI search engine should convince you. But what happens when you can’t check the output, or don’t have the mindset to do so? We end up accepting an answer “from the artificial intelligence” as correct, regardless of the facts. Dr. Martell described those who don’t check the output as lazy; while this may seem a little strong, it drives home the point that all output should be validated using another source or method.

Related: Black Hat 2023: ‘Teenage’ AI Not Enough for Cyber Threat Intelligence

The big question posed by the presentation was: “How many hallucinations are acceptable, and under what circumstances?” For a battlefield decision that may involve life-or-death situations, “zero hallucinations” might be the right answer, while for an English-to-German translation, 20% might be acceptable. The acceptable rate really is the big question.

Humans are still necessary (for now)

Given the current state of LLMs, it has been suggested that a human should be involved in validation, meaning that one or more models should not be used to validate the output of another.

Human validation uses more than logic: if you see a picture of a cat and a system tells you it’s a dog, you know it’s wrong. A newborn baby can recognize faces and understand hunger, abilities that are beyond the logic available in today’s world of artificial intelligence. The presentation highlighted that not all humans will understand that the output of “artificial intelligence” needs to be questioned; some will accept it as an authoritative answer, which can cause significant problems depending on the scenario in which it is accepted.
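As a concrete illustration of keeping a human in the loop, here is a minimal, hypothetical Python sketch. The function names and the stubbed-out query_llm call are assumptions made for illustration, not any real product’s API; the point is simply that the model’s answer is never accepted automatically, and one model is not used to rubber-stamp another.

```python
def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to a real LLM; replace with an actual client.
    return "Craig Martell is the character played by Stephen Baldwin in The Usual Suspects."

def human_review(prompt: str, answer: str) -> bool:
    # A person checks the answer against an independent source before it is accepted.
    print(f"Prompt: {prompt}")
    print(f"Answer: {answer}")
    return input("Accept this answer? [y/N] ").strip().lower() == "y"

def answer_with_oversight(prompt: str):
    answer = query_llm(prompt)
    # The output is only used if a human validates it.
    return answer if human_review(prompt, answer) else None

if __name__ == "__main__":
    print(answer_with_oversight("Who is Craig Martell?"))
```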

In summary, the presentation concluded with what many of us will have already deduced: the technology has been released to the public and is seen as an authority, when in reality it is in its infancy and still has a lot to learn. That’s why Dr. Martell challenged the audience to “go break those things, tell us how they break, tell us the dangers, I really need to know.” If you are interested in learning how to provide feedback, the Department of Defense has created a blueprint that can be found at www.dds.mil/taskforcelima.

Before you go: Black Hat 2023: Cyberwar fire and forget me not
