
l+f: GPT-4 reads descriptions of security vulnerabilities and exploits them

Dennis Schirrmacher

(Image: LuckyStep/Shutterstock.com)

Based on publicly available information, the GPT-4 language model can autonomously exploit software vulnerabilities.


Among other things, security researchers fed the large language model (LLM) GPT-4 descriptions from security advisories. In the majority of cases, it was then able to successfully exploit the vulnerabilities described.

In their paper [1], the researchers state that they fed LLMs information on security vulnerabilities. Such details are publicly available in CVE descriptions, which are published to give admins and security researchers a better understanding of specific attacks so they can secure systems effectively.
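To illustrate what "feeding a CVE description to an LLM" can look like in practice, here is a minimal sketch using the OpenAI Python SDK. It is not the researchers' actual agent framework (which, according to the paper, also gives the model tools to act on its output); the model name and the placeholder CVE text are assumptions for illustration only.

# Illustrative sketch: hand a CVE-style description to GPT-4 via the OpenAI
# Python SDK (openai >= 1.0). This only shows the "provide the advisory text
# to the model" step; it does not execute anything.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder advisory text, not a real CVE entry
cve_description = (
    "CVE-XXXX-XXXXX: Example web application 1.2.3 allows unauthenticated "
    "SQL injection via the 'id' parameter of /item.php."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a security research assistant."},
        {"role": "user", "content": (
            "Explain how the following vulnerability could be exploited and "
            "what defenders should do about it:\n\n" + cve_description
        )},
    ],
)

print(response.choices[0].message.content)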

The researchers state that GPT-4 successfully exploited the vulnerabilities in 87 percent of cases. For other LLMs such as GPT-3.5, the success rate was 0 percent. Without the information from a CVE description, GPT-4 is said to have been successful in only 7 percent of cases.

This is a further step towards automated cyberattacks. In the recent past, security researchers have already had GPT-3 write convincing phishing emails [2].

More info

lost+found [3]

The heise Security section for short and bizarre IT security news.

All l+f reports in the overview [4]

(des [5])


URL of this article:
https://www.heise.de/-9690664

Links in this article:
[1] https://arxiv.org/abs/2404.08144
[2] https://www.heise.de/news/Sicherheitsforscher-lassen-KI-GPT-3-ueberzeugende-Phishing-Mails-schreiben-7461383.html?from-en=1
[3] https://www.heise.de/thema/lost%2Bfound
[4] https://www.heise.de/thema/lost%2Bfound
[5] mailto:des@heise.de