OpenAI's ChatGPT Could Help Software Supply-Chain Attackers

The software development community faces new security challenges as researchers highlight the risks posed by the "hallucination" tendency of ChatGPT, a prominent large language model (LLM). Security company Vulcan, represented by researchers Ortal Keizman and Yair Divinsky, has cautioned that attackers could exploit this behavior to introduce malicious packages into development environments.

The researchers conducted an investigation into how ChatGPT could serve as a vector for software supply-chain attacks. They discovered instances where ChatGPT generated URLs, references, code libraries, and functions that do not actually exist—a phenomenon referred to as "hallucinations."

The concern is that ChatGPT can fabricate code libraries (packages) that do not exist. Attackers could exploit these hallucinations to distribute malicious packages without resorting to conventional techniques such as typosquatting or masquerading, which defenders can usually detect. Vulcan suggests that if an attacker publishes a real package under a hallucinated name, victims could be deceived into downloading and using the malicious package without realizing it.
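To see why allow-list-based typosquat detection misses hallucinated names, consider the minimal sketch below (an illustration, not part of Vulcan's research; the allow-list, the candidate names, and the 0.8 similarity threshold are all assumptions). A typosquat such as "reqeusts" scores high against a real package name, while a hallucinated name typically resembles nothing on the list and slips through.

```python
from difflib import SequenceMatcher

# Toy allow-list of well-known package names (illustrative, not exhaustive).
KNOWN_PACKAGES = ["requests", "numpy", "pandas", "flask"]

def closest_known(candidate: str) -> tuple[str, float]:
    """Return the known package most similar to `candidate` plus the similarity ratio."""
    best = max(KNOWN_PACKAGES,
               key=lambda known: SequenceMatcher(None, candidate, known).ratio())
    return best, SequenceMatcher(None, candidate, best).ratio()

# "reqeusts" is a classic typosquat; "flask-gpt-utils" is a hypothetical hallucinated name.
for name in ["reqeusts", "flask-gpt-utils"]:
    match, score = closest_known(name)
    flagged = score > 0.8  # crude threshold chosen for illustration
    verdict = "flagged as possible typosquat" if flagged else "not flagged"
    print(f"{name!r}: closest known package {match!r} (ratio {score:.2f}) -> {verdict}")
```

This is why name-similarity checks alone are not enough for hallucinated packages; checks against the registry itself, as sketched later in this article, are also needed.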

These findings call for heightened vigilance within the software development community to guard against vulnerabilities stemming from the hallucination tendency of large language models like ChatGPT.

Researchers have dubbed the technique "AI package hallucination," and it highlights the danger of asking ChatGPT, an AI chatbot, to recommend software packages for a given problem. They found that some of ChatGPT's responses include hallucinations, complete with false links, pointing to packages that do not exist.

This creates an opening for attackers. If ChatGPT recommends packages that are not published in any legitimate repository, a malicious actor can register a malicious package under the hallucinated name. When another user later asks a similar question, ChatGPT may recommend the now-existing malicious package, and the user may install it without suspicion.
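As one defensive measure (a minimal sketch, not something Vulcan prescribes), a developer can query PyPI's public JSON API before installing an LLM-suggested package to confirm that the name is actually published and to see when its first release appeared; a brand-new package with a helpful-sounding name deserves extra scrutiny. The package name in the example is hypothetical.

```python
import json
import urllib.error
import urllib.request

PYPI_JSON_URL = "https://pypi.org/pypi/{name}/json"

def inspect_pypi_package(name: str) -> None:
    """Report whether `name` exists on PyPI and when its first release was uploaded."""
    try:
        with urllib.request.urlopen(PYPI_JSON_URL.format(name=name), timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as exc:
        if exc.code == 404:
            # No such project: either a hallucinated name, or one an attacker
            # has not (yet) registered.
            print(f"{name}: not found on PyPI, possible hallucination")
            return
        raise

    # Gather upload timestamps across all releases to find the oldest one.
    uploads = [
        file_info["upload_time"]
        for files in data["releases"].values()
        for file_info in files
    ]
    first_seen = min(uploads) if uploads else "unknown"
    print(f"{name}: exists on PyPI, first release uploaded {first_seen}")

if __name__ == "__main__":
    # Hypothetical name used for illustration only.
    inspect_pypi_package("example-hallucinated-package")
```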

To assess the extent of the issue, the researchers conducted tests using popular questions from platforms like StackOverflow, focusing on programming languages such as Python and Node.js. The results revealed concerning figures: out of 201 questions related to Node.js, 40 answers referenced over 50 non-existent packages, while 227 questions about Python drew answers pointing to more than 100 non-existent packages.
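The sketch below shows the kind of existence check this methodology implies for the Node.js side, using the public npm registry endpoint; the list of suggested names is a hypothetical stand-in for packages extracted from a ChatGPT answer, not the researchers' actual data or tooling.

```python
import urllib.error
import urllib.request

NPM_REGISTRY_URL = "https://registry.npmjs.org/{name}"

def npm_package_exists(name: str) -> bool:
    """Return True if the npm registry knows about `name`, False on a 404."""
    try:
        with urllib.request.urlopen(NPM_REGISTRY_URL.format(name=name), timeout=10):
            return True
    except urllib.error.HTTPError as exc:
        if exc.code == 404:
            return False
        raise

# Hypothetical list standing in for names extracted from a ChatGPT answer.
suggested = ["express", "left-pad", "totally-made-up-helper-lib"]
missing = [name for name in suggested if not npm_package_exists(name)]
print(f"{len(missing)} of {len(suggested)} suggested packages are unpublished: {missing}")
```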

These findings underscore the need to address the vulnerabilities that AI package hallucination introduces into chatbot systems like ChatGPT. Safeguarding against false recommendations and ensuring the integrity of package repositories are crucial to maintaining the security and reliability of software development ecosystems.
