The impact of Generative Artificial Intelligence on cybersecurity

0

On July 27th, a roundtable discussion hosted by HackerOne brought together hackers and industry experts to explore the transformative role of generative artificial intelligence (AI) in the realm of cybersecurity. The participants delved into various aspects, including potential novel attack surfaces and the considerations organizations should bear in mind when utilizing large language models (LLMs).

Risks of hasty adoption

Joseph “rez0” Thacker, a professional hacker and senior offensive security engineer at AppOmni, highlighted the importance of careful implementation when using generative AI like ChatGPT to write code. He cautioned that adopting such technology hastily might inadvertently lead to the creation of vulnerabilities. Since ChatGPT lacks the contextual understanding of potential vulnerabilities in the code it produces, organizations must be cautious. For instance, it may not be equipped to generate SQL queries immune to SQL injection, which is a commonly exploited vulnerability. Therefore, organizations should be aware of such limitations before employing generative AI in critical areas.

Risks for companies rushing to embrace Generative AI

Two primary risks emerged for companies seeking to rapidly deploy generative AI products:

Exposing LLMs to external users with access to internal data.

Integrating various tools and plugins with AI features that could access untrusted data, even if it resides within the organization.

Exploiting Generative AI: A Closer Look

Participants acknowledged that generative AI models like GPT don’t create entirely new content but rather reconfigure existing data based on their training. Consequently, individuals with limited technical skills might leverage their own GPT models to learn about code or even build ransomware based on existing threats.

Addressing prompt injection: A cybersecurity challenge

One avenue for cyberattacks on LLM-based chatbots is prompt injection, which manipulates the prompt functions used to call the AI. By seizing control of the context during an LLM function call, attackers can exfiltrate data or manipulate AI behavior for malicious purposes. Even installing prompt packages on computers using ChatGPT can pose risks as the AI may hallucinate fake library names, which threat actors can exploit through reverse engineering.

Deepfakes, Custom Cryptors, and Other Threats

The accessibility of generative AI has lowered the barrier for attackers employing social engineering or DeepFake technology. While cybercriminals may exploit this, it can also be a valuable tool for red teams conducting security assessments. Furthermore, the nature of LLM databases makes it challenging to scrub personally identifying information, potentially exposing sensitive data to unauthorized parties.

The role of Generative AI in cybercrime and defense

Some debate arose over whether generative AI presents entirely new threats or simply repackages existing ones. Experts emphasized the need to distinguish between genuine education and criminal intent when discussing the implications of AI.

Looking ahead: Securing Generative AI

To ensure secure integration of generative AI, experts recommended treating AI models like end users and establishing strict guardrails on data access. Threat modeling and authorization enforcement between end users and backend resources were also identified as crucial practices in securing LLMs.T

The rise of generative AI has introduced both potential risks and benefits to the field of cybersecurity. By approaching this technology thoughtfully and adopting robust security measures, organizations can fully harness the advantages of generative AI while safeguarding against potential threats.

LEAVE A REPLY

Please enter your comment!
Please enter your name here