Apr 21, 2024 1 min read

Introduction to Prompt Injection

Introduction to Prompt Injection

Join us for an engaging online session as we delve into the intriguing world of prompt injection—a critical vulnerability affecting Large Language Models (LLMs). Ranked as LLM06 in the OWASP LLM Top 10 list, prompt injection poses a significant risk due to LLMs' susceptibility to external input manipulation. Discover how skillfully crafted inputs can trick LLMs into executing unintended and unwanted actions.

Disha Mark III - Prompt Injection
Sunday , 21 Apr  •  7:30 – 8:30 pm (GMT+5:30)
Google Meet joining info
Meet link: https://meet.google.com/wre-wiao-ewy

Slides:

https://www.canva.com/design/DAGDDNYMjQs/3zvstOWgNkbQL4ai3gynIw/edit?utm_content=DAGDDNYMjQs&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton

Resources:

LLM Hacking knowledge base.xlsx
Slide LInk <a href=“https://www.canva.com/design/DAGDDNYMjQs/3zvstOWgNkbQL4ai3gynIw/edit?utm_content=DAGDDNYMjQs&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton”>https://www.canva.com/design/DAGDDNYMjQs/3zvstOWgNkbQL4ai3gynIw/edit?utm_content=DAGDDNYMjQs&utm_campaign=designshar...
Anugrah S R
Anugrah S R
I am a cyber security professional, bug bounty hunter, and blogger. I have a passion for all things related to cyber security and enjoy finding and exploiting vulnerabilities in various applications.
Great! You’ve successfully signed up.
Welcome back! You've successfully signed in.
You've successfully subscribed to Anugrah SR | #HackLearnDaily.
Your link has expired.
Success! Check your email for magic link to sign-in.
Success! Your billing info has been updated.
Your billing was not updated.