This is a red-team tactic, that attempts to poison a LLM’s prompt, allowing it to do anything the attacker asks for. Techniques Authority Prompt Injection Repeating Word Prompt Injection LLM Jailbreaking AI Captioning Attack Hypothetical Scenario Prompt Injection Fill in the Blank Prompt Injection Payload Splitting Prompt Injection Frog-Boil Attack Mapping Attack Token Smuggling ASCII Smuggling