Apr 29, 2025 — Literature Review: Bypassing Safety Guardrails in LLMs Using Humor
Apr 29, 2025 — Literature Review: Siege: Autonomous Multi-Turn Jailbreaking of Large Language Models with Tree Search
Apr 29, 2025 — Literature Review: Sugar-Coated Poison: Benign Generation Unlocks LLM Jailbreaking