A. K. Ngaleu
AI Security
Defending LLM systems in production
“Threat models, prompt injection, data exfiltration: what attacks on LLM systems look like, and how to stop them.”
Prompt injection, indirect injection, jailbreaks, data exfiltration, supply-chain attacks on models: the threat surface of LLM applications is new and still growing.
Security · LLM · Threat models
Format: eBook · Paperback
Pages: 260
Published: 2025
Language: EN · FR
Inside the book
- 01 · Threat models for LLM systems
- 02 · Prompt injection in the wild
- 03 · Indirect injection and tool poisoning
- 04 · Jailbreaks and red-teaming
- 05 · Data exfiltration vectors
- 06 · Output sanitization patterns
- 07 · Supply-chain risks (models, embeddings)
- 08 · Detection and observability