
AI Security

Defending LLM systems in production

Threat models, prompt injection, data exfiltration: what attacks on LLM systems look like, and how to stop them.

Prompt injection, indirect injection, jailbreaks, data exfiltration, supply-chain attacks on models: the threat surface of LLM applications is new and still growing.

Security · LLM · Threat models
Format: eBook · Paperback
Pages: 260
Published: 2025
Language: EN · FR

Inside the book

  1. Threat models for LLM systems
  2. Prompt injection in the wild
  3. Indirect injection and tool poisoning
  4. Jailbreaks and red-teaming
  5. Data exfiltration vectors
  6. Output sanitization patterns
  7. Supply-chain risks (models, embeddings)
  8. Detection and observability