- Add input sanitization to prevent LLM prompt injection (escapes quotes and backslashes, replaces newlines); see the sketches after this list
- Add MaxLength(500) validation to the request DTO to prevent DoS via oversized inputs
- Add entity validation to filter malicious entity data out of LLM responses
- Add confidence validation to clamp values to the 0.0-1.0 range
- Make LLM model configurable via INTENT_CLASSIFICATION_MODEL env var
- Add 12 new security tests (bringing the total from 60 to 72)
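
A minimal sketch of the prompt sanitization step, in TypeScript; the helper name is hypothetical and may differ from the actual code:

```typescript
// Hypothetical helper mirroring the sanitization described above: escape
// backslashes and quotes, then collapse newlines, before the text is
// interpolated into the intent-classification prompt.
export function sanitizeForPrompt(input: string): string {
  return input
    .replace(/\\/g, '\\\\')   // escape backslashes first, so later escapes are not doubled
    .replace(/"/g, '\\"')     // escape double quotes
    .replace(/'/g, "\\'")     // escape single quotes
    .replace(/[\r\n]+/g, ' ') // replace newlines with a single space
    .trim();
}
```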
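A sketch of the DTO length limit and response validation, assuming a NestJS-style class-validator DTO; the class name, entity shape, and allow-list are illustrative assumptions, not the actual code:

```typescript
import { IsString, IsNotEmpty, MaxLength } from 'class-validator';

// Request DTO: MaxLength(500) caps input size to mitigate DoS via huge prompts.
export class ClassifyIntentDto {
  @IsString()
  @IsNotEmpty()
  @MaxLength(500)
  text!: string;
}

// Hypothetical allow-list filter for entities coming back from the LLM:
// drop anything with an unknown type or an implausibly long value.
const ALLOWED_ENTITY_TYPES = new Set(['date', 'amount', 'category']);

export interface ExtractedEntity {
  type: string;
  value: string;
}

export function filterEntities(entities: ExtractedEntity[]): ExtractedEntity[] {
  return entities.filter(
    (e) =>
      ALLOWED_ENTITY_TYPES.has(e.type) &&
      typeof e.value === 'string' &&
      e.value.length <= 200,
  );
}

// Clamp the model-reported confidence into [0.0, 1.0]; non-finite input maps to 0.
export function clampConfidence(value: number): number {
  if (!Number.isFinite(value)) return 0;
  return Math.min(1, Math.max(0, value));
}
```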
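And a sketch of the configurable model lookup; only the INTENT_CLASSIFICATION_MODEL variable name comes from the change itself, the fallback value is a placeholder:

```typescript
// Read the model from the environment so it can be swapped per deployment
// without a code change; the default here is a hypothetical placeholder.
const DEFAULT_INTENT_MODEL = 'default-intent-model';

export function getIntentClassificationModel(): string {
  return process.env.INTENT_CLASSIFICATION_MODEL?.trim() || DEFAULT_INTENT_MODEL;
}
```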
Security fixes identified by code review:
- Mitigated: Prompt injection via unescaped user input
- Mitigated: Unvalidated entity data from the LLM response
- Mitigated: Missing input length validation
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>