Prompt-injection-detector
Other tools
-
7031Released 4h agoFree + from $49/mo
Nabil A.๐ ๏ธ 1 toolNov 4, 2025We built ModelRed because most teams don't test AI products for security vulnerabilities until something breaks. ModelRed continuously tests LLMs for prompt injections, data leaks, and exploits. Works with any provider and integrates into CI/CD. Happy to answer questions about AI security!

