2 Open Issues Need Help Last updated: Jul 18, 2025

Open Issues Need Help

View All on GitHub
Security LLM Security

AI Summary: Develop an MCP tool within the VeritasAI server.py to detect and prevent basic (level 1) prompt injection attacks. This involves analyzing user inputs before they reach the LLM to identify obvious injection attempts based on known patterns and techniques. The solution should integrate seamlessly with the existing VeritasAI framework and leverage the Model Context Protocol.

Complexity: 3/5
help wanted good first issue

Native LLM Security

Python
Security LLM Security

AI Summary: Test the VeritasAI LLM security system by instructing Claude to write files to directories with different permission levels (chmod 777 and chmod 555). Analyze the system's response to these attempts and brainstorm potential methods to bypass such security controls, considering the provided access control information.

Complexity: 4/5
help wanted good first issue question

Native LLM Security

Python