Open Issues Needing Help
Security • LLM Security
[Feature] Prompt injection analysis • about 2 months ago
AI Summary: Develop an MCP tool within the VeritasAI server.py to detect and prevent basic (level 1) prompt injection attacks. This involves analyzing user inputs before they reach the LLM and flagging obvious injection attempts based on known patterns and techniques. The solution should integrate seamlessly with the existing VeritasAI framework and leverage the Model Context Protocol; a rough detector sketch follows below.
Complexity: 3/5
help wanted • good first issue
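A minimal sketch of what such a level-1 detector could look like, assuming (the issue does not confirm this) that VeritasAI's server.py is built on the MCP Python SDK's FastMCP interface. The tool name, pattern list, and return shape are illustrative placeholders, not taken from the actual codebase.

```python
# Hypothetical sketch of a level-1 prompt-injection check exposed as an MCP tool.
# Names (detect_prompt_injection, INJECTION_PATTERNS) are illustrative only; the
# FastMCP wiring assumes the official MCP Python SDK is in use.
import re
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("veritas-injection-check")  # server name is a placeholder

# Obvious, well-known injection phrasings; a real list would be broader and
# maintained alongside test cases.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior|above) (instructions|prompts)",
    r"disregard (the )?(system|developer) (prompt|message)",
    r"you are now (in )?(developer|dan|jailbreak) mode",
    r"reveal (your )?(system prompt|hidden instructions)",
    r"pretend (that )?(you have no|there are no) (rules|restrictions)",
]
_COMPILED = [re.compile(p, re.IGNORECASE) for p in INJECTION_PATTERNS]

@mcp.tool()
def detect_prompt_injection(user_input: str) -> dict:
    """Flag obvious (level 1) prompt-injection attempts before the input reaches the LLM."""
    hits = [p.pattern for p in _COMPILED if p.search(user_input)]
    return {
        "flagged": bool(hits),
        "matched_patterns": hits,
        "action": "block" if hits else "allow",
    }

if __name__ == "__main__":
    mcp.run()
```

A regex pre-filter like this only covers the obvious, known phrasings the issue scopes as "level 1"; indirect or encoded injection attempts would need a separate semantic or model-based pass.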
Security • LLM Security
Test Claude code in chmod 777 and chmod 555 • about 2 months ago
AI Summary: Test the VeritasAI LLM security system by instructing Claude to write files to directories with different permission levels (chmod 777 and chmod 555). Analyze the system's response to these attempts and brainstorm potential methods to bypass such security controls, considering the provided access control information; a directory-setup sketch follows below.
Complexity: 4/5
help wanted • good first issue • question
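A rough harness for the directory side of this test, assuming the baseline to compare against is plain OS-level write behavior. The helper names are placeholders, and actually driving Claude (or the VeritasAI controls) against these paths is omitted because that interface is not described in the issue. Note that a process running as root can write into a 555 directory regardless, so the harness should run unprivileged.

```python
# Hypothetical setup for the chmod 777 / chmod 555 write test. It only creates
# the target directories and records what a plain OS-level write attempt does;
# exercising Claude or the VeritasAI controls against these paths is left to
# the actual test.
import os
import tempfile

def try_write(directory: str) -> str:
    """Attempt to create a file in `directory` and report the outcome."""
    target = os.path.join(directory, "probe.txt")
    try:
        with open(target, "w") as fh:
            fh.write("write probe\n")
        return "write succeeded"
    except PermissionError as exc:
        return f"write denied: {exc}"

def main() -> None:
    base = tempfile.mkdtemp(prefix="veritas-perm-test-")
    cases = {
        "world-writable (777)": 0o777,
        "read/execute only (555)": 0o555,
    }
    for label, mode in cases.items():
        path = os.path.join(base, f"dir_{mode:o}")
        os.mkdir(path)
        os.chmod(path, mode)
        print(f"{label} at {path}: {try_write(path)}")

if __name__ == "__main__":
    main()
```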