Guardrails for LLMs: detect and block hallucinated tool calls to improve safety and reliability.

agent-safety ai ai-safety hallucination-detection language-models llms machine-learning middleware prompt-engineering tool-calling toolformer
Go · 2 open issues need help · Last updated: Jun 29, 2025

AI Summary: Enhance the HallucinationGuard Go SDK to support context-aware and conditional policies. This involves extending the YAML policy format to include conditions based on user roles, session context, parameter values, and other metadata. The SDK's validation API needs to accept a context object, and the policy engine must be updated to evaluate these conditions using a safe expression evaluator. Comprehensive testing and updated documentation are also required.

Complexity: 4/5
Labels: enhancement, help wanted, question
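
A minimal sketch of what the conditional policies described in this issue could look like, under stated assumptions: Policy, Condition, and EvaluationContext are hypothetical names rather than the SDK's actual exported types, the yaml struct tags only suggest how an extended YAML entry might map onto Go types, and the simple field/equals matcher stands in for the safe expression evaluator the issue asks for (a real policy engine would likely hand condition strings to a sandboxed expression library instead).

```go
// Hypothetical sketch only: these names are illustrative, not the
// HallucinationGuard SDK's actual exported types.
package main

import "fmt"

// Condition is a single clause a policy can attach, e.g. "user_role
// must equal guest". A real engine would likely pass an expression
// string to a safe, sandboxed evaluator instead of this matcher.
type Condition struct {
	Field  string `yaml:"field"`  // e.g. "user_role" or a tool parameter name
	Equals string `yaml:"equals"` // value the field must match
}

// Policy mirrors what an extended YAML policy entry might look like:
//
//	- tool: delete_account
//	  action: block
//	  conditions:
//	    - field: user_role
//	      equals: guest
type Policy struct {
	Tool       string      `yaml:"tool"`
	Action     string      `yaml:"action"` // "allow" or "block"
	Conditions []Condition `yaml:"conditions"`
}

// EvaluationContext carries the request metadata the issue mentions:
// user role, session data, and tool-call parameter values.
type EvaluationContext struct {
	UserRole string
	Session  map[string]string
	Params   map[string]string
}

// lookup resolves a condition field against the context: the well-known
// "user_role" key first, then tool parameters, then session data.
func (c EvaluationContext) lookup(field string) string {
	if field == "user_role" {
		return c.UserRole
	}
	if v, ok := c.Params[field]; ok {
		return v
	}
	return c.Session[field]
}

// Applies reports whether every condition on the policy holds for ctx.
// A policy with no conditions applies unconditionally, which keeps the
// extension backward compatible with existing YAML policy files.
func (p Policy) Applies(ctx EvaluationContext) bool {
	for _, cond := range p.Conditions {
		if ctx.lookup(cond.Field) != cond.Equals {
			return false
		}
	}
	return true
}

func main() {
	policy := Policy{
		Tool:       "delete_account",
		Action:     "block",
		Conditions: []Condition{{Field: "user_role", Equals: "guest"}},
	}
	ctx := EvaluationContext{
		UserRole: "guest",
		Params:   map[string]string{"account_id": "42"},
	}
	if policy.Applies(ctx) {
		fmt.Printf("tool %q -> %s (conditions matched)\n", policy.Tool, policy.Action)
	}
}
```

In this sketch, a policy without a conditions block still applies unconditionally, which is one way to keep the extended YAML format backward compatible with existing policy files.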

AI Summary: Create unit tests for the HallucinationGuard Go SDK using the Go testing package. This involves writing test functions to verify the functionality of schema loading, policy enforcement, and validation result generation, covering various scenarios including successful and failed validations.

Complexity: 4/5
Labels: help wanted, good first issue
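
A sketch of a table-driven test using the standard testing package. It exercises the hypothetical Policy and EvaluationContext types from the sketch above; tests for the real SDK would instead call its exported schema-loading and validation functions and assert on the validation results they return.

```go
// Hypothetical sketch only: tests the illustrative types from the
// previous sketch, not the HallucinationGuard SDK's actual API.
package main

import "testing"

func TestPolicyApplies(t *testing.T) {
	blockGuests := Policy{
		Tool:       "delete_account",
		Action:     "block",
		Conditions: []Condition{{Field: "user_role", Equals: "guest"}},
	}

	tests := []struct {
		name string
		ctx  EvaluationContext
		want bool
	}{
		{
			name: "guest matches the block condition",
			ctx:  EvaluationContext{UserRole: "guest"},
			want: true,
		},
		{
			name: "admin does not match",
			ctx:  EvaluationContext{UserRole: "admin"},
			want: false,
		},
		{
			name: "parameter values are visible to conditions",
			ctx: EvaluationContext{
				UserRole: "guest",
				Params:   map[string]string{"account_id": "42"},
			},
			want: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := blockGuests.Applies(tt.ctx); got != tt.want {
				t.Errorf("Applies() = %v, want %v", got, tt.want)
			}
		})
	}
}
```

A table-driven layout keeps the extra scenarios the issue mentions (successful and failed validations) as cheap-to-add rows rather than separate test functions.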
