Open Issues Need Help
Support the validation for reasoning · opened about 2 months ago
AI Summary: Enhance the Guardrails framework to support validation of the 'reasoning_content' field in LLM responses. This involves adding a 'reasoning_content' field to the LLMResponse class and modifying the Runner/AsyncRunner to handle reasoning content, so that both the main response and the LLM's reasoning can be validated. That would avoid problems with streaming responses and allow unwanted content to be filtered out of the reasoning. A minimal sketch of this shape follows below.
Complexity: 4/5
Labels: enhancement, help wanted
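A minimal sketch of what the issue asks for, not Guardrails' actual API: `LLMResponse` and the validator signature below are simplified stand-ins, and the real classes in the library have different fields and richer validator semantics (on_fail policies, streaming chunks, etc.).

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical stand-in for Guardrails' LLMResponse; the real class
# lives in the guardrails package and its fields may differ.
@dataclass
class LLMResponse:
    output: str
    # The addition proposed by the issue: carry the model's reasoning
    # so it can be validated alongside the main output.
    reasoning_content: Optional[str] = None


# Simplified validator: returns an error message, or None if the text passes.
Validator = Callable[[str], Optional[str]]


def validate_response(
    response: LLMResponse,
    output_validators: List[Validator],
    reasoning_validators: List[Validator],
) -> List[str]:
    """Run validators over both the main output and the reasoning content."""
    errors: List[str] = []
    for v in output_validators:
        if (err := v(response.output)) is not None:
            errors.append(f"output: {err}")
    # Only validate reasoning when the provider actually returned it.
    if response.reasoning_content is not None:
        for v in reasoning_validators:
            if (err := v(response.reasoning_content)) is not None:
                errors.append(f"reasoning: {err}")
    return errors


# Example: filter unwanted content out of the reasoning, per the issue.
def no_banned_words(text: str) -> Optional[str]:
    banned = {"secret", "password"}
    hits = [w for w in banned if w in text.lower()]
    return f"banned words found: {hits}" if hits else None


resp = LLMResponse(output="42", reasoning_content="The secret step was...")
print(validate_response(resp, [], [no_banned_words]))
# -> ["reasoning: banned words found: ['secret']"]
```

In this sketch the reasoning gets its own validator list rather than reusing the output validators, since the issue distinguishes validating the main response from filtering content inside the reasoning.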
Adding guardrails to large language models.
Python
#ai #foundation-model #gpt-3 #llm #openai