Adding guardrails to large language models.

ai foundation-model gpt-3 llm openai
1 open issue needs help · Last updated: Jul 8, 2025


AI Summary: Enhance the Guardrails framework to support validation of the 'reasoning_content' field in LLM responses. This involves modifying the Runner/AsyncRunner to handle reasoning content and adding a 'reasoning_content' field to the LLMResponse class. The goal is to enable validation of both the main response and the reasoning provided by the LLM, preventing issues with streaming responses and filtering unwanted content within the reasoning.

Complexity: 4/5
enhancement help wanted
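The summary above proposes adding a `reasoning_content` field to Guardrails' `LLMResponse` and validating it alongside the main output. A minimal sketch of that idea follows; the class and the `validate_reasoning` filter here are simplified stand-ins, not the actual Guardrails API (the real `LLMResponse` and Runner live inside the guardrails-ai package and differ in shape).

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LLMResponse:
    """Simplified stand-in for Guardrails' LLMResponse.

    The real class carries more fields; `reasoning_content` is the
    proposed addition for models that emit chain-of-thought output.
    """
    output: str
    reasoning_content: Optional[str] = None


def validate_reasoning(resp: LLMResponse, banned: list[str]) -> LLMResponse:
    """Illustrative validator: mask banned substrings in the reasoning.

    A real Guardrails validator would run in the Runner/AsyncRunner
    pipeline and could also operate on streamed chunks.
    """
    if resp.reasoning_content:
        cleaned = resp.reasoning_content
        for word in banned:
            cleaned = cleaned.replace(word, "[filtered]")
        resp.reasoning_content = cleaned
    return resp


resp = LLMResponse(
    output="The answer is 42.",
    reasoning_content="secret step: consult the secret table",
)
resp = validate_reasoning(resp, ["secret"])
print(resp.reasoning_content)  # [filtered] step: consult the [filtered] table
```

With this shape in place, the Runner would validate `reasoning_content` with the same validator machinery it already applies to `output`, which is what makes filtering unwanted reasoning (and handling it chunk-by-chunk during streaming) tractable.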


Python