Open Issues Needing Help
Implement Provider Abstraction Layer for Multi-LLM Support (opened about 2 months ago)
AI Summary: This issue proposes a foundational abstraction layer to support multiple LLM and embedding providers (e.g., OpenAI, Groq, Mistral), moving away from the current tightly coupled Ollama setup. Key features include automatic fallback between providers, a provider manager, a configuration system, and legal/regional compliance checks, all while maintaining backward compatibility.
Complexity: 4/5
Labels: enhancement, good first issue, architecture, provider-support
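The architecture the summary describes (a common provider interface, a manager, and automatic fallback) could be sketched roughly as below. This is a minimal illustration, not the issue's actual design; all class and method names (`LLMProvider`, `ProviderManager`, `complete`) are hypothetical, and the stub providers stand in for real OpenAI/Groq/Mistral/Ollama clients.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface each backend would implement (name is illustrative)."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class ProviderManager:
    """Tries providers in configured order, falling back when one fails."""

    def __init__(self, providers: list[LLMProvider]):
        self.providers = providers

    def complete(self, prompt: str) -> str:
        errors: list[Exception] = []
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except Exception as exc:  # a real implementation would catch narrower errors
                errors.append(exc)
        raise RuntimeError(f"all providers failed: {errors}")


# Stub backends for illustration only -- real ones would wrap provider SDKs.
class FlakyProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        raise ConnectionError("provider unavailable")


class EchoProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


manager = ProviderManager([FlakyProvider(), EchoProvider()])
print(manager.complete("hello"))  # falls back to the second provider
```

Backward compatibility with the existing Ollama setup could then amount to registering the current Ollama client as the first (default) provider in the manager's list.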
Repository: a local, reliable doc Q&A bot that cites its sources and refuses to answer when unsure. Powered by Ollama, RAG, and confidence scoring.
Language: Python · Tags: #ollama #rag