Open Issues Needing Help
AI Summary: The user asks how the plugin implements prompt caching when OpenRouter is used as the provider, and in particular whether the plugin enables prompt caching by default, since caching is important for reducing token costs with certain hosted models.
The Copilot in Obsidian
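The issue does not show how the plugin builds its requests, but OpenRouter's OpenAI-compatible chat API lets a client opt into Anthropic-style prompt caching by attaching a `cache_control` breakpoint to a content part. A minimal sketch, where the model id and helper name are illustrative assumptions:

```typescript
// Sketch only: opting into Anthropic-style prompt caching through
// OpenRouter's OpenAI-compatible chat API. The plugin's real request
// builder is not shown in the issue; the helper and model id below are
// illustrative assumptions.
type ContentPart = {
  type: "text";
  text: string;
  // OpenRouter forwards this breakpoint to providers that support caching.
  cache_control?: { type: "ephemeral" };
};

function buildCachedRequest(systemPrompt: string, userMessage: string) {
  return {
    model: "anthropic/claude-3.5-sonnet", // illustrative model id
    messages: [
      {
        role: "system",
        // Mark the large, stable prefix as cacheable so repeated requests
        // reuse the cached prefix instead of re-billing full input tokens.
        content: [
          {
            type: "text",
            text: systemPrompt,
            cache_control: { type: "ephemeral" },
          } as ContentPart,
        ],
      },
      { role: "user", content: userMessage },
    ],
  };
}
```

Only the long, stable prefix (here, the system prompt) carries the breakpoint; the per-request user message stays uncached.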
AI Summary: The user requests that the "thinking" token display, currently supported for OpenAI-format providers such as LM Studio and Ollama, be extended to llama.cpp/llama-swap providers as well. This would improve the user experience by giving visual feedback during model inference.
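The issue does not show the plugin's rendering code, but reasoning models served by llama.cpp commonly wrap their chain of thought in `<think>...</think>` tags, so a provider integration can split that span from the final answer for separate display. A minimal sketch, with an illustrative function name:

```typescript
// Sketch only: split the <think>...</think> span that DeepSeek-style
// reasoning models emit through llama.cpp from the final answer, so a UI
// can render "thinking" tokens separately. The function name is an
// illustrative assumption, not the plugin's API.
function splitThinking(text: string): { thinking: string; answer: string } {
  const match = text.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) {
    // No thinking span: everything is the answer.
    return { thinking: "", answer: text };
  }
  return {
    thinking: match[1].trim(),
    answer: text.replace(match[0], "").trim(),
  };
}
```

A streaming implementation would instead track whether the cursor is currently inside the tag, but the post-hoc split above captures the same idea.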
AI Summary: This issue proposes adding Kilo Gateway as a new LLM provider for Obsidian Copilot. The user, a contributor to the Kilo project, offers to implement the integration themselves. Kilo Gateway is described as a backend for OpenRouter, where the user already has credits.