Open Issues Needing Help

LibContext: from GitHub docs to LLM-ready context, optimized for code generation.

AI Summary: Create a Docker image for the LibContext project so users can easily run the application in a containerized environment. This involves writing a Dockerfile that installs the dependencies, builds the application, and defines the run command.
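A minimal multi-stage Dockerfile sketch, assuming a Node.js build that emits `dist/index.js`; the base image, build script, and entry point are assumptions about the repo layout, not confirmed details:

```dockerfile
# Sketch only -- build script, entry point, and required env vars are assumptions.
FROM node:20-slim AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
COPY package.json ./
# API credentials (e.g., OPENAI_API_KEY, GITHUB_TOKEN) are supplied at run time.
ENTRYPOINT ["node", "dist/index.js"]
```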
AI Summary: The task requires expanding the project's documentation with detailed installation instructions for the MCP server across a wide range of IDEs and coding tools, covering installation methods such as Smithery, Docker, and Windows-specific setups. This means writing step-by-step guides for each tool and addressing platform-specific pitfalls.
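Most MCP-capable tools accept roughly the same JSON server entry, so the guides would repeat a block like this per tool. A hedged sketch; the package name and the `npx` launch method are assumptions:

```json
{
  "mcpServers": {
    "libcontext": {
      "command": "npx",
      "args": ["-y", "libcontext"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```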
AI Summary: The task requires adding a button to the project's README.md that lets users add the LibContext MCP server configuration to their Cursor IDE settings in one click. Since GitHub does not execute JavaScript in rendered READMEs, this is typically done with a badge image linking to a Cursor install deeplink that carries the JSON configuration, rather than an HTML button with a clipboard-copy script.
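A sketch of what such a button could look like, using Cursor's install deeplink scheme with the server config base64-encoded into the URL; the badge artwork and the `BASE64_ENCODED_JSON` payload are placeholders:

```markdown
[![Add to Cursor](https://img.shields.io/badge/Add%20to-Cursor-black)](cursor://anysphere.cursor-deeplink/mcp/install?name=libcontext&config=BASE64_ENCODED_JSON)
```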
AI Summary: This task requires deploying the LibContext MCP server, which provides AI-ready documentation from local sources, onto the Smithery platform. This involves configuring the server for Smithery deployment, adapting existing deployment scripts or creating new ones, and testing that the deployed server works correctly in the Smithery environment.
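Smithery deployments are usually described by a `smithery.yaml` at the repo root. A rough sketch for a stdio server; the field names are recalled from Smithery's docs and should be double-checked:

```yaml
# Hypothetical smithery.yaml sketch -- verify field names against Smithery's current docs.
startCommand:
  type: stdio
  configSchema:
    type: object
    required: [openaiApiKey]
    properties:
      openaiApiKey:
        type: string
        description: OpenAI API key used for extraction and embeddings
  commandFunction: |-
    (config) => ({
      command: "node",
      args: ["dist/index.js"],
      env: { OPENAI_API_KEY: config.openaiApiKey }
    })
```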
AI Summary: Enhance LibContext to manage documentation context on a per-project basis, allowing each project to specify the versions of libraries it uses. This involves modifying the CLI to handle project-specific configurations, potentially using a per-project MCP server setup, and ensuring efficient caching to avoid redundant processing of shared library versions.
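One possible shape for a per-project manifest, shown purely as a hypothetical (neither the filename nor the fields exist in LibContext today); indexed output for a given library version would be cached globally so projects that share a version reuse it:

```json
{
  "libraries": [
    { "repo": "vercel/next.js", "version": "v14.2.3" },
    { "repo": "prisma/prisma", "version": "5.15.0" }
  ],
  "cacheDir": "~/.libcontext/cache"
}
```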
AI Summary: Enhance the `libcontext add` command to display the number of tokens used, the estimated cost, and the time taken during the documentation indexing process. This involves integrating token counting functionality with the OpenAI API calls and adding output formatting to the CLI.
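A sketch of the accounting around an extraction call, using the `usage` field the OpenAI chat API returns; the model choice, pricing constants, and return shape are illustrative, not LibContext's actual code:

```typescript
import OpenAI from "openai";

// Hypothetical pricing table (USD per 1M tokens), e.g. for gpt-4o-mini.
const PRICE_PER_1M = { input: 0.15, output: 0.6 };

export async function extractWithStats(client: OpenAI, prompt: string) {
  const started = Date.now();
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
  });
  // The API reports token usage per request; sum these across the run for the CLI summary.
  const inTok = res.usage?.prompt_tokens ?? 0;
  const outTok = res.usage?.completion_tokens ?? 0;
  const cost = (inTok * PRICE_PER_1M.input + outTok * PRICE_PER_1M.output) / 1_000_000;
  return { text: res.choices[0].message.content, inTok, outTok, cost, ms: Date.now() - started };
}
```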
AI Summary: Implement incremental processing in LibContext to improve performance and reduce costs by using Git tree and file hashing to identify and skip unchanged files during indexing. This involves storing Git tree hashes, comparing file hashes between runs, and skipping LLM processing for unchanged content.
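Git's own blob hash makes a convenient change detector, since it can be recomputed locally and compared against `git ls-tree` output without keeping old file contents around. A sketch, with the persistence layer stubbed as a plain map:

```typescript
import { createHash } from "node:crypto";

// Compute the same SHA-1 Git assigns to a blob: sha1("blob <byteLen>\0<content>").
export function gitBlobSha(content: Buffer): string {
  const header = Buffer.from(`blob ${content.length}\0`);
  return createHash("sha1").update(Buffer.concat([header, content])).digest("hex");
}

// Skip LLM processing when the stored hash matches the current one.
// `store` stands in for a hypothetical persistence layer, not LibContext's actual API.
export function isUnchanged(store: Map<string, string>, path: string, content: Buffer): boolean {
  return store.get(path) === gitBlobSha(content);
}
```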
AI Summary: Implement robust error handling for GitHub API interactions within the LibContext project. This involves adding retry mechanisms to handle transient network issues and rate limiting, ensuring the application continues functioning even with temporary API disruptions.
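A hedged sketch of such a retry loop, honoring `Retry-After` on rate-limit responses and backing off exponentially on network failures; this is not LibContext's actual client code:

```typescript
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function githubFetch(url: string, token?: string, attempts = 5): Promise<Response> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url, {
        headers: token ? { Authorization: `Bearer ${token}` } : {},
      });
      // Rate limited: honor Retry-After when present, else back off exponentially.
      if (res.status === 403 || res.status === 429) {
        await sleep((Number(res.headers.get("retry-after")) || 2 ** i) * 1000);
        continue;
      }
      return res; // callers handle other non-2xx statuses
    } catch (err) {
      lastError = err; // transient network failure: back off and retry
      await sleep(2 ** i * 1000);
    }
  }
  throw lastError ?? new Error(`GitHub request failed after ${attempts} attempts: ${url}`);
}
```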
AI Summary: Implement fallback mechanisms and retry logic for LLM requests in LibContext. This involves allowing users to specify primary and secondary LLM providers (e.g., OpenAI and Azure) and adding retry functionality with exponential backoff to handle transient network issues or API rate limits.
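A sketch of provider fallback with per-provider exponential backoff. The provider list and env var names are assumptions, and the secondary endpoint is assumed to speak the plain OpenAI API (Azure in particular has its own auth scheme and client class):

```typescript
import OpenAI from "openai";

// Primary and secondary providers, tried in order. Both are assumed OpenAI-compatible.
const providers = [
  new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  new OpenAI({
    apiKey: process.env.SECONDARY_LLM_API_KEY,
    baseURL: process.env.SECONDARY_LLM_BASE_URL,
  }),
];

async function completeWithFallback(prompt: string): Promise<string> {
  for (const client of providers) {
    for (let attempt = 0; attempt < 3; attempt++) {
      try {
        const res = await client.chat.completions.create({
          model: "gpt-4o-mini",
          messages: [{ role: "user", content: prompt }],
        });
        return res.choices[0].message.content ?? "";
      } catch {
        // Exponential backoff before retrying this provider; then fall through to the next.
        await new Promise((r) => setTimeout(r, 2 ** attempt * 1000));
      }
    }
  }
  throw new Error("All LLM providers failed");
}
```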
AI Summary: This task requires extending LibContext to support local LLMs for documentation extraction. This involves detecting or configuring local LLM endpoints (e.g., via environment variables or a config file) and integrating with a proxy such as LiteLLM to communicate with these local models through OpenAI's API shape. The initial focus is on OpenAI-compatible APIs.
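Because the OpenAI SDK accepts a custom `baseURL`, pointing it at a local OpenAI-compatible server (Ollama, LM Studio, or a LiteLLM proxy) is mostly configuration. The env var names here are hypothetical:

```typescript
import OpenAI from "openai";

// Point the SDK at a local OpenAI-compatible endpoint; Ollama's default is shown.
const client = new OpenAI({
  baseURL: process.env.LIBCONTEXT_LLM_BASE_URL ?? "http://localhost:11434/v1",
  apiKey: process.env.LIBCONTEXT_LLM_API_KEY ?? "ollama", // local servers often ignore the key
});

const res = await client.chat.completions.create({
  model: process.env.LIBCONTEXT_LLM_MODEL ?? "llama3.1",
  messages: [{ role: "user", content: "Summarize this doc page..." }],
});
```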
AI Summary: Implement configurable embedding settings in LibContext, allowing users to specify the embedding model (e.g., `text-embedding-3-small`), dimensions, and other parameters. This requires adapting database migrations to handle the variable dimensions of the embeddings.
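A sketch of configurable embeddings: the `text-embedding-3-*` models accept a `dimensions` parameter, and the vector column created in migrations must match that size. Env var names are illustrative:

```typescript
import OpenAI from "openai";

// User-configurable model and dimensions; names here are assumptions.
const EMBED_MODEL = process.env.LIBCONTEXT_EMBED_MODEL ?? "text-embedding-3-small";
const EMBED_DIMS = Number(process.env.LIBCONTEXT_EMBED_DIMS ?? 1536);

const client = new OpenAI();
const res = await client.embeddings.create({
  model: EMBED_MODEL,
  dimensions: EMBED_DIMS, // text-embedding-3-* models support reduced dimensions
  input: ["snippet text to embed"],
});
// Migrations must create the vector column with the same dimension,
// e.g. a pgvector column declared as `embedding vector(EMBED_DIMS)`.
```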
AI Summary: Implement user-configurable settings for the LLM used in LibContext's documentation extraction process, including a custom system prompt and parameters such as the model name and temperature.
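A possible settings shape, shown as a hypothetical; none of these fields exist in LibContext yet:

```typescript
// Hypothetical extraction settings -- field names are illustrative.
export interface ExtractionSettings {
  model: string;         // e.g. "gpt-4o-mini"
  temperature: number;   // 0 for deterministic extraction
  systemPrompt?: string; // overrides the built-in extraction prompt when set
}

export const defaults: ExtractionSettings = {
  model: "gpt-4o-mini",
  temperature: 0,
};
```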
AI Summary: Modify the LibContext application to prioritize using a GitHub personal access token from the `GITHUB_TOKEN` environment variable, falling back to the CLI argument only if the environment variable is not set. This involves updating the code to check for the environment variable before processing CLI arguments and handling potential errors gracefully.
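A sketch of that resolution order; the CLI flag it falls back to is assumed for illustration:

```typescript
// Resolve the GitHub token: GITHUB_TOKEN env var first, CLI argument as fallback.
export function resolveGithubToken(cliToken?: string): string | undefined {
  const token = process.env.GITHUB_TOKEN ?? cliToken;
  if (!token) {
    // GitHub limits unauthenticated REST requests to 60 per hour.
    console.warn("No GitHub token found; unauthenticated requests are heavily rate-limited.");
  }
  return token;
}
```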
AI Summary: Implement a CLI progress bar during the `libcontext add` command, showing stages like file reading, snippet extraction, and embedding. This will improve user experience, especially for large repositories, by providing feedback on the indexing process.
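A sketch using the `cli-progress` npm package; the stage names and loop body are illustrative, not the actual indexing pipeline:

```typescript
import cliProgress from "cli-progress";

// Render per-file progress with a named stage in the bar's format string.
async function indexWithProgress(files: string[]) {
  const bar = new cliProgress.SingleBar(
    { format: "{stage} |{bar}| {value}/{total} files" },
    cliProgress.Presets.shades_classic,
  );
  bar.start(files.length, 0, { stage: "reading" });
  for (let i = 0; i < files.length; i++) {
    // read -> extract snippets -> embed would happen here
    bar.update(i + 1, { stage: "embedding" });
  }
  bar.stop();
}
```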