From GitHub docs to LLM-ready context, optimized for code generation.

Open Issues Need Help

14 open issues · TypeScript · Last updated: Jul 23, 2025

feat: Docker image (about 1 month ago)

AI Summary: Create a Docker image for the LibContext project, allowing users to easily run the application in a containerized environment. This involves creating a Dockerfile that sets up the necessary dependencies, installs the application, and defines the execution command.

Complexity: 3/5
Labels: good first issue, help wanted

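A minimal sketch of such a Dockerfile, assuming a standard Node.js TypeScript build; the build script and the dist/index.js entry point are assumptions, not the project's actual layout:

```dockerfile
# Build stage: install dependencies and compile the TypeScript sources
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: ship only what is needed to run the server
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
COPY package.json ./
ENTRYPOINT ["node", "dist/index.js"]
```
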
AI Summary: The task requires expanding the project's documentation with detailed installation instructions for the MCP server across a wide range of IDEs and coding tools, covering installation methods such as Smithery, Docker, and Windows-specific setups. This means writing a comprehensive, step-by-step guide for each tool and calling out platform-specific pitfalls; a sketch of the configuration most guides would share follows below.

Complexity: 4/5
Labels: good first issue, help wanted

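Most of these guides would center on a JSON snippet along these lines for MCP clients such as Cursor; the package name and arguments here are illustrative assumptions, not the project's published configuration:

```json
{
  "mcpServers": {
    "libcontext": {
      "command": "npx",
      "args": ["-y", "libcontext"]
    }
  }
}
```
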
AI Summary: The task requires adding a button to the project's README.md that walks the user through adding the LibContext MCP server configuration to their Cursor IDE settings. Since GitHub READMEs do not execute JavaScript, in practice this means a badge-style link (for example, a Cursor install deeplink, sketched below) or a prominently copyable JSON configuration snippet rather than a scripted copy-to-clipboard button.

Complexity: 2/5
Labels: good first issue, help wanted

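One plausible shape for the button, assuming Cursor's MCP install deeplink scheme; the server name is hypothetical, and BASE64_CONFIG stands for the base64-encoded JSON configuration the deeplink carries:

```markdown
[![Add LibContext to Cursor](https://img.shields.io/badge/Add_to-Cursor-black)](cursor://anysphere.cursor-deeplink/mcp/install?name=libcontext&config=BASE64_CONFIG)
```
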
feat: Publish on Smithery (about 1 month ago)

AI Summary: This task requires deploying the LibContext MCP server, which provides AI-ready documentation from local sources, onto the Smithery platform. This involves configuring the server for deployment on Smithery, potentially adapting the existing deployment scripts or creating new ones, and testing the deployed server's functionality to ensure it works correctly within the Smithery environment.

Complexity: 4/5
Labels: good first issue, help wanted

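A rough sketch of the smithery.yaml such a deployment usually needs; the field layout follows Smithery's stdio convention as best understood here and should be verified against current Smithery docs, and the entry point is an assumption:

```yaml
# smithery.yaml: tells Smithery how to launch the MCP server over stdio
startCommand:
  type: stdio
  configSchema:
    # JSON Schema for user-supplied options (none required in this sketch)
    type: object
    properties: {}
  commandFunction: |-
    (config) => ({ command: "node", args: ["dist/index.js"] })
```
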
AI Summary: Enhance LibContext to manage documentation context on a per-project basis, allowing each project to specify the versions of libraries it uses. This involves modifying the CLI to handle project-specific configurations, potentially using a per-project MCP server setup, and ensuring efficient caching to avoid redundant processing of shared library versions.

Complexity: 4/5
Labels: help wanted

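One possible shape for the per-project configuration and its loader; the file name, fields, and cache-key scheme below are hypothetical illustrations of the idea, not an agreed design:

```typescript
import { readFile } from "node:fs/promises";
import { resolve } from "node:path";

// Hypothetical per-project config file, e.g. ./libcontext.json
interface ProjectConfig {
  // library name -> pinned version whose docs should be indexed
  libraries: Record<string, string>; // e.g. { "react": "18.3.1" }
}

async function loadProjectConfig(cwd = process.cwd()): Promise<ProjectConfig> {
  const raw = await readFile(resolve(cwd, "libcontext.json"), "utf8");
  return JSON.parse(raw) as ProjectConfig;
}

// Cache by library@version so two projects pinning the same version
// share one indexed copy instead of reprocessing the docs twice.
const cacheKey = (lib: string, version: string) => `${lib}@${version}`;
```
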
AI Summary: Enhance the `libcontext add` command to display the number of tokens used, the estimated cost, and the time taken during the documentation indexing process. This involves integrating token counting functionality with the OpenAI API calls and adding output formatting to the CLI.

Complexity: 4/5
Labels: help wanted

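A sketch of how the stats could be collected, assuming the OpenAI Node SDK; the model name and per-token price are placeholders, and the cost figure lumps input and output tokens together, so it is only a rough estimate:

```typescript
import OpenAI from "openai";

const openai = new OpenAI();
const USD_PER_1M_TOKENS = 0.15; // placeholder rate, not a real price

async function indexWithStats(chunks: string[]): Promise<void> {
  const started = Date.now();
  let totalTokens = 0;

  for (const chunk of chunks) {
    const res = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: chunk }],
    });
    totalTokens += res.usage?.total_tokens ?? 0; // exact usage per request
  }

  const seconds = (Date.now() - started) / 1000;
  const cost = (totalTokens / 1_000_000) * USD_PER_1M_TOKENS;
  console.log(
    `Indexed in ${seconds.toFixed(1)}s, ${totalTokens} tokens, ~$${cost.toFixed(4)}`,
  );
}
```
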
AI Summary: Implement incremental processing in LibContext to improve performance and reduce costs by using Git tree and file hashing to identify and skip unchanged files during indexing. This involves storing Git tree hashes, comparing file hashes between runs, and skipping LLM processing for unchanged content.

Complexity: 4/5
Labels: help wanted

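The key observation is that Git's blob hash is cheap to recompute locally: hash each file the way Git does, compare against the hash stored from the previous run, and send only the changed files to the LLM. A sketch, with the persistence of previous hashes left abstract:

```typescript
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

// Git's blob hash is sha1("blob <byte length>\0<content>")
function gitBlobSha(content: Buffer): string {
  const header = Buffer.from(`blob ${content.length}\0`);
  return createHash("sha1")
    .update(Buffer.concat([header, content]))
    .digest("hex");
}

async function filesNeedingReindex(
  paths: string[],
  previousHashes: Map<string, string>, // loaded from the local index store
): Promise<string[]> {
  const changed: string[] = [];
  for (const path of paths) {
    const sha = gitBlobSha(await readFile(path));
    if (previousHashes.get(path) !== sha) changed.push(path); // new or modified
  }
  return changed; // unchanged files skip LLM processing entirely
}
```
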
AI Summary: Implement robust error handling for GitHub API interactions within the LibContext project. This involves adding retry mechanisms to handle transient network issues and rate limiting, ensuring the application continues functioning even with temporary API disruptions.

Complexity: 4/5
Labels: help wanted

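A sketch of a retry wrapper around fetch that backs off exponentially and respects GitHub's documented rate-limit headers; error classification is simplified:

```typescript
async function githubFetch(url: string, retries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    try {
      const res = await fetch(url, {
        headers: { Accept: "application/vnd.github+json" },
      });
      // 403/429 responses signal rate limiting; wait until the quota resets
      if ((res.status === 403 || res.status === 429) && attempt < retries) {
        const reset = Number(res.headers.get("x-ratelimit-reset")) * 1000;
        const wait = Math.max(reset - Date.now(), 2 ** attempt * 1000);
        await new Promise((r) => setTimeout(r, wait));
        continue;
      }
      return res;
    } catch (err) {
      // Network-level failure: back off exponentially, then retry
      if (attempt >= retries) throw err;
      await new Promise((r) => setTimeout(r, 2 ** attempt * 1000));
    }
  }
}
```
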
AI Summary: Implement fallback mechanisms and retry logic for LLM requests in LibContext. This involves allowing users to specify primary and secondary LLM providers (e.g., OpenAI and Azure) and adding retry functionality with exponential backoff to handle transient network issues or API rate limits.

Complexity: 4/5
Labels: help wanted

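A sketch of the provider-fallback loop, assuming both providers expose OpenAI-compatible endpoints; the environment variable names are hypothetical:

```typescript
import OpenAI from "openai";

const providers = [
  new OpenAI(), // primary, configured via OPENAI_API_KEY
  new OpenAI({
    baseURL: process.env.LIBCONTEXT_FALLBACK_BASE_URL, // e.g. an Azure endpoint
    apiKey: process.env.LIBCONTEXT_FALLBACK_API_KEY,
  }),
];

async function completeWithFallback(prompt: string, retriesPerProvider = 2) {
  for (const client of providers) {
    for (let attempt = 0; attempt <= retriesPerProvider; attempt++) {
      try {
        return await client.chat.completions.create({
          model: "gpt-4o-mini",
          messages: [{ role: "user", content: prompt }],
        });
      } catch {
        // Exponential backoff before retrying the same provider
        await new Promise((r) => setTimeout(r, 2 ** attempt * 1000));
      }
    }
    // This provider is exhausted; fall through to the next one
  }
  throw new Error("All configured LLM providers failed");
}
```
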
feat: Support local LLMs (about 1 month ago)

AI Summary: This task requires extending the LibContext tool to support local LLMs for documentation extraction. This involves detecting or configuring local LLM endpoints (e.g., via environment variables or a config file) and, where needed, integrating with a proxy such as LiteLLM to broker communication with these models; the initial focus is on endpoints that already expose an OpenAI-compatible API.

Complexity: 4/5
Labels: help wanted

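Because the initial focus is OpenAI-compatible APIs, the integration can be as small as pointing the existing client at a local endpoint. A sketch; the environment variable names are assumptions, and the default URL is Ollama's OpenAI-compatible endpoint:

```typescript
import OpenAI from "openai";

// Ollama and many other local servers expose an OpenAI-compatible /v1 API
const client = new OpenAI({
  baseURL: process.env.LIBCONTEXT_LLM_BASE_URL ?? "http://localhost:11434/v1",
  apiKey: process.env.LIBCONTEXT_LLM_API_KEY ?? "ollama", // often ignored locally
});

const res = await client.chat.completions.create({
  model: process.env.LIBCONTEXT_LLM_MODEL ?? "llama3.1",
  messages: [{ role: "user", content: "Extract snippets from this doc..." }],
});
```
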
AI Summary: Implement configurable embedding settings in LibContext, allowing users to specify the embedding model (e.g., `text-embedding-3-small`), dimensions, and other parameters. This requires adapting database migrations to handle the variable dimensions of the embeddings.

Complexity: 4/5
Labels: help wanted

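The OpenAI embeddings endpoint already accepts a dimensions parameter on the text-embedding-3 models, so the core work is plumbing the setting through and sizing the vector column to match. A sketch:

```typescript
import OpenAI from "openai";

interface EmbeddingSettings {
  model: string; // e.g. "text-embedding-3-small"
  dimensions?: number; // supported by the text-embedding-3 models
}

const openai = new OpenAI();

async function embed(texts: string[], settings: EmbeddingSettings) {
  const res = await openai.embeddings.create({
    model: settings.model,
    input: texts,
    ...(settings.dimensions ? { dimensions: settings.dimensions } : {}),
  });
  return res.data.map((d) => d.embedding);
}

// The migration must create the vector column with the same size the
// settings declare, since stored and query vectors have to agree.
```
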
AI Summary: Implement user-configurable settings for the LLM used in LibContext's documentation extraction process. This includes allowing users to specify a custom system prompt and to adjust parameters such as the model name and temperature.

Complexity: 4/5
Labels: good first issue, help wanted

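A sketch of the settings surface, merging user-supplied values over defaults; all field names and default values are hypothetical:

```typescript
interface ExtractionSettings {
  model: string;
  temperature: number;
  systemPrompt: string;
}

const DEFAULTS: ExtractionSettings = {
  model: "gpt-4o-mini",
  temperature: 0,
  systemPrompt: "Extract self-contained, LLM-ready snippets from the docs.",
};

// A user config file or CLI flags only need to specify what differs
function resolveSettings(user: Partial<ExtractionSettings>): ExtractionSettings {
  return { ...DEFAULTS, ...user };
}

const settings = resolveSettings({ temperature: 0.2 });
```
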
AI Summary: Modify the LibContext application to prioritize using a GitHub personal access token from the `GITHUB_TOKEN` environment variable, falling back to the CLI argument only if the environment variable is not set. This involves updating the code to check for the environment variable before processing CLI arguments and handling potential errors gracefully.

Complexity: 3/5
Labels: good first issue, help wanted

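The precedence rule itself is one line; the rest is failing with a clear message when neither source is set. A sketch (the flag name is an assumption):

```typescript
function resolveGithubToken(cliToken?: string): string {
  // The environment variable wins; the CLI argument is only a fallback
  const token = process.env.GITHUB_TOKEN ?? cliToken;
  if (!token) {
    throw new Error(
      "No GitHub token found: set GITHUB_TOKEN or pass --github-token",
    );
  }
  return token;
}
```
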
AI Summary: Implement a CLI progress bar during the `libcontext add` command, showing stages like file reading, snippet extraction, and embedding. This will improve user experience, especially for large repositories, by providing feedback on the indexing process.

Complexity: 4/5
Labels: good first issue, help wanted

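A sketch using the cli-progress package, showing the current stage next to the bar; the stage names follow the issue text and the helper is hypothetical:

```typescript
import cliProgress from "cli-progress";

const bar = new cliProgress.SingleBar(
  { format: "{stage} |{bar}| {value}/{total}" },
  cliProgress.Presets.shades_classic,
);

async function runStage<T>(
  stage: string,
  items: T[],
  work: (item: T) => Promise<void>,
): Promise<void> {
  bar.start(items.length, 0, { stage }); // payload fills the {stage} token
  for (const item of items) {
    await work(item);
    bar.increment();
  }
  bar.stop();
}

// e.g. await runStage("Reading files", files, readOne);
//      await runStage("Extracting snippets", files, extractOne);
//      await runStage("Embedding", chunks, embedOne);
```
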