Open Issues Needing Help
AI Summary: Implement a pre-commit hook for the Llama Stack project to enforce the presence of the `upload-time` field in the `uv.lock` file. This will prevent unnecessary diffs and reduce the burden on code reviewers by ensuring consistency in the project's configuration files.
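A minimal sketch of what the hook's check script could look like, assuming `uv.lock` is TOML with `[[package]]` entries whose `sdist` table and `wheels` array may carry an `upload-time` key; the exact lock-file layout and the hook wiring in `.pre-commit-config.yaml` are assumptions, not the issue's actual implementation.

```python
#!/usr/bin/env python3
"""Pre-commit check sketch: every distribution entry in uv.lock should carry upload-time."""
import sys
import tomllib  # Python 3.11+


def missing_upload_times(lock_path: str = "uv.lock") -> list[str]:
    """Return package/distribution pairs in uv.lock that lack an upload-time field."""
    with open(lock_path, "rb") as f:
        lock = tomllib.load(f)

    problems: list[str] = []
    for package in lock.get("package", []):
        name = package.get("name", "<unknown>")
        # Collect every distribution (sdist + wheels) attached to this package.
        dists = []
        if "sdist" in package:
            dists.append(package["sdist"])
        dists.extend(package.get("wheels", []))
        for dist in dists:
            if "upload-time" not in dist:
                problems.append(f"{name}: {dist.get('url', '<no url>')}")
    return problems


if __name__ == "__main__":
    problems = missing_upload_times()
    if problems:
        print("uv.lock entries missing `upload-time`:")
        for entry in problems:
            print(f"  - {entry}")
        sys.exit(1)
```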
AI Summary: Create a pre-commit hook that checks Python code to ensure the project's custom logger (`llama_stack.log`) is used instead of the standard `logging` module. This aims to enforce consistent logging practices across the codebase.
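One way such a hook could work is to walk the AST of each staged file and flag direct imports of the stdlib `logging` module; the sketch below assumes pre-commit passes the staged file names as arguments, and the suggested replacement module name is taken from the issue description rather than verified against the codebase.

```python
#!/usr/bin/env python3
"""Pre-commit check sketch: flag direct use of the stdlib `logging` module."""
import ast
import sys


def find_stdlib_logging_imports(path: str) -> list[int]:
    """Return line numbers where the stdlib logging module is imported."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)

    lines: list[int] = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            if any(alias.name == "logging" or alias.name.startswith("logging.")
                   for alias in node.names):
                lines.append(node.lineno)
        elif isinstance(node, ast.ImportFrom):
            if (node.module or "") == "logging" or (node.module or "").startswith("logging."):
                lines.append(node.lineno)
    return lines


if __name__ == "__main__":
    failed = False
    for path in sys.argv[1:]:  # pre-commit passes staged file names as arguments
        for lineno in find_stdlib_logging_imports(path):
            print(f"{path}:{lineno}: use the project logger (llama_stack.log) instead of `logging`")
            failed = True
    sys.exit(1 if failed else 0)
```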
AI Summary: Modify the `check-workflows-use-hashes` pre-commit check in the Llama Stack project so that it emits errors in GitHub's workflow command output format instead of its current ad hoc text output. This lets GitHub render the errors as inline annotations on the changed files in the pull request view, improving the experience for contributors submitting pull requests.
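The `::error file=...,line=...::message` syntax below is GitHub's documented workflow command format; how the check actually detects unpinned actions is not specified in the issue, so the regex here is only an illustrative stand-in.

```python
#!/usr/bin/env python3
"""Sketch of emitting GitHub workflow commands from the hash-pinning check."""
import re
import sys
from pathlib import Path

# Illustrative: matches `uses: owner/action@v4` (a tag) but not `...@<40-hex-char SHA>`.
UNPINNED = re.compile(r"^\s*-?\s*uses:\s*\S+@(?![0-9a-f]{40}\b)\S+", re.IGNORECASE)


def check_workflow(path: Path) -> bool:
    ok = True
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        if UNPINNED.search(line):
            # GitHub picks this up and renders it as an inline annotation on the PR.
            print(f"::error file={path},line={lineno}::"
                  f"action is not pinned to a full commit SHA: {line.strip()}")
            ok = False
    return ok


if __name__ == "__main__":
    paths = [Path(p) for p in sys.argv[1:]] or list(Path(".github/workflows").glob("*.y*ml"))
    results = [check_workflow(p) for p in paths]
    sys.exit(0 if all(results) else 1)
```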
AI Summary: Add type hints (verified with mypy) to the `llama_stack/distribution`, `llama_stack/apis`, and `llama_stack/cli` directories to improve code maintainability and catch type errors before they surface at runtime. This involves annotating functions, variables, and classes within these directories so that mypy can check them for type correctness.
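An illustrative before/after of the kind of change involved; the function below is hypothetical, not code from `llama_stack/distribution`.

```python
# Before: unannotated, so mypy cannot check callers or the return value.
# def resolve_provider(config, api):
#     return config["providers"].get(api)

from typing import Any


def resolve_provider(config: dict[str, Any], api: str) -> str | None:
    """Look up the provider configured for `api`, if any."""
    providers: dict[str, str] = config.get("providers", {})
    return providers.get(api)
```

With annotations in place, running `mypy` over the directory catches mismatched argument and return types at check time rather than at runtime.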
AI Summary: Automate the update of Llama Stack's model listings by creating a weekly CI job that checks each provider's model list (likely via their APIs or web scraping) and updates the `models.py` files accordingly. This avoids manual updates and ensures the list remains current.
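A sketch of the refresh step such a weekly job could run for one provider; the OpenAI-style `/v1/models` endpoint, the environment variable names, and the stand-in for reading the current `models.py` contents are all assumptions for illustration.

```python
#!/usr/bin/env python3
"""Sketch of a weekly model-list refresh step for a single provider."""
import json
import os
import urllib.request


def fetch_remote_model_ids(base_url: str, api_key: str) -> set[str]:
    """Fetch the provider's current model list from an OpenAI-style /v1/models endpoint."""
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return {entry["id"] for entry in payload.get("data", [])}


def load_local_model_ids(path: str = "current_models.txt") -> set[str]:
    """Stand-in: the real job would read the IDs out of the provider's models.py."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}


if __name__ == "__main__":
    remote = fetch_remote_model_ids(os.environ["PROVIDER_BASE_URL"], os.environ["PROVIDER_API_KEY"])
    local = load_local_model_ids()
    added, removed = remote - local, local - remote
    if added or removed:
        # A real CI job would turn this diff into an automated pull request.
        print(f"new models: {sorted(added)}")
        print(f"retired models: {sorted(removed)}")
```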
AI Summary: Migrate existing unit tests in the Llama Stack project from the `unittest` framework to `pytest`. This involves refactoring the test files to use pytest's syntax and removing the need for `unittest.TestCase` classes and associated setup/teardown methods. The goal is to improve consistency within the project, as most of the codebase already uses pytest.
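An illustrative before/after of the mechanical change involved; the test case itself is made up and not taken from the Llama Stack test suite.

```python
# Before: unittest style, with a TestCase class and setUp.
import unittest


class TestRegistry(unittest.TestCase):
    def setUp(self):
        self.registry = {}

    def test_register(self):
        self.registry["llama"] = "3.1"
        self.assertEqual(self.registry["llama"], "3.1")


# After: pytest style, with a fixture replacing setUp and a plain assert.
import pytest


@pytest.fixture
def registry():
    return {}


def test_register(registry):
    registry["llama"] = "3.1"
    assert registry["llama"] == "3.1"
```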
AI Summary: Standardize error messages for unsupported models in the Llama Stack framework. Currently, some providers return a simple `ValueError`, while others provide a more informative error including a list of supported models. The task involves modifying the error handling within Llama Stack to consistently return the more descriptive error message listing supported models when an unsupported model is specified.
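One way the fix could be shaped is a shared helper that every inference provider calls when a model lookup fails; the class and function names below are assumptions for illustration, not the project's actual API.

```python
from collections.abc import Iterable


class UnsupportedModelError(ValueError):
    """Raised when a provider is asked to serve a model it does not support."""

    def __init__(self, model: str, supported: Iterable[str]) -> None:
        super().__init__(
            f"Model '{model}' is not supported by this provider. "
            f"Supported models: {', '.join(sorted(supported))}"
        )


def raise_unsupported_model(model: str, supported: Iterable[str]) -> None:
    """Raise a consistent, descriptive error if `model` is not in `supported`."""
    supported = list(supported)  # allow generators to be passed in
    if model not in supported:
        raise UnsupportedModelError(model, supported)


# Usage inside a hypothetical provider:
# raise_unsupported_model(requested_model, SUPPORTED_MODELS)
```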