A local REST API that accepts a URL, scrapes its content, and returns a summary generated by a local LLM via Ollama

4 Open Issues Need Help · Last updated: Mar 4, 2026



AI Summary: This issue proposes refactoring repeated test cases in the test suite to use `pytest.mark.parametrize`. The goal is to reduce code duplication by consolidating tests that share the same logic but differ only in input values, specifically targeting summary length tests in `tests/routes/test_summarize.py` and prompt content tests in `tests/services/test_ollama.py`. This change aims to improve test maintainability and readability without altering the total number of reported test cases.

Complexity: 2/5
Labels: enhancement, good first issue
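The refactor this issue describes can be sketched with a minimal, self-contained example. The real tests live in `tests/routes/test_summarize.py` and `tests/services/test_ollama.py`; `summarize_stub`, the test name, and the parameter values below are illustrative stand-ins, not the project's actual code.

```python
import pytest


def summarize_stub(text: str, max_length: int) -> str:
    """Illustrative stand-in for the summarization logic under test."""
    return text[:max_length]


# One parametrized test replaces several near-identical tests that differed
# only in their input values; pytest still reports each case separately,
# so the total number of reported test cases is unchanged.
@pytest.mark.parametrize(
    "max_length, expected",
    [
        (5, "hello"),
        (3, "hel"),
        (11, "hello world"),
    ],
)
def test_summary_respects_max_length(max_length: int, expected: str) -> None:
    assert summarize_stub("hello world", max_length) == expected
```

Each tuple in the list becomes one test case, which is how the duplicated length and prompt-content tests would collapse into a single function each.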


Language: Python

AI Summary: This issue proposes adding structured logging to all API routes in the application, as currently only the `/summarize` POST endpoint emits logs. The goal is to improve observability by adding `log.info` calls with relevant keyword arguments to the `GET /history`, `GET /history/{id}`, `DELETE /history/{id}`, and `POST /history/{id}/retry` endpoints, using the existing `structlog` library.

Complexity: 2/5
Labels: enhancement, good first issue


AI Summary: This issue proposes adding a `reading_time_minutes` field to the API's summary response. This field will be calculated based on the word count of the scraped article content, providing users with an estimate of how long it will take to read. The implementation involves adding a computed property to the ORM model and updating the response schema, with no changes required for routes or the repository.

Complexity: 2/5
Labels: enhancement, good first issue
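The computed property can be sketched as follows; the `Summary` class is a stand-in for the project's ORM model, and the 200 words-per-minute reading speed is an assumed figure, not something specified by the issue.

```python
import math

WORDS_PER_MINUTE = 200  # assumed average reading speed


class Summary:
    """Illustrative stand-in for the project's ORM model."""

    def __init__(self, content: str):
        self.content = content

    @property
    def reading_time_minutes(self) -> int:
        # Word count of the scraped content, rounded up, never below 1 minute.
        words = len((self.content or "").split())
        return max(1, math.ceil(words / WORDS_PER_MINUTE))
```

The response schema would then expose the property as a plain integer field, which is why no route or repository changes are needed.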


AI Summary: This issue proposes to store the raw scraped text content in the database alongside the generated summary. This will allow for reviewing the original text used for summarization and enable retrying operations with the same content without re-scraping. Changes are needed in the `create` and `update` functions of the summary repository, and the `POST /summarize` and retry handlers in the summarize route.

Complexity: 2/5
Labels: enhancement, good first issue
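The repository change can be sketched with an in-memory stand-in; the real project persists to a database, and `SummaryRecord`, the field names, and the `retry_summary` helper below are illustrative, not the project's actual code.

```python
from dataclasses import dataclass


@dataclass
class SummaryRecord:
    url: str
    summary: str
    raw_content: str  # newly stored scraped text, so retries can skip re-scraping


class SummaryRepository:
    """In-memory sketch of the repository's create/get with the new column."""

    def __init__(self) -> None:
        self._items: dict[int, SummaryRecord] = {}
        self._next_id = 1

    def create(self, url: str, summary: str, raw_content: str) -> int:
        item_id = self._next_id
        self._next_id += 1
        self._items[item_id] = SummaryRecord(url, summary, raw_content)
        return item_id

    def get(self, item_id: int) -> SummaryRecord:
        return self._items[item_id]


def retry_summary(repo: SummaryRepository, item_id: int, summarize) -> str:
    """Re-run summarization on the stored text instead of re-scraping."""
    record = repo.get(item_id)
    record.summary = summarize(record.raw_content)
    return record.summary
```

Storing `raw_content` at create time is what lets the retry handler feed the original text straight back to the LLM without another scrape.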
