6 Open Issues Need Help (Last updated: Mar 9, 2026)


AI Summary: The global HTTP client `_client` in `server.py` is never properly closed, leading to a critical connection leak. This issue results in indefinitely open TCP sockets, memory accumulation from `httpx` buffers, and eventual connection timeouts. Although a `close()` method exists in `portal.py`, it is never invoked.

Complexity: 2/5
Labels: bug, good first issue, performance
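The fix described above is a shutdown hook that guarantees the existing `close()` method actually runs. A minimal sketch of the pattern, using a stand-in class in place of the real client in `portal.py` (the class name and attributes here are hypothetical; in the real project the close step would be `await self._client.aclose()` on the `httpx` client):

```python
import asyncio
import contextlib


class PortalClient:
    """Hypothetical stand-in for the portal client defined in portal.py."""

    def __init__(self) -> None:
        self.closed = False

    async def close(self) -> None:
        # In the real project this would call `await self._client.aclose()`
        # to release httpx connection pools and the underlying TCP sockets.
        self.closed = True


@contextlib.asynccontextmanager
async def lifespan():
    """Create the shared client on startup and guarantee close() on shutdown."""
    client = PortalClient()
    try:
        yield client
    finally:
        await client.close()


async def main() -> bool:
    async with lifespan() as client:
        assert not client.closed  # client is live inside the context
    return client.closed


print(asyncio.run(main()))  # True: close() ran without any manual call
```

Wiring the server's startup/shutdown through a context manager like this means no code path can exit without releasing the client, which is exactly the guarantee the issue says is missing.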

AI Summary: The current `fastmcp` dependency constraint (`>=0.4.0`) in `pyproject.toml` is too permissive, risking the installation of incompatible future major versions or outdated, potentially buggy versions. The project implicitly uses `fastmcp==3.1.0` via `uv.lock`, and the goal is to restrict the version to prevent breaking changes and ensure stability.

Complexity: 1/5
Labels: good first issue, auto-fix, security, dependencies
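One conventional way to tighten the constraint, assuming the maintainers want to stay on the 3.x line the lockfile already resolves to, is an upper-bounded range in `pyproject.toml` (the exact bounds are a maintainer decision, not something stated in the issue):

```toml
[project]
dependencies = [
    # Floor at the version uv.lock already resolves (fastmcp==3.1.0),
    # cap below 4.0 so a future major release cannot break the build.
    "fastmcp>=3.1.0,<4.0.0",
]
```

The equivalent compatible-release spelling would be `fastmcp~=3.1`, which expands to the same `>=3.1, <4` range.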

AI Summary: The `download_dataset` tool consistently returns file content encoded in base64, leading to unreadable output in the UI, a 33% increase in response size, and timeouts for large files. This enhancement proposes a hybrid mode, likely involving returning URLs instead of base64 for larger files or specific content types, to address these problems and improve efficiency.

Complexity: 4/5
Labels: enhancement, good first issue, performance, auto-fix
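The hybrid mode proposed above amounts to a decision rule in the tool's response builder. A sketch under assumed parameters (the threshold, the set of inline-friendly content types, and the response shape are all illustrative, not from the issue):

```python
# Hypothetical cutoff: payloads larger than this are returned by reference.
MAX_INLINE_BYTES = 256 * 1024

# Content types that render usefully as plain text in the UI (assumed list).
TEXT_TYPES = {"text/csv", "application/json", "text/plain"}


def build_response(content: bytes, content_type: str, url: str) -> dict:
    """Return small text payloads inline, everything else as a URL.

    Inline text stays readable in the UI; large or binary files are
    returned by reference, avoiding the ~33% base64 size overhead and
    the large-file timeouts described in the issue.
    """
    if content_type in TEXT_TYPES and len(content) <= MAX_INLINE_BYTES:
        return {"mode": "inline", "content": content.decode("utf-8", "replace")}
    return {"mode": "url", "url": url, "size": len(content)}


small = build_response(b"a,b\n1,2\n", "text/csv", "https://example.org/d.csv")
big = build_response(b"\x00" * (512 * 1024), "application/pdf", "https://example.org/d.pdf")
print(small["mode"], big["mode"])  # inline url
```

Keeping the rule in one function makes the threshold easy to tune once real response-size data is available.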

AI Summary: The `preview_dataset` tool currently rejects HTML resources, preventing users from previewing datasets that contain valuable data tables embedded within HTML pages, particularly from the OpenDataForAfrica portal. The issue stems from HTML not being included in the list of supported formats in the `src/opendata_bj/tools/datasets.py` file. The goal is to enable previewing and working with these HTML-based data tables.

Complexity: 3/5
Labels: bug, enhancement, good first issue, auto-fix
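Beyond adding `"html"` to the supported-format list in `src/opendata_bj/tools/datasets.py`, the preview needs to pull tabular data out of the page. A stdlib-only sketch of that extraction step (the class and its output shape are illustrative, not the project's API):

```python
from html.parser import HTMLParser


class TableExtractor(HTMLParser):
    """Minimal sketch: collect cell text from <table> rows in an HTML page."""

    def __init__(self) -> None:
        super().__init__()
        self.in_cell = False
        self.rows: list[list[str]] = []

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows.append([])       # start a new row
        elif tag in ("td", "th"):
            self.in_cell = True        # following text is cell content

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and self.rows:
            self.rows[-1].append(data.strip())


page = (
    "<table><tr><th>Year</th><th>Value</th></tr>"
    "<tr><td>2024</td><td>42</td></tr></table>"
)
parser = TableExtractor()
parser.feed(page)
print(parser.rows)  # [['Year', 'Value'], ['2024', '42']]
```

A production version would likely cap the number of rows returned, since preview output should stay small; this sketch only shows that HTML tables are recoverable without extra dependencies.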

AI Summary: The issue addresses the lack of client-side pagination in `get_all_datasets()`, preventing users from retrieving all results when the API returns more than the specified limit. The proposed solution involves adding an `offset` parameter to `get_all_datasets()` and introducing a new `iter_all_datasets()` method that uses this parameter to iteratively fetch and yield all datasets in batches.

Complexity: 3/5
Labels: enhancement, help wanted, auto-fix
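The proposed `offset` parameter plus `iter_all_datasets()` generator can be sketched as follows, with an in-memory list standing in for the real portal API (batch size, sentinel logic, and the fake data are assumptions for illustration):

```python
from typing import Iterator

# Fake portal data standing in for the real API (hypothetical).
_FAKE_API = [f"dataset-{i}" for i in range(7)]


def get_all_datasets(limit: int = 3, offset: int = 0) -> list[str]:
    """One page of results; the proposed `offset` parameter enables paging."""
    return _FAKE_API[offset : offset + limit]


def iter_all_datasets(batch_size: int = 3) -> Iterator[str]:
    """Yield every dataset, fetching pages until a short or empty page arrives."""
    offset = 0
    while True:
        page = get_all_datasets(limit=batch_size, offset=offset)
        yield from page
        if len(page) < batch_size:  # a short page means the end was reached
            return
        offset += batch_size


print(list(iter_all_datasets()))  # all 7 datasets, fetched in batches of 3
```

Using a short page as the termination signal avoids an extra round trip compared with fetching until an empty page, though the real API's paging metadata (if any) would be the more robust stop condition.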