A project that optimizes Wyoming and Whisper for low-latency inference using NVIDIA TensorRT.

4 Open Issues Need Help (Last updated: Sep 14, 2025)

Language: Python

Labels: bug, help wanted, good first issue

Labels: bug, help wanted, good first issue

AI Summary: Modify the WhisperTRT Docker Compose configuration so users can choose the speech-to-text (STT) model used for inference from the docker-compose.yml file rather than relying on a hard-coded default. This involves adding a configuration option to specify the desired model (e.g., `tiny.en`, `base.en`) in docker-compose.yml and updating the Docker image to honor it.

Complexity: 3/5
Labels: enhancement, help wanted
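One possible shape for the requested option, sketched as a compose fragment. The service name, image tag, port, and the assumption that the container entrypoint accepts a `--model` flag are all hypothetical, not taken from the project:

```yaml
# Hypothetical sketch: select the STT model from docker-compose.yml.
# Assumes the image entrypoint forwards a --model flag (flag name is an assumption).
services:
  wyoming-whisper-trt:
    image: whispertrt:latest        # placeholder image name
    command: ["--model", "base.en"] # e.g. tiny.en, base.en
    ports:
      - "10300:10300"               # assumed Wyoming protocol port
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

An environment variable (e.g. `WHISPER_MODEL`) read by the entrypoint script would be an equally reasonable design; `command` keeps the image itself unchanged if it already parses CLI flags.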


AI Summary: Extend the WhisperTRT project to support model selection (e.g., `medium-int8`), language selection, and speaker diarization. This involves integrating a diarization pipeline into the existing Dockerized application and likely extending the API so clients can choose the model and language.

Complexity: 4/5
Labels: enhancement, help wanted
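As one way the requested API surface could look, here is a minimal argparse sketch. All flag names and defaults are hypothetical illustrations of model/language/diarization selection, not the project's actual CLI:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical CLI sketch exposing model, language, and diarization options."""
    parser = argparse.ArgumentParser(
        description="Wyoming Whisper TensorRT server (sketch)")
    parser.add_argument("--model", default="tiny.en",
                        help="Whisper model to load, e.g. tiny.en, base.en, medium-int8")
    parser.add_argument("--language", default=None,
                        help="force a transcription language (default: auto-detect)")
    parser.add_argument("--diarize", action="store_true",
                        help="enable speaker diarization (would need an extra model)")
    return parser

# Example invocation matching the issue's medium-int8 request:
args = build_parser().parse_args(["--model", "medium-int8", "--language", "en"])
print(args.model)  # medium-int8
```

Wiring these flags through to model loading (and to a diarization backend) is where the real integration work in the issue lives; the parser only defines the selection surface.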
