Open Issues Needing Help
A project that optimizes Wyoming and Whisper for low latency inference using NVIDIA TensorRT
AI Summary: Modify the WhisperTRT project's Docker Compose configuration so that users can choose the speech-to-text (STT) model used for inference from the docker-compose file instead of being limited to a default model. This involves adding a configuration option for the desired model (e.g., `tiny.en`, `base.en`) to docker-compose.yml and updating the Docker image to honor that setting.
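A minimal sketch of what such a compose entry might look like, assuming (these details are not taken from the project) that the image is published as `whisper-trt:latest`, listens on the commonly used Wyoming STT port 10300, and reads a hypothetical `WHISPER_MODEL` environment variable at startup:

```yaml
# Sketch only: the image name, port, and WHISPER_MODEL variable are
# assumptions for illustration, not documented options of the project.
services:
  whisper-trt:
    image: whisper-trt:latest          # hypothetical image tag
    environment:
      # Hypothetical setting the entrypoint would read and pass to the
      # Whisper/TensorRT model loader (e.g. tiny.en, base.en).
      - WHISPER_MODEL=base.en
    ports:
      - "10300:10300"                  # Wyoming protocol port (assumed default)
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia           # expose the GPU to the container
              count: 1
              capabilities: [gpu]
```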
AI Summary: Extend the Wyoming/Whisper TensorRT inference project to support model selection (e.g., `medium-int8`), language selection, and speaker diarization. This involves integrating speaker diarization into the existing Dockerized application and potentially modifying the API to expose model and language selection.
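Under the same assumptions as the sketch above, model and language selection could be exposed as hypothetical command-line flags that the container entrypoint parses and forwards to the TensorRT engine; speaker diarization, by contrast, would need new code in the application itself rather than just a compose option:

```yaml
# Sketch only: the flag names are hypothetical and would have to match
# whatever arguments the entrypoint actually accepts.
services:
  whisper-trt:
    image: whisper-trt:latest          # hypothetical image tag, as above
    command: ["--model", "medium-int8", "--language", "en"]
    ports:
      - "10300:10300"
```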