AI-powered voice assistant with browser automation, wake word detection, and LLM integration

1 Open Issue Need Help Last updated: Jan 30, 2026


AI/ML Personal Assistants

AI Summary: This issue proposes a proof-of-concept (PoC): stand up an OpenAI-compatible server with vLLM on an NVIDIA GPU, launch a chosen LLM (e.g., Qwen2.5 or Llama) with a single command, and verify functionality and performance through basic client requests and latency checks.
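A minimal sketch of the PoC described above. The launch command uses vLLM's `vllm serve` entry point, and the client uses the OpenAI Python SDK pointed at the local server; the model name, port, and request count are placeholder assumptions, not values specified by the issue.

```python
# Sketch of the vLLM PoC. Assumed prerequisite (single launch command,
# run separately on the GPU host):
#
#   vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
#
# The model name and port above are assumptions for illustration.
import statistics
import time


def summarize_latencies(samples):
    """Return (p50, p95) latency in seconds for a list of measurements."""
    ordered = sorted(samples)
    p50 = statistics.median(ordered)
    # Nearest-rank p95 over the sorted samples.
    p95 = ordered[min(len(ordered) - 1, round(0.95 * (len(ordered) - 1)))]
    return p50, p95


def main():
    # Imported here so the latency helper stays usable without the SDK.
    from openai import OpenAI

    # vLLM exposes an OpenAI-compatible API; the key is unused locally.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    latencies = []
    for _ in range(5):
        start = time.perf_counter()
        resp = client.chat.completions.create(
            model="Qwen/Qwen2.5-7B-Instruct",
            messages=[{"role": "user", "content": "Say hello in one word."}],
            max_tokens=8,
        )
        latencies.append(time.perf_counter() - start)
        print(resp.choices[0].message.content)

    p50, p95 = summarize_latencies(latencies)
    print(f"p50={p50:.3f}s  p95={p95:.3f}s")


if __name__ == "__main__":
    main()
```

Running the script against the live server prints each completion and a p50/p95 latency summary, which covers the "basic client requests and latency checks" verification step.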

Complexity: 2/5
Labels: good first issue · priority:P0 · area:llm · vllm · llm-backend


Python