Open Issues Need Help
AI Summary: Implement a new feature in the real-time voice conversation server that lets users specify a custom endpoint for receiving LLM responses. This involves modifying the LLM class to send requests to the custom endpoint, along with any configuration and error-handling changes needed to support the new option. The goal is to let the server act as a microservice, delegating custom business logic to the specified endpoint.
Real-time WebSocket-based server for human-like conversation between a user and an LLM/bot
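A minimal sketch of how such a custom-endpoint backend could look, assuming the server is Python-based. The class name `CustomEndpointLLM`, the config key `custom_llm_url`, and the request/response shape are hypothetical illustrations, not the project's actual API.

```python
# Hypothetical sketch: forwarding LLM requests to a user-supplied endpoint.
# Class name, config key, and payload shape are assumptions, not the repo's API.
import httpx


class CustomEndpointLLM:
    """LLM backend that POSTs the conversation to a user-configured endpoint."""

    def __init__(self, endpoint_url: str, timeout: float = 30.0):
        self.endpoint_url = endpoint_url  # e.g. read from config: custom_llm_url
        self.client = httpx.AsyncClient(timeout=timeout)

    async def generate(self, messages: list[dict]) -> str:
        """Send the chat history to the custom endpoint and return its reply text."""
        try:
            resp = await self.client.post(self.endpoint_url, json={"messages": messages})
            resp.raise_for_status()
            return resp.json().get("reply", "")
        except httpx.HTTPError as exc:
            # Error handling: fall back to a safe message so the voice loop keeps running.
            return f"Sorry, the custom LLM endpoint failed: {exc}"

    async def aclose(self) -> None:
        await self.client.aclose()
```

Returning a fallback string on HTTP errors (rather than raising) is one way to keep the real-time conversation alive when the user's endpoint is misconfigured or down.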
AI Summary: Conduct end-to-end testing of the real-time voice conversation server. This involves first checking for errors, then exercising the conversation flow through the Expo SDK client to identify areas for improvement. Testing should cover the full pipeline, from audio processing and transcription to LLM interaction and text-to-speech.
Real-time WebSocket-based server for human-like conversation between a user and an LLM/bot
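A rough end-to-end smoke-test sketch for the text path, assuming the server exposes a WebSocket at ws://localhost:8000/ws and exchanges JSON messages; the URL, message shapes, and pytest-asyncio setup are assumptions for illustration, since the full flow runs through the Expo SDK client with real audio.

```python
# Hypothetical end-to-end smoke test; the URL and JSON message shapes are assumptions.
import asyncio
import json

import pytest
import websockets

SERVER_URL = "ws://localhost:8000/ws"  # assumed address of the running conversation server


@pytest.mark.asyncio  # requires the pytest-asyncio plugin
async def test_text_round_trip():
    """Connect, send a user utterance as text, and expect some bot response back."""
    async with websockets.connect(SERVER_URL) as ws:
        await ws.send(json.dumps({"type": "user_text", "text": "Hello, can you hear me?"}))
        # Wait for the first server message; fail the test if nothing arrives in 15 s.
        raw = await asyncio.wait_for(ws.recv(), timeout=15)
        reply = json.loads(raw)
        assert reply, "server sent an empty payload"
        # Audio capture, transcription, and TTS would need the Expo client or recorded
        # audio fixtures; this only smoke-tests the websocket + LLM text path.
```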