Open Issues Needing Help
AI Summary: The task is to modify the Docker Compose example for Paddler, a load balancer for llama.cpp, so that users can specify a pre-downloaded model instead of downloading one from Hugging Face when the example runs. This likely involves adjusting the Docker Compose file to mount a volume containing the pre-downloaded model; see the sketch after the issue metadata below.
Complexity: 2/5
Labels: good first issue, help wanted
Stateful load balancer custom-tailored for llama.cpp 🏓🦙
Language: Rust
#ai #llamacpp #llm #llmops #load-balancer
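A minimal sketch of what the requested change could look like: instead of a download step, the host directory holding an already-downloaded GGUF model is bind-mounted into the llama.cpp server container and its path is passed via `--model`. The service name, image tag, `MODEL_DIR`/`MODEL_FILE` variables, and paths are illustrative assumptions, not taken from Paddler's actual example, and the Paddler balancer/agent services from the upstream compose file are omitted.

```yaml
# Sketch only (not Paddler's actual compose file): run llama.cpp's server
# against a model that already exists on the host instead of downloading it.
# Service name, image tag, ports, and paths are assumptions.
services:
  llamacpp:
    image: ghcr.io/ggml-org/llama.cpp:server   # assumed image; keep whatever the example uses
    volumes:
      # Bind-mount the host directory containing the pre-downloaded GGUF model.
      # MODEL_DIR and MODEL_FILE can be set in the environment or an .env file.
      - ${MODEL_DIR:-./models}:/models:ro
    command: >
      --model /models/${MODEL_FILE:-model.gguf}
      --host 0.0.0.0
      --port 8080
    ports:
      - "8080:8080"

  # The Paddler balancer/agent services from the upstream example would stay
  # unchanged; only the model source for the llama.cpp service differs.
```

With something like this in place, a user could run `MODEL_DIR=/path/to/models MODEL_FILE=my-model.gguf docker compose up` and skip the Hugging Face download entirely.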