A comprehensive framework for benchmarking, profiling, and analyzing the performance and quality of local LLMs via Ollama.

1 open issue needs help. Last updated: Jul 29, 2025.



AI Summary: The task is to extend the existing local LLM benchmarking framework to support AMD and Intel GPUs for performance profiling. This involves researching and integrating alternatives to `pynvml` that provide GPU monitoring on non-NVIDIA hardware, while ensuring compatibility and accurate data collection across GPU vendors.

Complexity: 4/5
Labels: enhancement, help wanted
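One way to approach the issue above is to probe for vendor monitoring libraries at runtime and fall back gracefully when none is present. The sketch below is a minimal, hypothetical starting point, not the framework's actual code: it assumes `pynvml` for NVIDIA (which the issue names) and uses `amdsmi` as one candidate AMD library; the Intel path and the final library choices are still open questions in the issue.

```python
import importlib.util


def detect_gpu_backend():
    """Pick a GPU monitoring backend by probing for vendor libraries.

    pynvml covers NVIDIA; `amdsmi` is only a candidate for AMD,
    not a confirmed choice. Returns "cpu-only" when neither is found.
    """
    if importlib.util.find_spec("pynvml"):
        return "nvidia"
    if importlib.util.find_spec("amdsmi"):
        return "amd"
    return "cpu-only"


def sample_gpu_utilization():
    """Return a small stats dict for device 0, or just the backend name."""
    backend = detect_gpu_backend()
    if backend == "nvidia":
        try:
            import pynvml
            pynvml.nvmlInit()
            handle = pynvml.nvmlDeviceGetHandleByIndex(0)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            pynvml.nvmlShutdown()
            return {
                "backend": backend,
                "gpu_util_pct": util.gpu,
                "mem_used_bytes": mem.used,
            }
        except Exception:
            # Library installed but no usable NVIDIA device/driver.
            return {"backend": backend, "error": "NVML unavailable"}
    # AMD/Intel sampling is left as a stub pending the backend decision.
    return {"backend": backend}
```

A vendor-agnostic wrapper like this would let the profiler record which backend produced each sample, so results from different GPUs remain comparable.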


Language: Python