This repository contains automation scripts designed to run MLPerf Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, these scripts have been adapted to leverage the MLC automation framework, maintained by the MLCommons Benchmark Infrastructure Working Group.

8 stars · 19 forks · 8 watchers · Python · Apache License 2.0
Topics: automation, mlcflow, mlperf, mlperf-automations, mlperf-inference
3 open issues need help · Last updated: Sep 5, 2025

Open Issues Need Help

Labels: enhancement, good first issue


AI Summary: The task is to fix a broken dataset link for the llama3-8b model in the MLPerf Inference benchmark automation scripts. This likely involves updating a configuration file or script to point to the correct dataset location; pull request #2300 in the MLPerf Inference repository can serve as a reference.

Complexity: 3/5
Labels: good first issue, mlperf-work


AI Summary: The task is to add support for downloading the OpenAI Whisper large-v3 model from Hugging Face to the MLPerf Inference automation scripts. This involves integrating the provided `git lfs` and `git clone` commands into the existing model-download functionality and ensuring compatibility with the MLCFlow framework.

Complexity: 3/5
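The issue does not show the exact commands, so the following is only a hedged sketch of how a Hugging Face model repository is commonly fetched with Git LFS. The repository URL (`openai/whisper-large-v3`) and the two-step pointer-clone-then-pull pattern are assumptions, not taken from the issue, and the actual integration point inside the MLCFlow download scripts is not shown here.

```shell
#!/bin/sh
# Sketch only: standard Git LFS fetch of a Hugging Face model repo.
# Assumed URL; the real automation scripts may resolve it differently.
set -e

git lfs install

# Clone LFS pointer files first (fast, small) instead of the full weights.
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/openai/whisper-large-v3
cd whisper-large-v3

# Then materialize the large model files on demand.
git lfs pull
```

Skipping the smudge step on clone is a common choice for large model repos, since it lets the scripts verify the repository layout before committing to the multi-gigabyte weight download.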