Repository for OpenVINO's extra modules

C++
#arm #inference-engine #java #nvidia-gpu #openvino #pytorch

3 open issues need help (last updated: Sep 11, 2025)

Open Issues Need Help

Nothing is generated (about 1 month ago)

AI Summary: A user is attempting to run a custom OpenVINO model on an NPU device but receives no output when making a completion request. The logs indicate that no generation, tokenization, or other processing occurs, with all timing metrics showing 0.00, suggesting a silent failure in the inference pipeline before any actual computation takes place.

Complexity: 3/5
good first issue
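
The summary does not say which API the reporter used, so as a point of comparison only, here is a minimal sketch using the openvino_genai Python package (an assumption, as is the model directory name) that exercises the same NPU generation path. If the pipeline is healthy it prints generated text; an empty result would match the silent failure described above.

```python
import openvino_genai as ov_genai

# Hypothetical path to a locally exported OpenVINO model directory
# (an assumption, not the reporter's actual model).
model_dir = "my_model_ov"

# Build the pipeline on the NPU device named in the report; swapping "NPU"
# for "CPU" is a quick way to check whether the failure is NPU-specific.
pipe = ov_genai.LLMPipeline(model_dir, "NPU")

# A healthy pipeline prints generated text here; no text at all would
# mirror the zeroed timing metrics mentioned in the issue.
print(pipe.generate("What is OpenVINO?", max_new_tokens=64))
```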

AI Summary: A user reports that `ollama.exe serve` produces no output when run in PowerShell on Windows, even after setting `GODEBUG=cgocheck=0` and running `setupvars.bat`. It still needs to be diagnosed why the command gives no feedback and whether the server starts at all.

Complexity: 2/5
good first issue
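
As a hypothetical diagnostic (not part of the repository or the issue), one way to rule out output being swallowed by the PowerShell session is to launch the same command from a small Python script with the reported GODEBUG setting and capture both streams:

```python
import os
import subprocess

# Run the same command as the reporter, with the GODEBUG value from the issue,
# and merge stderr into stdout so any output is captured regardless of stream.
env = dict(os.environ, GODEBUG="cgocheck=0")
proc = subprocess.Popen(
    ["ollama.exe", "serve"],
    env=env,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)

try:
    # A healthy server keeps running, so stop reading after a short window.
    out, _ = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.terminate()
    out, _ = proc.communicate()

print("exit code:", proc.returncode)
print(out or "<no output captured>")
```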
