🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.

audio deep-learning deepseek gemma glm hacktoberfest llm machine-learning model-hub natural-language-processing nlp pretrained-models python pytorch pytorch-transformers qwen speech-recognition transformer vlm
9 Open Issues Need Help · Last updated: Sep 4, 2025

AI Summary: The user encounters a `RuntimeError` stating "Unknown Model (mobilenetv5_300m_enc)" when attempting to load a Gemma 3n model with the latest `transformers` release (v4.53.0). The error suggests a mismatch in which an unexpected image model is resolved instead of the intended Gemma checkpoint, leading the user to ask whether Gemma 3n requires special configuration to load.

Complexity: 2/5
Good First Issue bug
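
A minimal reproduction sketch might look like the following; the checkpoint id and auto class are assumptions based on the report, not details taken from the issue:

```python
# Reproduction sketch: checkpoint id and auto class are assumptions, not from the issue.
import transformers
from transformers import AutoModel

print(transformers.__version__)  # the reporter was on v4.53.0

# The reported RuntimeError "Unknown Model (mobilenetv5_300m_enc)" is raised
# during from_pretrained, apparently while the checkpoint's image encoder is
# being resolved instead of the expected Gemma text model.
model = AutoModel.from_pretrained("google/gemma-3n-E2B-it")
```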

AI Summary: The user requests the addition of DINOv3 to AutoBackbone, noting that DINOv2 is already included. They suggest DINOv3 could directly inherit from DINOv2 for ease of implementation and user convenience.

Complexity: 2/5
Good First Issue Feature request
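
For reference, DINOv2 can already be loaded through AutoBackbone; a sketch of today's usage next to the requested DINOv3 call (the DINOv3 checkpoint id below is hypothetical) could look like:

```python
from transformers import AutoBackbone

# Works today: DINOv2 is already registered with AutoBackbone.
backbone = AutoBackbone.from_pretrained("facebook/dinov2-base")
print(backbone.out_features)  # defaults to the final stage

# Requested: the same call for a DINOv3 checkpoint (hypothetical id shown),
# presumably backed by a backbone class inheriting from the DINOv2 one.
# backbone = AutoBackbone.from_pretrained("facebook/dinov3-base")
```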

AI Summary: The task is to investigate and resolve a performance issue in the Hugging Face Transformers library: the text generation pipeline becomes significantly slower when a large list of `bad_words_ids` is supplied. The fix requires profiling the code to identify the bottleneck (inefficient looping, tensor access, or slow regex) and optimizing the relevant code path.

Complexity: 4/5
Good First Issue bug mps
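
One way to start profiling might be along these lines; the model, prompt, and synthetic bad-words list are illustrative assumptions, not details from the issue:

```python
# Profiling sketch for the bad_words_ids slowdown (model, prompt, and the
# synthetic bad-words list are illustrative assumptions).
import cProfile
import pstats

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Build an artificially large list of banned single-token sequences.
bad_words_ids = [[token_id] for token_id in range(1000, 6000)]

inputs = tokenizer("The quick brown fox", return_tensors="pt")

with cProfile.Profile() as profiler:
    model.generate(
        **inputs,
        max_new_tokens=20,
        bad_words_ids=bad_words_ids,  # the parameter the issue is about
    )

# Inspect where the time goes, e.g. inside the bad-words logits processing.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)
```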
