Open Issues Need Help
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.

AI Summary: The user encounters a `RuntimeError: Unknown Model (mobilenetv5_300m_enc)` when loading a Gemma 3n model with the latest `transformers` release (v4.53.0). The loader appears to resolve an unexpected image-encoder architecture instead of the Gemma text model the user intended, leading them to ask whether Gemma 3n requires special, hard-to-maintain configuration.
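Below is a minimal sketch of how this failure can surface; the checkpoint ID and loading call are assumptions, since the report's exact script is not shown. Gemma 3n is multimodal and its vision tower is instantiated through `timm`, so one plausible cause is a `timm` release that predates the `mobilenetv5_300m_enc` architecture.

```python
# Minimal reproduction sketch. The checkpoint ID below is an assumption,
# not taken from the issue report.
from transformers import AutoModel

model = AutoModel.from_pretrained("google/gemma-3n-E2B-it")
# Reported failure:
#   RuntimeError: Unknown Model (mobilenetv5_300m_enc)

# If an outdated timm is the cause, upgrading it may resolve the error:
#   pip install --upgrade timm
```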
AI Summary: The user requests the addition of DINOv3 to AutoBackbone, noting that DINOv2 is already included. They suggest DINOv3 could directly inherit from DINOv2 for ease of implementation and user convenience.
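A sketch of the behavior the request builds on: the DINOv2 call below already works through `AutoBackbone`, while the commented DINOv3 call is the proposed addition (that checkpoint ID is assumed for illustration).

```python
import torch
from transformers import AutoBackbone

# DINOv2 is already registered with AutoBackbone; by default the
# backbone returns the feature map of its last stage.
backbone = AutoBackbone.from_pretrained("facebook/dinov2-base")

pixel_values = torch.rand(1, 3, 224, 224)  # dummy image batch
outputs = backbone(pixel_values)
print([fm.shape for fm in outputs.feature_maps])

# The request is for DINOv3 checkpoints to work through the same entry
# point, e.g. (checkpoint ID assumed):
#   AutoBackbone.from_pretrained("facebook/dinov3-vitb16-pretrain-lvd1689m")
```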
AI Summary: The task is to investigate and resolve a performance issue in the Hugging Face Transformers library. The text generation pipeline is significantly slower when a large list of `bad_words_ids` is used. The solution requires profiling the code to identify the bottleneck (inefficient looping, tensor access, or slow regex) and optimizing the relevant section to improve performance.
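A rough timing harness for reproducing the slowdown, assuming a small GPT-2 pipeline; the model choice and list size are illustrative, not taken from the issue. The `bad_words_ids` constraint is enforced by a logits processor (`NoBadWordsLogitsProcessor`) at every decoding step, which makes a large list a natural place to start profiling.

```python
import time
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
tok = generator.tokenizer

# Build a deliberately large bad-words list to expose the slowdown.
bad_words_ids = [
    tok(f"badword{i}", add_special_tokens=False).input_ids for i in range(5000)
]

for ids in (None, bad_words_ids):
    start = time.perf_counter()
    generator("Once upon a time", max_new_tokens=20, bad_words_ids=ids)
    label = "with" if ids else "without"
    print(f"{label} bad_words_ids: {time.perf_counter() - start:.2f}s")
```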