OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview Please!
Hi mradermacher,
Could you please help quantize the model OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview into GGUF format?
Thanks a lot for your amazing work!
It's queued.
@mradermacher
InternVLChatModel models are still not being treated as vision models. This is starting to become an issue: the longer we wait to fix it, the more models we will have to redo for MMPROJ. I will also start forgetting which ones to requeue, so this really needs to be addressed soon. Luckily, in this case something with the vision component seems to be broken anyway, so it doesn't matter that much: https://huggingface.co/OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview/discussions/3
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#InternVL3_5-GPT-OSS-20B-A4B-Preview-GGUF for quants to appear.
@mradermacher InternVLChatModel models are still not being treated as vision models.
If there is a llama.cpp update I need to compile, pray tell. Otherwise, what can I do about it? I'm not in the business of teaching llama.cpp vision models :)
Unfortunately, mmproj extraction fails with this model, so it's not supported by current llama.cpp (although InternVLChatModel generally seems to work with other such models).
ValueError: Can not map tensor 'language_model.model.layers.0.input_layernorm.weight'
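The error points at a tensor-name mapping problem: InternVLChatModel wraps its language-model weights under a `language_model.` prefix, which a converter expecting plain `model.layers...` names cannot map. A minimal sketch of the kind of prefix-stripping workaround conversion scripts typically use (illustrative only; `remap_tensor_name` and the prefix list are hypothetical, not part of llama.cpp):

```python
# Illustrative sketch: InternVLChatModel stores its LLM weights under a
# "language_model." prefix, so a converter expecting bare "model.layers..."
# names fails with "Can not map tensor". Stripping the wrapper prefix
# before lookup is the usual style of workaround.
KNOWN_PREFIXES = ("language_model.", "vision_model.")  # assumed wrapper prefixes

def remap_tensor_name(name: str) -> str:
    """Strip a known multimodal wrapper prefix so the base mapper can resolve the tensor."""
    for prefix in KNOWN_PREFIXES:
        if name.startswith(prefix):
            return name[len(prefix):]
    return name

print(remap_tensor_name("language_model.model.layers.0.input_layernorm.weight"))
```

Whether llama.cpp's InternVL support handles this particular GPT-OSS-based variant the same way is exactly what seems to be failing here.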
Uhm, it does seem to have worked in the end, and the mmproj files should be up.