# ministral-3-14b-gguf
ministral-3-14b-gguf is a GGUF Q4_K_M (int4) quantized version of mistralai/Ministral-3-14B-Instruct-2512, providing a fast, memory-efficient inference option optimized for AI PCs.
Ministral 3 14B is the newest open-source 14B instruct release from Mistral.
## Model Description
- Developed by: mistralai
- Model type: ministral-3-14b-gguf
- Parameters: 14 billion
- Model Parent: mistralai/Ministral-3-14B-Instruct-2512
- Language(s) (NLP): English
- License: Apache 2.0
- Uses: General use
- Quantization: int4
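The int4 quantization listed above (Q4_K_M in GGUF terms) stores weights in small blocks of 4-bit integers with a shared per-block scale. The sketch below illustrates the general idea of blockwise symmetric 4-bit quantization; it is a simplified illustration, not llama.cpp's exact K-quant layout (which adds super-blocks and per-block minimums).

```python
import numpy as np

def quantize_q4_blocks(weights, block_size=32):
    """Blockwise symmetric 4-bit quantization: each block of `block_size`
    weights shares one float scale; values are stored as ints in [-8, 7]."""
    w = weights.reshape(-1, block_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid divide-by-zero on all-zero blocks
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from 4-bit codes and scales."""
    return (q * scale).reshape(-1)

# Round-trip a random weight tensor to see the quantization error.
rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
q, s = quantize_q4_blocks(w)
w_hat = dequantize(q, s)
max_err = float(np.abs(w - w_hat).max())
```

The per-block error is bounded by half the block's scale, which is why 4-bit quantization preserves model quality far better than a single global scale would.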
## Model tree for llmware/ministral-3-14b-gguf
- Base model: mistralai/Ministral-3-14B-Base-2512
- Quantized from: mistralai/Ministral-3-14B-Instruct-2512