- Pipeline: Text Generation (conversational)
- Libraries and formats: Transformers, GGUF, Safetensors, PyTorch, text-generation-inference
- Model: mistral, 7b, Merge, custom_code
- Quantization: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit precision
- Base models: mistralai/Mistral-7B-Instruct-v0.1, NousResearch/Yarn-Mistral-7b-64k
- Language: en
- Dataset: emozilla/yarn-train-tokenized-16k-mistral
- Paper: arxiv:2309.00071
Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1-GGUF / Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1.Q4_K_M.gguf
- SHA256: a854469f1a1f54338ad90a4cb6fa8fe6a5c98c1db99e37f07b9357ed1109dc07
- Pointer size: 135 Bytes
- Size of remote file: 4.37 GB
- Xet hash: 2e42c1ab40481120a2afa01772f7e6ef6e4cab67ec63c81b180e150f0ee000db
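As an integrity check, the downloaded file can be hashed locally and compared against the SHA256 listed above. The sketch below is illustrative rather than part of the model card: it assumes the `huggingface_hub` client, and the repository namespace in `REPO_ID` is a placeholder, since only the repository name is visible on this page.

```python
import hashlib

from huggingface_hub import hf_hub_download

# Placeholder namespace -- substitute the actual owner of this repository on the Hub.
REPO_ID = "your-namespace/Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1-GGUF"
FILENAME = "Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1.Q4_K_M.gguf"
# SHA256 listed for this file above.
EXPECTED_SHA256 = "a854469f1a1f54338ad90a4cb6fa8fe6a5c98c1db99e37f07b9357ed1109dc07"

# Download (or reuse the cached copy of) the ~4.37 GB GGUF file.
path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)

# Hash the file in 8 MiB chunks to avoid loading it all into memory.
sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(8 * 1024 * 1024), b""):
        sha256.update(chunk)

digest = sha256.hexdigest()
print("OK" if digest == EXPECTED_SHA256 else f"MISMATCH: {digest}")
```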
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, accelerating uploads and downloads.
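For completeness, here is a minimal sketch of running the quantized Q4_K_M file conversationally with llama-cpp-python, one common way to execute GGUF models; the model card does not prescribe this. The repository namespace is again a placeholder, and the context size and prompt are assumptions.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder namespace -- only the repository name is shown on this page.
model_path = hf_hub_download(
    repo_id="your-namespace/Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1-GGUF",
    filename="Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1.Q4_K_M.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=8192,       # the Yarn base model targets long (64k) contexts; raise as memory allows
    n_gpu_layers=-1,  # offload all layers when a GPU build of llama.cpp is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-paragraph summary of the YaRN method."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```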