Cerebras REAP Collection Sparse MoE models compressed with the REAP (Router-weighted Expert Activation Pruning) method • 19 items • Updated 1 day ago • 52
mmBERT: a modern multilingual encoder Collection mmBERT is trained on 3T tokens spanning over 1800 languages, achieving SoTA benchmark scores and exceptional low-resource performance • 16 items • Updated Sep 9 • 49
Llama 4 Collection Meta's new Llama 4 multimodal models, Scout & Maverick. Includes Dynamic GGUFs, 16-bit & Dynamic 4-bit uploads. Run & fine-tune them with Unsloth! • 15 items • Updated 8 days ago • 53
Reasoning datasets Collection Datasets with reasoning traces for math and code released by the community • 24 items • Updated May 19 • 175
Article: Improving Hugging Face Training Efficiency Through Packing with Flash Attention 2 • Aug 21, 2024 • 42
Qwen2.5 Collection Qwen2.5 language models, including pretrained and instruction-tuned models in 7 sizes: 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B. • 46 items • Updated Jul 21 • 667
The Big Benchmarks Collection Collection Gathering benchmark spaces on the Hub (beyond the Open LLM Leaderboard) • 13 items • Updated Nov 18, 2024 • 253
Standard-format-preference-dataset Collection We collect open-source datasets and process them into a standard format. • 14 items • Updated May 8, 2024 • 26
FP8 LLMs for vLLM Collection Accurate FP8 quantized models by Neural Magic, ready for use with vLLM! • 44 items • Updated Oct 17, 2024 • 76
Korean Datasets I've released so far. Collection A collection of the Korean datasets I have uploaded so far. • 8 items • Updated May 24, 2024 • 21