Active filters: thinking
Image-Text-to-Text • 4B • Updated • 670 • 12
Triangle104/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored-Q4_K_S-GGUF • 8B • Updated • 5 • 1
Triangle104/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored-Q4_K_M-GGUF • 8B • Updated • 11
Triangle104/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored-Q5_K_S-GGUF • 8B • Updated • 7
Triangle104/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored-Q5_K_M-GGUF • 8B • Updated • 13
Triangle104/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored-Q6_K-GGUF • 8B • Updated • 15
Triangle104/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored-Q8_0-GGUF • 8B • Updated • 11
DavidAU/Qwen3-4B-NEO-Imatrix-Max-GGUF • Text Generation • 4B • Updated • 590 • 7
DavidAU/Qwen3-14B-HORROR-Imatrix-Max-GGUF • Text Generation • 15B • Updated • 128 • 4
DavidAU/Qwen3-4B-Mishima-Imatrix-GGUF • Text Generation • 4B • Updated • 16 • 3
DavidAU/Qwen3-32B-128k-NEO-Imatrix-Max-GGUF • Text Generation • 33B • Updated • 364 • 5
DavidAU/Qwen3-30B-A1.5B-High-Speed • Text Generation • 31B • Updated • 35 • 11
ertghiu256/qwen3-4b-code-reasoning • Text Generation • Updated • 11 • 6
Disya/Qwen3-30B-A1.5B-High-Speed-Q4_K_M-GGUF • Text Generation • 31B • Updated • 51
DavidAU/Qwen3-30B-A6B-16-Extreme • Text Generation • 31B • Updated • 2 • 59
ThijsL202/Qwen3-30B-A7.5B-24-Grand-Brainstorm-Q8_0-GGUF • Text Generation • 31B • Updated • 2
mradermacher/Qwen3-30B-A7.5B-24-Grand-Brainstorm-GGUF • 31B • Updated • 95 • 4
mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF • 31B • Updated • 259 • 27
mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF • 31B • Updated • 261 • 3
mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF • 31B • Updated • 48 • 1
ertghiu256/qwen3-4b-code-reasoning-gguf • Text Generation • 4B • Updated • 125 • 1
DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored • Text Generation • 8B • Updated • 103 • 11
DavidAU/Qwen3-8B-64k-Josiefied-Uncensored-HORROR-Max-GGUF • Text Generation • 8B • Updated • 186 • 7
mradermacher/Qwen3-8B-320k-Context-10X-Massive-GGUF • 8B • Updated • 19
mradermacher/Qwen3-8B-320k-Context-10X-Massive-i1-GGUF • 8B • Updated • 56
mradermacher/Qwen3-8B-256k-Context-8X-Grand-GGUF • 8B • Updated • 45
mradermacher/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored-GGUF • 8B • Updated • 159 • 1
mradermacher/Qwen3-8B-96k-Context-3X-Medium-Plus-GGUF • 8B • Updated • 34
mradermacher/Qwen3-8B-192k-Context-6X-Larger-GGUF • 8B • Updated • 45
mradermacher/Qwen3-8B-64k-Context-2X-Medium-GGUF • 8B • Updated • 110
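For reference, a minimal sketch of how one of the GGUF repositories listed above could be pulled locally with the huggingface_hub Python client. This assumes the huggingface_hub package is installed and the repository is publicly accessible; the repo id is taken from the listing, and the exact .gguf filenames are discovered at runtime rather than assumed.

```python
# Minimal sketch, assuming the `huggingface_hub` package is installed and the
# repository is public; the repo id below is taken from the listing above.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "DavidAU/Qwen3-4B-NEO-Imatrix-Max-GGUF"

# Discover the .gguf quantizations in the repo instead of guessing filenames,
# since naming conventions differ between uploaders.
gguf_files = sorted(f for f in list_repo_files(repo_id) if f.endswith(".gguf"))
print(gguf_files)

# Download one quantization; hf_hub_download returns the local cache path.
if gguf_files:
    local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
    print(local_path)
```

The same pattern applies to any other repo in the list; only the repo_id string changes.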