# Fara-7B-AIO-GGUF
Fara-7B is Microsoft's first agentic small language model (SLM): a 7-billion-parameter multimodal decoder-only model built on Qwen2.5-VL-7B and specialized for computer-use tasks such as web automation, shopping, travel booking, restaurant reservations, and account workflows. It takes a user goal, browser screenshots, and the action history within a 128k-token context, then generates chain-of-thought reasoning followed by grounded tool calls (mouse movements, clicks, typing, scrolling, URL visits, and web searches), mimicking human-like desktop interaction without relying on accessibility trees.

Trained in just 2.5 days on 64 H100 GPUs using 145k synthetic trajectories from a multi-agent pipeline, Fara-7B achieves state-of-the-art results in its size class: 73.5% on WebVoyager and 38.4% on WebTailBench, outperforming peers such as UI-TARS-7B. It also incorporates safety safeguards, halting at critical points (e.g., purchases, entry of personal information) and refusing harmful tasks.[1][2][3][4]
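The agent loop described above emits one grounded tool call per step. A minimal sketch of what consuming such a call might look like, assuming a simple JSON schema (the field names `tool` and `args` here are illustrative; Fara-7B's actual tool-call format may differ):

```python
import json
from dataclasses import dataclass, field

# Hypothetical action schema for one computer-use agent step.
# Tool names ("click", "type_text", "scroll", "visit_url") are
# illustrative, not Fara-7B's confirmed vocabulary.
@dataclass
class AgentAction:
    tool: str                      # which action to take
    args: dict = field(default_factory=dict)  # grounded arguments, e.g. pixel coordinates

def parse_action(raw: str) -> AgentAction:
    """Parse one model-emitted tool call from a JSON string."""
    obj = json.loads(raw)
    return AgentAction(tool=obj["tool"], args=obj.get("args", {}))

# Example: a click action grounded in screenshot pixel coordinates.
step = parse_action('{"tool": "click", "args": {"x": 412, "y": 305}}')
print(step.tool, step.args["x"], step.args["y"])
```

A host harness would dispatch on `step.tool` to drive the browser, append the result to the action history, and feed a fresh screenshot back to the model.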
## Fara-7B [GGUF]
| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| Fara-7B.BF16.gguf | BF16 | 15.2 GB | Download |
| Fara-7B.F16.gguf | F16 | 15.2 GB | Download |
| Fara-7B.F32.gguf | F32 | 30.5 GB | Download |
| Fara-7B.IQ4_XS.gguf | IQ4_XS | 4.25 GB | Download |
| Fara-7B.Q2_K.gguf | Q2_K | 3.02 GB | Download |
| Fara-7B.Q3_K_L.gguf | Q3_K_L | 4.09 GB | Download |
| Fara-7B.Q3_K_M.gguf | Q3_K_M | 3.81 GB | Download |
| Fara-7B.Q3_K_S.gguf | Q3_K_S | 3.49 GB | Download |
| Fara-7B.Q4_K_M.gguf | Q4_K_M | 4.68 GB | Download |
| Fara-7B.Q4_K_S.gguf | Q4_K_S | 4.46 GB | Download |
| Fara-7B.Q5_K_M.gguf | Q5_K_M | 5.44 GB | Download |
| Fara-7B.Q5_K_S.gguf | Q5_K_S | 5.32 GB | Download |
| Fara-7B.Q6_K.gguf | Q6_K | 6.25 GB | Download |
| Fara-7B.Q8_0.gguf | Q8_0 | 8.1 GB | Download |
| Fara-7B.i1-IQ1_M.gguf | i1-IQ1_M | 2.04 GB | Download |
| Fara-7B.i1-IQ1_S.gguf | i1-IQ1_S | 1.9 GB | Download |
| Fara-7B.i1-IQ2_M.gguf | i1-IQ2_M | 2.78 GB | Download |
| Fara-7B.i1-IQ2_S.gguf | i1-IQ2_S | 2.6 GB | Download |
| Fara-7B.i1-IQ2_XS.gguf | i1-IQ2_XS | 2.47 GB | Download |
| Fara-7B.i1-IQ2_XXS.gguf | i1-IQ2_XXS | 2.27 GB | Download |
| Fara-7B.i1-IQ3_M.gguf | i1-IQ3_M | 3.57 GB | Download |
| Fara-7B.i1-IQ3_S.gguf | i1-IQ3_S | 3.5 GB | Download |
| Fara-7B.i1-IQ3_XS.gguf | i1-IQ3_XS | 3.35 GB | Download |
| Fara-7B.i1-IQ3_XXS.gguf | i1-IQ3_XXS | 3.11 GB | Download |
| Fara-7B.i1-IQ4_NL.gguf | i1-IQ4_NL | 4.44 GB | Download |
| Fara-7B.i1-IQ4_XS.gguf | i1-IQ4_XS | 4.22 GB | Download |
| Fara-7B.i1-Q2_K.gguf | i1-Q2_K | 3.02 GB | Download |
| Fara-7B.i1-Q2_K_S.gguf | i1-Q2_K_S | 2.83 GB | Download |
| Fara-7B.i1-Q3_K_L.gguf | i1-Q3_K_L | 4.09 GB | Download |
| Fara-7B.i1-Q3_K_M.gguf | i1-Q3_K_M | 3.81 GB | Download |
| Fara-7B.i1-Q3_K_S.gguf | i1-Q3_K_S | 3.49 GB | Download |
| Fara-7B.i1-Q4_0.gguf | i1-Q4_0 | 4.44 GB | Download |
| Fara-7B.i1-Q4_1.gguf | i1-Q4_1 | 4.87 GB | Download |
| Fara-7B.i1-Q4_K_M.gguf | i1-Q4_K_M | 4.68 GB | Download |
| Fara-7B.i1-Q4_K_S.gguf | i1-Q4_K_S | 4.46 GB | Download |
| Fara-7B.i1-Q5_K_M.gguf | i1-Q5_K_M | 5.44 GB | Download |
| Fara-7B.i1-Q5_K_S.gguf | i1-Q5_K_S | 5.32 GB | Download |
| Fara-7B.i1-Q6_K.gguf | i1-Q6_K | 6.25 GB | Download |
| Fara-7B.imatrix.gguf | imatrix | 4.56 MB | Download |
| Fara-7B.mmproj-bf16.gguf | mmproj-bf16 | 1.36 GB | Download |
| Fara-7B.mmproj-f16.gguf | mmproj-f16 | 1.35 GB | Download |
| Fara-7B.mmproj-f32.gguf | mmproj-f32 | 2.71 GB | Download |
| Fara-7B.mmproj-q8_0.gguf | mmproj-q8_0 | 856 MB | Download |
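To choose among the static quants above, a common rule of thumb is to take the largest file that fits in memory after leaving headroom for the mmproj file and the KV cache. A small helper, using the file sizes from the table (treated as GB; the headroom figure is an assumption, not a measured requirement):

```python
# Static-quant file sizes from the table above, in GB.
QUANT_SIZES_GB = {
    "Q2_K": 3.02, "Q3_K_S": 3.49, "Q3_K_M": 3.81, "Q3_K_L": 4.09,
    "IQ4_XS": 4.25, "Q4_K_S": 4.46, "Q4_K_M": 4.68,
    "Q5_K_S": 5.32, "Q5_K_M": 5.44, "Q6_K": 6.25, "Q8_0": 8.1,
}

def pick_quant(ram_gb: float, headroom_gb: float = 2.0):
    """Largest quant whose weights fit in ram_gb minus headroom.

    headroom_gb is a rough allowance for the mmproj file, KV cache,
    and runtime overhead; tune it for your context length.
    """
    budget = ram_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(8.0))   # 8 GB RAM, 2 GB headroom -> 6 GB weight budget
```

With 8 GB of RAM this selects Q5_K_M (5.44 GB); with too little memory it returns `None` rather than recommending an ill-fitting file.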
## Quants Usage

(Sorted by size, not necessarily by quality. IQ quants are often preferable to similar-sized non-IQ quants.)

A handy graph by ikawrakow compares some lower-quality quant types (lower is better); the graph itself is not reproduced here.
## Model tree for prithivMLmods/Fara-7B-AIO-GGUF

Base model: microsoft/Fara-7B