danielhanchen committed on
Commit d2e04f2 · verified · 1 Parent(s): 766d784

Add files using upload-large-folder tool

Files changed (50)
  1. .gitattributes +2 -0
  2. README.md +398 -0
  3. consolidated-00001-of-00272.safetensors +3 -0
  4. consolidated-00026-of-00272.safetensors +3 -0
  5. consolidated-00039-of-00272.safetensors +3 -0
  6. consolidated-00049-of-00272.safetensors +3 -0
  7. consolidated-00055-of-00272.safetensors +3 -0
  8. consolidated-00060-of-00272.safetensors +3 -0
  9. consolidated-00063-of-00272.safetensors +3 -0
  10. consolidated-00065-of-00272.safetensors +3 -0
  11. consolidated-00067-of-00272.safetensors +3 -0
  12. consolidated-00070-of-00272.safetensors +3 -0
  13. consolidated-00071-of-00272.safetensors +3 -0
  14. consolidated-00073-of-00272.safetensors +3 -0
  15. consolidated-00074-of-00272.safetensors +3 -0
  16. consolidated-00085-of-00272.safetensors +3 -0
  17. consolidated-00087-of-00272.safetensors +3 -0
  18. consolidated-00088-of-00272.safetensors +3 -0
  19. consolidated-00092-of-00272.safetensors +3 -0
  20. consolidated-00093-of-00272.safetensors +3 -0
  21. consolidated-00095-of-00272.safetensors +3 -0
  22. consolidated-00104-of-00272.safetensors +3 -0
  23. consolidated-00116-of-00272.safetensors +3 -0
  24. consolidated-00120-of-00272.safetensors +3 -0
  25. consolidated-00121-of-00272.safetensors +3 -0
  26. consolidated-00124-of-00272.safetensors +3 -0
  27. consolidated-00129-of-00272.safetensors +3 -0
  28. consolidated-00130-of-00272.safetensors +3 -0
  29. consolidated-00135-of-00272.safetensors +3 -0
  30. consolidated-00138-of-00272.safetensors +3 -0
  31. consolidated-00142-of-00272.safetensors +3 -0
  32. consolidated-00143-of-00272.safetensors +3 -0
  33. consolidated-00146-of-00272.safetensors +3 -0
  34. consolidated-00168-of-00272.safetensors +3 -0
  35. consolidated-00173-of-00272.safetensors +3 -0
  36. consolidated-00179-of-00272.safetensors +3 -0
  37. consolidated-00183-of-00272.safetensors +3 -0
  38. consolidated-00185-of-00272.safetensors +3 -0
  39. consolidated-00191-of-00272.safetensors +3 -0
  40. consolidated-00194-of-00272.safetensors +3 -0
  41. consolidated-00198-of-00272.safetensors +3 -0
  42. consolidated-00202-of-00272.safetensors +3 -0
  43. consolidated-00205-of-00272.safetensors +3 -0
  44. consolidated.safetensors.index.json +0 -0
  45. params.json +61 -0
  46. processor_config.json +42 -0
  47. special_tokens_map.json +0 -0
  48. tekken.json +3 -0
  49. tokenizer.json +3 -0
  50. tokenizer_config.json +0 -0
.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tekken.json filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,398 @@
---
library_name: vllm
language:
- en
- fr
- es
- de
- it
- pt
- nl
- zh
- ja
- ko
- ar
license: apache-2.0
inference: false
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- mistral-common
---

# Mistral Large 3 675B Base 2512
From our family of large models, **Mistral Large 3** is a state-of-the-art general-purpose **Multimodal granular Mixture-of-Experts** model with **41B active parameters** and **675B total parameters**, trained from scratch on 3,000 H200 GPUs.

This model is the base pre-trained version, not fine-tuned for instruction or reasoning tasks, making it ideal for custom post-training.
Designed for reliability and long-context comprehension, it is engineered for production-grade assistants, retrieval-augmented systems, scientific workloads, and complex enterprise workflows.

Mistral Large 3 Instruct is deployable on-premises in:
- [FP8](https://huggingface.co/mistralai/Mistral-Large-3-675B-Instruct-2512) on a single node of B200s or H200s.
- [NVFP4](https://huggingface.co/mistralai/Mistral-Large-3-675B-Instruct-2512-NVFP4) on a single node of H100s or A100s.

## Key Features
Mistral Large 3 consists of two main architectural components:
- **A Granular MoE Language Model with 673B params and 39B active**
- **A 2.5B Vision Encoder**

The Mistral Large 3 Base model offers the following capabilities:
- **Vision**: Enables the model to analyze images and provide insights based on visual content, in addition to text.
- **Multilingual**: Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic.
- **Frontier**: Delivers best-in-class performance.
- **Apache 2.0 License**: Open-source license allowing usage and modification for both commercial and non-commercial purposes.
- **Large Context Window**: Supports a 256k context window.

## Use Cases
With powerful long-context performance and stable, consistent cross-domain behavior, Mistral Large 3 is perfect for:
- Long Document Understanding
- Powerful Daily-Driver AI Assistants
- State-of-the-Art Agentic and Tool-Use Capabilities
- Enterprise Knowledge Work
- General Coding Assistance

And enterprise-grade use cases requiring frontier capabilities.

## Recommended Settings

We recommend deploying Large 3 in a client-server configuration with the following best practices:

- **System Prompt**: Define a clear environment and use case, including guidance on how to effectively leverage tools in agentic systems.
- **Sampling Parameters**: Use a temperature below 0.1 for daily-driver and production environments; higher temperatures may be explored for creative use cases, and developers are encouraged to experiment with alternative settings.
- **Tools**: Keep the set of tools well-defined and limit their number to the minimum required for the use case; avoid overloading the model with an excessive number of tools.
- **Vision**: When deploying with vision capabilities, we recommend maintaining an aspect ratio close to 1:1 (width-to-height) for images. Avoid overly thin or wide images; crop them as needed to ensure optimal performance.

### Known Issues / Limitations

- **Not a dedicated reasoning model**: Dedicated reasoning models can outperform Mistral Large 3 in strict reasoning use cases.
- **Behind vision-first models in multimodal tasks**: Mistral Large 3 can lag behind models optimized for vision tasks and use cases.
- **Complex deployment**: Due to its large size and architecture, the model can be challenging to deploy efficiently with constrained resources or at scale.

## Benchmark Results

We compare Mistral Large 3 to similarly sized models.

### Text

### Vision

## Instruct Usage

The Instruct model can be used with the following frameworks:
- [`vllm`](https://github.com/vllm-project/vllm): See [here](#vllm)

### vLLM

We recommend using this model with [vLLM](https://github.com/vllm-project/vllm).

#### Installation

Make sure to install [`vLLM >= 0.12.0`](https://github.com/vllm-project/vllm/releases/tag/v0.12.0):

```sh
pip install vllm --upgrade
```

Doing so should automatically install [`mistral_common >= 1.8.6`](https://github.com/mistralai/mistral-common/releases/tag/v1.8.6).

To check:
```sh
python -c "import mistral_common; print(mistral_common.__version__)"
```

You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).

#### Serve

The Mistral Large 3 Instruct FP8 format can be used on one 8xH200 node. We recommend this format if you plan on fine-tuning, as it can be more precise than NVFP4 in some situations.

A simple launch command is:

```bash
vllm serve mistralai/Mistral-Large-3-675B-Instruct-2512 \
  --tensor-parallel-size 8 \
  --enable-auto-tool-choice --tool-call-parser mistral
```

Key parameter notes:

* `--enable-auto-tool-choice`: Required when enabling tool usage.
* `--tool-call-parser mistral`: Required when enabling tool usage.


Additional flags:

* You can set `--max-model-len` to save memory. By default it is set to `262144`, which is quite large and not necessary for most scenarios.
* You can set `--max-num-batched-tokens` to balance throughput and latency: a higher value means higher throughput but also higher latency.
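Putting the flags above together, a fuller launch command could look like the following sketch; the `--max-model-len` and `--max-num-batched-tokens` values here are illustrative starting points chosen by us, not recommendations from this card:

```bash
# Illustrative launch: tool calling enabled, context capped at 128k to save
# memory, and a hypothetical batched-token budget for throughput/latency balance.
vllm serve mistralai/Mistral-Large-3-675B-Instruct-2512 \
  --tensor-parallel-size 8 \
  --enable-auto-tool-choice --tool-call-parser mistral \
  --max-model-len 131072 \
  --max-num-batched-tokens 8192
```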

#### Usage of the model

Here we assume that the model `mistralai/Mistral-Large-3-675B-Instruct-2512` is being served and reachable at `localhost` on port `8000`, the default for vLLM.

<details>
<summary>Vision Reasoning</summary>

Let's see if Mistral Large 3 knows when to pick a fight!

```python
from datetime import datetime, timedelta

from openai import OpenAI
from huggingface_hub import hf_hub_download

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

TEMP = 0.15
MAX_TOK = 262144

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    today = datetime.today().strftime("%Y-%m-%d")
    yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
    model_name = repo_id.split("/")[-1]
    return system_prompt.format(name=model_name, today=today, yesterday=yesterday)


SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
            },
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]


response = client.chat.completions.create(
    model=model,
    messages=messages,
    temperature=TEMP,
    max_tokens=MAX_TOK,
)

print(response.choices[0].message.content)
```
</details>

<details>
<summary>Function Calling</summary>

Let's solve some equations with a simple Python calculator tool.

```python
import json
from openai import OpenAI
from huggingface_hub import hf_hub_download

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

TEMP = 0.15
MAX_TOK = 262144

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt


SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

image_url = "https://math-coaching.com/img/fiche/46/expressions-mathematiques.jpg"


def my_calculator(expression: str) -> str:
    # Note: eval() executes arbitrary code; only use it on trusted input.
    return str(eval(expression))


tools = [
    {
        "type": "function",
        "function": {
            "name": "my_calculator",
            "description": "A calculator that can evaluate a mathematical equation and compute its results.",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate.",
                    },
                },
                "required": ["expression"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "rewrite",
            "description": "Rewrite a given text for improved clarity",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {
                        "type": "string",
                        "description": "The input text to rewrite",
                    }
                },
            },
        },
    },
]

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Thanks to your calculator, compute the results for the equations that involve numbers displayed in the image.",
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": image_url,
                },
            },
        ],
    },
]

response = client.chat.completions.create(
    model=model,
    messages=messages,
    temperature=TEMP,
    max_tokens=MAX_TOK,
    tools=tools,
    tool_choice="auto",
)

tool_calls = response.choices[0].message.tool_calls

results = []
for tool_call in tool_calls:
    function_name = tool_call.function.name
    function_args = tool_call.function.arguments
    if function_name == "my_calculator":
        result = my_calculator(**json.loads(function_args))
        results.append(result)

messages.append({"role": "assistant", "tool_calls": tool_calls})
for tool_call, result in zip(tool_calls, results):
    messages.append(
        {
            "role": "tool",
            "tool_call_id": tool_call.id,
            "name": tool_call.function.name,
            "content": result,
        }
    )


response = client.chat.completions.create(
    model=model,
    messages=messages,
    temperature=TEMP,
    max_tokens=MAX_TOK,
)

print(response.choices[0].message.content)
```

</details>

<details>
<summary>Text-Only Request</summary>

Mistral Large 3 can follow your instructions down to the letter.

```python
from openai import OpenAI
from huggingface_hub import hf_hub_download

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

TEMP = 0.15
MAX_TOK = 262144

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt


SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": "Write me a sentence where every word starts with the next letter in the alphabet - start with 'a' and end with 'z'.",
    },
]

response = client.chat.completions.create(
    model=model,
    messages=messages,
    temperature=TEMP,
    max_tokens=MAX_TOK,
)

assistant_message = response.choices[0].message.content
print(assistant_message)
```

</details>

## License

This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.txt).

*You must not use this model in a manner that infringes, misappropriates, or otherwise violates any third party’s rights, including intellectual property rights.*
consolidated-00001-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:45bb281671a3022404fe8418853526244e5b474d0163f12c069a426f6590de12
size 4998260600

consolidated-00026-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f3461b8fb74fc3e9a8330d95f72781d825a997a4d63a0ccfd162a6b706418d10
size 4991230984

consolidated-00039-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:82a885654b0e883536da58efe83f777ad68039bfff7a6c789af4d6ed26ac4859
size 4991231056

consolidated-00049-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e2cbfbe2636e6c79f162d8bc4c1fbce78930d524cde57e75738939e91093d1e4
size 4991231008

consolidated-00055-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b3585c918b65ac837e531ad2cbcdd7439831b9f9f68a43a9c0121e2bf30cfd62
size 4991230984

consolidated-00060-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:801acc32e87b38aac7d33f220cf7d5f516340c13c9187654fc918868b7a55a9e
size 4991230984

consolidated-00063-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e8a708211d870050f147839444670e7eb201562dd2edba994b233f98e4b9ae40
size 4991231008

consolidated-00065-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe9fc37799958f048726a67417d775cc707491cd288a47adfa5b48fff2356e22
size 4991230984

consolidated-00067-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:864bdbb30eccd5f16de465b06cc86a1d8d5f9ff29ec782f57c58e2050da8ccfb
size 4991231064

consolidated-00070-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0cc49d71f8a94841e3588f84408c4a9854de7c1907244dbc9306ae787d802aee
size 4991230984

consolidated-00071-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:db46e1e666edae0a56d4819f2ecf0b6e0fb14a647e22685b2f5668e1da7c18c4
size 4956267904

consolidated-00073-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b04e9ba1cc78cdff353ad31d7c3bbfd9067d91bf557cc47cbb5223cfe8556d4f
size 4991230984

consolidated-00074-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:24a2a56e68eeba5b7df92a905c887b5e6949b4f47c056b5d4771ab1a7b2086d2
size 4991230984

consolidated-00085-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d1f6bb9ba5e8245bd582c30d322a28f80c342aad7c67622be10b1c071d747e5e
size 4956267904

consolidated-00087-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:410bea1b852f7f9f54b97dd7116eaed86dcc5dc6392b7e4308b196988dc29b5d
size 4991230984

consolidated-00088-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a51dbae2141b8461f3f7569304cc8f5e4649daab2a37365861bd79a3978e48a4
size 4991230984

consolidated-00092-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b762f25569bc9b559ed6adf52af69758c8feddc6719e499bed888d9dcb94d8f3
size 4991230984

consolidated-00093-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bad5c47dd59c9693867c9ff4a1688446011e11698175ae667f16c042e089ffc3
size 4991230984

consolidated-00095-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:39876f982945ca39dc32fb667de480dfc949e4ba05c439302def967ebd9c8f62
size 4991230960

consolidated-00104-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ba5e4546672eaa212486c25cb0d7cb65f290cd5af668f45ab75c5eb2f146f8b
size 4991231064

consolidated-00116-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05ecdfe6c40132a4cf137087d977617696c053d3033cedb996eabca0a36c8b5f
size 4991230984

consolidated-00120-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e912a0266c90f0664c9541ab9be48064bea6f8ce68f3e6bb096bad78781c2e4b
size 4991230984

consolidated-00121-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a9984d5e37e7a61e62612194fd6fd6e73a39f3d23c00209fce66671f43a6ed46
size 4991230984

consolidated-00124-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:234ff23d50be49b3a1b82900d8d94cb3343cff1249f800890ad7d14ad0671192
size 4991230984

consolidated-00129-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2089049e913bf24382cfa4fa69635044b4b09623e506376eab1b3c6c94b0e2bf
size 4991230984

consolidated-00130-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6dabff442b93fc22eb7f2638bda42781cb9abea0081c24848b241b7c5b7c701d
size 4991230984

consolidated-00135-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:10ceac4d8d7d03ecadad8622f917fb38cad050effe681c758a1f9d9bf1b08377
size 4991230984

consolidated-00138-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ef59ca3c783b6130df4f7b98d4ca627e8fd5f05d0b76296fba289809c9956831
size 4991230984

consolidated-00142-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cef80bb51b1692a6a716a99b56d133e8b97b8df59dbc5b04ffa5efbbaf3f2ae5
size 4991230992

consolidated-00143-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4e9151c0647c4dfeef79f36695d57f0c03e79d0bc7a5308186cffe047c39f4df
size 4991230984

consolidated-00146-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aef66297a86d9edda2b0a20f735219341e8b1baeec292b05cc0f1c2bae8a56d5
size 4991230960

consolidated-00168-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c810e458cd054654126255116534eaec62e926e459c86e0fc62e4c23788b84e
size 4956267872

consolidated-00173-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0af5f8fb32487891d67fe710d7ea33c26cfedb45e19816070f3b55e613b95e8c
size 4956267904

consolidated-00179-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:312a69eb22f4341647d231b369d2f61bad4d42f727002d7265370ae4ba19df02
size 4991231000

consolidated-00183-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c99a69c024d96799bcbba281d8bd75da097a57c5c6d64b3cb67b9c6f63bfca09
size 4991231056

consolidated-00185-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d29e565f1d3f5112febffc8e565436b3ebfc64b3169311caf16fcfc1783b8d8a
size 4991230984

consolidated-00191-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7fe6de41da75a7ee4cc46fb144c91057030f659a75610aae3b7b2c450b609012
size 4956267856

consolidated-00194-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8affc297d760af8a576e2565502082bd87801338e443c97ecb7f1b6baf336dd0
size 4991230984

consolidated-00198-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7fe3017dfbf92077f2e9ed09469909652a334ff5bcfab142bd383e85824f263
size 4991230904

consolidated-00202-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:60888201e2d0b2491e354125c731ca9533657918e1aa632dd42ae7ddb6574af2
size 4991231008

consolidated-00205-of-00272.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c265a514403d35a5d2dd762ec9df351d75ca251fb16df17c1d75c0b4d17a117c
size 4956267864
consolidated.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
params.json ADDED
@@ -0,0 +1,61 @@
{
  "dim": 7168,
  "n_layers": 61,
  "head_dim": 192,
  "hidden_dim": 16384,
  "n_heads": 128,
  "n_kv_heads": 128,
  "rope_theta": 10000.0,
  "norm_eps": 1e-06,
  "vocab_size": 131072,
  "tied_embeddings": false,
  "max_position_embeddings": 294912,
  "max_seq_len": 262144,
  "llama_4_scaling": {
    "original_max_position_embeddings": 8192,
    "beta": 0.1
  },
  "q_lora_rank": 1536,
  "qk_rope_head_dim": 64,
  "qk_nope_head_dim": 128,
  "kv_lora_rank": 512,
  "v_head_dim": 128,
  "yarn": {
    "original_max_position_embeddings": 8192,
    "factor": 36,
    "apply_scale": false,
    "beta": 32,
    "alpha": 1
  },
  "moe": {
    "expert_parallel": 1,
    "expert_model_parallel": 1,
    "route_every_n": 1,
    "first_k_dense_replace": 3,
    "num_experts": 128,
    "num_experts_per_tok": 4,
    "num_expert_groups": 1,
    "num_expert_groups_per_tok": 1,
    "routed_scale": 1.0,
    "expert_hidden_dim": 4096,
    "num_shared_experts": 1
  },
  "vision_encoder": {
    "image_token_id": 10,
    "image_break_token_id": 12,
    "image_end_token_id": 13,
    "intermediate_size": 8192,
    "num_hidden_layers": 48,
    "num_attention_heads": 16,
    "mm_projector_id": "patch_merge",
    "spatial_merge_size": 2,
    "hidden_size": 1664,
    "num_channels": 3,
    "image_size": 1540,
    "max_image_size": 1540,
    "patch_size": 14,
    "rope_theta": 10000.0,
    "add_pre_mm_projector_layer_norm": true,
    "adapter_bias": false
  }
}
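As a quick illustration of how a few of these fields fit together (our own sanity check, not part of the upstream config): the per-head query dimension splits into a RoPE part and a non-RoPE part, each token activates its routed experts plus the shared expert, and the advertised 256k context matches `max_seq_len`.

```python
# Hypothetical sanity checks derived from the params.json above.
params = {
    "head_dim": 192,
    "qk_rope_head_dim": 64,
    "qk_nope_head_dim": 128,
    "num_experts_per_tok": 4,
    "num_shared_experts": 1,
    "max_seq_len": 262144,
}

# The per-head query dimension is the sum of its RoPE and non-RoPE parts.
assert params["qk_rope_head_dim"] + params["qk_nope_head_dim"] == params["head_dim"]

# Assuming the shared expert is always active (typical for this MoE style),
# each token activates 4 routed experts plus 1 shared expert.
active_experts = params["num_experts_per_tok"] + params["num_shared_experts"]
print(active_experts)  # → 5

# The advertised 256k context window corresponds to max_seq_len = 2**18.
assert params["max_seq_len"] == 2**18
```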
processor_config.json ADDED
@@ -0,0 +1,42 @@
{
  "image_break_token": "[IMG_BREAK]",
  "image_end_token": "[IMG_END]",
  "image_processor": {
    "crop_size": null,
    "data_format": "channels_first",
    "device": null,
    "disable_grouping": null,
    "do_center_crop": null,
    "do_convert_rgb": true,
    "do_normalize": true,
    "do_pad": null,
    "do_rescale": true,
    "do_resize": true,
    "image_mean": [
      0.48145466,
      0.4578275,
      0.40821073
    ],
    "image_processor_type": "PixtralImageProcessorFast",
    "image_seq_length": null,
    "image_std": [
      0.26862954,
      0.26130258,
      0.27577711
    ],
    "input_data_format": null,
    "pad_size": null,
    "patch_size": 14,
    "processor_class": "PixtralProcessor",
    "resample": 3,
    "rescale_factor": 0.00392156862745098,
    "return_tensors": null,
    "size": {
      "longest_edge": 1540
    }
  },
  "image_token": "[IMG]",
  "patch_size": 14,
  "processor_class": "PixtralProcessor",
  "spatial_merge_size": 2
}
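To illustrate what this processor config implies, here is a sketch using only values from the file above (it mimics, but does not reproduce, the actual `PixtralImageProcessorFast` implementation): pixel values are rescaled from [0, 255] by `rescale_factor` and then normalized per channel with `image_mean` and `image_std`.

```python
# Sketch of the rescale + normalize steps implied by processor_config.json.
RESCALE_FACTOR = 0.00392156862745098  # == 1/255
IMAGE_MEAN = [0.48145466, 0.4578275, 0.40821073]
IMAGE_STD = [0.26862954, 0.26130258, 0.27577711]

def normalize_pixel(value: int, channel: int) -> float:
    """Rescale a 0-255 pixel value to [0, 1], then normalize it per channel."""
    scaled = value * RESCALE_FACTOR
    return (scaled - IMAGE_MEAN[channel]) / IMAGE_STD[channel]

# The config's rescale_factor is exactly 1/255.
assert abs(RESCALE_FACTOR - 1 / 255) < 1e-15

# A mid-gray pixel (128) in the red channel lands near zero after normalization.
print(round(normalize_pixel(128, 0), 3))  # → 0.076
```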
special_tokens_map.json ADDED
The diff for this file is too large to render. See raw diff
 
tekken.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e29d19ea32eb7e26e6c0572d57cb7f9eca0f4420e0e0fe6ae1cf3be94da1c0d6
size 16753777

tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:577575622324b2e099e2648be26bdeb5e5815ffe66d7004e9e3ddbf421db6bf1
size 17078110
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff