Model Overview

This is an unofficial fork of nvidia/NVIDIA-Nemotron-Parse-v1.2 with updated modeling code and documentation only. Model weights are identical to the official release. The custom code (hf_nemotron_parse_modeling.py, etc.) has been updated to work with transformers>=4.57 (DynamicCache API changes). For the official model, license details, and support, refer to the original NVIDIA repository.

Description

NVIDIA Nemotron Parse v1.2 is designed to understand document semantics and extract text and table elements with spatial grounding. Given an image, NVIDIA Nemotron Parse v1.2 produces structured annotations, including formatted text, bounding boxes, and the corresponding semantic classes, ordered according to the document's reading flow. It overcomes the shortcomings of traditional OCR technologies, which struggle with complex, structurally variable document layouts, and helps transform unstructured documents into actionable, machine-usable representations. Downstream benefits include increasing the availability of training data for Large Language Models (LLMs), improving the accuracy of extractor, curator, retriever, and agentic AI applications, and enhancing document understanding pipelines.

This model is ready for commercial use.

Quick Start

Install dependencies in your environment

Install from the raw requirements.txt URL:

URL="https://hf.co/BEE-spoke-data/NVIDIA-Nemotron-Parse-v1.2/raw/main/requirements.txt"
pip install -r $URL
# note: this includes vllm

Or install the core dependencies individually:

pip install albumentations beautifulsoup4 einops numpy opencv-python Pillow torch torchvision open_clip_torch "transformers<5.0.0" timm
# note: core deps, no vllm

If dependency resolution fails, try installing vllm and open_clip_torch separately first, then install the rest (removing them from the command above).

Usage example

import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor, AutoTokenizer, GenerationConfig

# Load model and processor
model_path = "BEE-spoke-data/NVIDIA-Nemotron-Parse-v1.2"  # Or use a local path

model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    device_map="auto",
    dtype=torch.bfloat16,
).eval()
tokenizer = AutoTokenizer.from_pretrained(model_path)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

# Load image
image = Image.open("path/to/your/image.jpg")
task_prompt = "</s><s><predict_bbox><predict_classes><output_markdown><predict_no_text_in_pic>"
# task_prompt = "</s><s><predict_bbox><predict_classes><output_markdown><predict_text_in_pic>"

# Process image
inputs = processor(images=[image], text=task_prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

generation_config = GenerationConfig.from_pretrained(model_path, trust_remote_code=True)
# Generate text
outputs = model.generate(**inputs, generation_config=generation_config)

# Decode the generated text
generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0]

Accelerator support: The HuggingFace code path works on any device supported by device_map="auto" — NVIDIA GPUs (CUDA), Apple Silicon (MPS), and CPU. vLLM requires CUDA.

Postprocessing

Post-processing is available directly via the processor — no extra imports needed:

from PIL import ImageDraw

# Full pipeline: parse output, map bboxes to original image coords, clean text
results = processor.post_process_generation(
    generated_text,
    image_size=(image.width, image.height),  # optional, for pixel-coord bboxes
    text_format="markdown",   # markdown | plain
    table_format="latex",     # latex | HTML | markdown
)

for elem in results:
    print(elem["class"], ":", elem["text"])

# Draw bounding boxes on the original image
draw = ImageDraw.Draw(image)
for elem in results:
    draw.rectangle(elem["bbox_original"], outline="red")

Each element in results is a dict with keys "class", "bbox" (normalised), "text" (post-processed), and "bbox_original" (pixel coordinates, present when image_size is provided).
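As a sketch of working with this structure (assuming only the keys described above), you can filter the elements by class and serialise them for a downstream pipeline; the `results` list here is a hypothetical stand-in for real model output:

```python
import json

# Hypothetical output shaped like processor.post_process_generation's result:
# a list of dicts with "class", "bbox", and "text" keys ("bbox_original"
# additionally appears when image_size was passed).
results = [
    {"class": "Title", "bbox": [0.1, 0.05, 0.9, 0.12], "text": "# Quarterly Report"},
    {"class": "Table", "bbox": [0.1, 0.3, 0.9, 0.6], "text": "| a | b |\n|---|---|\n| 1 | 2 |"},
    {"class": "Picture", "bbox": [0.1, 0.7, 0.5, 0.9], "text": ""},
]

# Group by semantic class and drop elements that carry no text
tables = [e for e in results if e["class"] == "Table"]
non_empty = [e for e in results if e["text"].strip()]

# Serialise the textual elements for downstream consumers
payload = json.dumps(non_empty, ensure_ascii=False, indent=2)
print(len(tables), len(non_empty))
```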

The individual building blocks are also available as static methods if needed:

classes, bboxes, texts = processor.extract_classes_bboxes(generated_text)
pixel_bbox = processor.transform_bbox_to_original(bbox, image.width, image.height)
clean_text = processor.postprocess_text(text, cls="Table", table_format="markdown")

End-to-end example: OCR an image to a markdown file

A complete, copy-pasteable script that loads an image, runs OCR, and writes the extracted text to a .md file:

from pathlib import Path

import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor, GenerationConfig

model_path = "BEE-spoke-data/NVIDIA-Nemotron-Parse-v1.2"  # or path to local dir
image_path = Path("path/to/your/document.png")
assert image_path.is_file(), f"Cannot find {str(image_path)}"

# 1. Load model & processor
model = AutoModel.from_pretrained(
    model_path, trust_remote_code=True, device_map="auto", dtype=torch.bfloat16,
).eval()
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
generation_config = GenerationConfig.from_pretrained(model_path, trust_remote_code=True)

# 2. Preprocess image
image = Image.open(image_path)
inputs = processor(
    images=[image],
    text="</s><s><predict_bbox><predict_classes><output_markdown><predict_no_text_in_pic>",
    return_tensors="pt",
    add_special_tokens=False,
).to(model.device)

# 3. Generate
with torch.inference_mode():
    output_ids = model.generate(**inputs, generation_config=generation_config)
raw_text = processor.batch_decode(output_ids, skip_special_tokens=True)[0]

# 4. Post-process: extract each element's text in reading order
results = processor.post_process_generation(
    raw_text,
    image_size=(image.width, image.height),
    text_format="markdown",
    table_format="markdown",
)

# 5. Combine all elements into a single markdown string and save
sections = []
for elem in results:
    text = elem["text"].strip()
    if not text:
        continue  # skip empty elements (e.g. pictures)
    cls = elem["class"]
    if cls == "Title":
        # model may already include "# " — strip to avoid duplication
        sections.append(f"# {text.lstrip('#').strip().replace('<br>', chr(10))}")
    elif cls in ("Section-header", "Section"):
        sections.append(f"## {text}")
    elif cls == "Caption":
        sections.append(f"*{text}*")
    else:
        sections.append(text)

output_path = image_path.with_suffix(".md")
output_path.write_text("\n\n".join(sections), encoding="utf-8")
print(f"Saved {len(sections)} elements to {output_path}")

Inference with vLLM

Nemotron-Parse-v1.2 is supported on the vLLM main branch and is included in the vllm/vllm-openai:v0.14.1 Docker image.

Note: when running on A100/A10 GPUs, we recommend launching vllm serve with --attention-backend=TRITON_ATTN.

Install the following additional dependencies, then follow the vLLM inference examples below:

pip install albumentations timm open_clip_torch

vLLM inference examples

Option 1: end-to-end python inference

from vllm import LLM, SamplingParams
from PIL import Image

def main():
    sampling_params = SamplingParams(
        temperature=0,
        top_k=1,
        repetition_penalty=1.1,
        max_tokens=8800, # leave room for chat template(s)
        skip_special_tokens=False,
    )

    llm = LLM(
        model="BEE-spoke-data/NVIDIA-Nemotron-Parse-v1.2",
        max_num_seqs=64,
        limit_mm_per_prompt={"image": 1},
        dtype="bfloat16",
        trust_remote_code=True,
    )

    image = Image.open("<YOUR-IMAGE-PATH>")

    prompts = [
        {  # Implicit prompt
            "prompt": "</s><s><predict_bbox><predict_classes><output_markdown><predict_no_text_in_pic>",
            "multi_modal_data": {
                "image": image
            },
        },
        {  # Explicit encoder/decoder prompt
            "encoder_prompt": {
                "prompt": "",
                "multi_modal_data": {
                    "image": image
                },
            },
            "decoder_prompt": "</s><s><predict_bbox><predict_classes><output_markdown><predict_no_text_in_pic>",
        },
    ]

    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Decoder prompt: {prompt!r}, Generated text: {generated_text!r}")

if __name__ == "__main__":
    main()

Option 2: vllm serve

Alternatively, you can start a vllm server as:

vllm serve BEE-spoke-data/NVIDIA-Nemotron-Parse-v1.2 \
    --dtype bfloat16 \
    --max-num-seqs 8 \
    --limit-mm-per-prompt '{"image": 1}' \
    --trust-remote-code \
    --port 8000 \
    --chat-template chat_template.jinja

with the chat_template.jinja provided in this repository. Then you can run inference as:

import base64
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
)

# Read and base64-encode the image
with open("<your-image-path>", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode("utf-8")
prompt_text = "</s><s><predict_bbox><predict_classes><output_markdown><predict_no_text_in_pic>"

resp = client.chat.completions.create(
    model="BEE-spoke-data/NVIDIA-Nemotron-Parse-v1.2",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": prompt_text,
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{img_b64}",
                    },
                },
            ],
        }
    ],
    max_tokens=8800,
    temperature=0.0,
    extra_body={
        "repetition_penalty": 1.1,
        "top_k": 1,
        "skip_special_tokens": False,
    },
)
print(resp.choices[0].message.content)

Note: for all use cases we recommend the default prompts, which extract bounding boxes, classes, and markdown-formatted text: </s><s><predict_bbox><predict_classes><output_markdown><predict_no_text_in_pic> or </s><s><predict_bbox><predict_classes><output_markdown><predict_text_in_pic>. If needed, a prompt that skips text extraction and outputs only bounding boxes and classes is also available: </s><s><predict_bbox><predict_classes><output_no_text><predict_no_text_in_pic>.
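The three prompt variants can be kept in a small lookup so the rest of a pipeline stays prompt-agnostic. The task names below are our own invention; only the prompt strings come from this card:

```python
# Task names are illustrative; the prompt strings are the documented ones.
TASK_PROMPTS = {
    "markdown_no_text_in_pic": "</s><s><predict_bbox><predict_classes><output_markdown><predict_no_text_in_pic>",
    "markdown_text_in_pic": "</s><s><predict_bbox><predict_classes><output_markdown><predict_text_in_pic>",
    "layout_only": "</s><s><predict_bbox><predict_classes><output_no_text><predict_no_text_in_pic>",
}

def get_prompt(task: str = "markdown_no_text_in_pic") -> str:
    """Return the decoder prompt for a task, defaulting to the recommended one."""
    return TASK_PROMPTS[task]

print(get_prompt("layout_only"))
```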

Logits processors

With Nemotron-Parse-v1.2 we share two logits processors, available in the logitsprocessors/ dir for vLLM and in hf_logits_processor.py for the Python model:

  • NemotronParseRepetitionStopProcessor: detects repeating n-grams during generation and forces the model to close the current block when a potential hallucination is detected.
  • NemotronParseTableInsertionLogitsProcessor: forces every block to follow a table structure (useful if, e.g., you are running the model on table image crops).
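The repetition-stopping idea can be illustrated with a plain-Python n-gram check. This is a sketch of the underlying heuristic only, not the shipped NemotronParseRepetitionStopProcessor, and the function name, window size, and threshold are our own assumptions:

```python
def has_repeating_ngram(token_ids, n=8, min_repeats=3):
    """Return True if the last n-gram of `token_ids` occurred back-to-back
    at least `min_repeats` times, a common hallucination signature."""
    if len(token_ids) < n * min_repeats:
        return False
    tail = token_ids[-n:]
    repeats = 1
    pos = len(token_ids) - n
    # Walk backwards in n-sized steps while the same n-gram keeps appearing
    while pos >= n and token_ids[pos - n:pos] == tail:
        repeats += 1
        pos -= n
    return repeats >= min_repeats

# A degenerate sequence repeating the 2-gram (7, 8) over and over:
looping = [1, 2, 3] + [7, 8] * 5
print(has_repeating_ngram(looping, n=2, min_repeats=3))            # True
print(has_repeating_ngram([1, 2, 3, 4, 5, 6], n=2, min_repeats=3)) # False
```

A real logits processor would run a check like this on the decoded sequence each step and, on a hit, boost the probability of the block-closing token.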

Refer to example_with_processor.py for example usage with the Python model. With vLLM, you can pass these as arguments to vllm serve after adding logitsprocs/ to PYTHONPATH, e.g.:

vllm serve BEE-spoke-data/NVIDIA-Nemotron-Parse-v1.2 \
  --dtype bfloat16 \
  --max-num-seqs 4 \
  --limit-mm-per-prompt '{"image": 1}' \
  --attention-backend=TRITON_ATTN \
  --trust-remote-code \
  --logits-processors nemotron_parse_vllm_logitprocs:NemotronParseTableInsertionLogitsProcessor \
  --port 8000

An example of inference against the vLLM OpenAI server is available in vllm_example.py.

License/Terms of Use

Governing Terms: Your use of this model is governed by the NVIDIA Nemotron Open Model License. Use of the tokenizer included in this model is governed by the CC-BY-4.0 license.

Model card details (architecture, training data, ethical considerations)

Deployment Geography

Global

Use Case

NVIDIA Nemotron Parse v1.2 is capable of comprehensive text understanding and document structure understanding. It is intended for use in retriever and curator solutions. Its text extraction datasets and capabilities help with LLM and VLM training, and improve run-time inference accuracy of VLMs. The model performs text extraction from PDF and PPT documents, classifies the objects in a given document (title, section, caption, index, footnote, lists, tables, bibliography, image), and provides bounding boxes with coordinates.

Release Date

Hugging Face [02/17/2026] via [URL]

Reference(s)

Model Architecture

Architecture Type: Transformer-based vision-encoder-decoder model

Network Architecture:

  • Vision Encoder: ViT-H model (https://huggingface.co/nvidia/C-RADIO)
  • Adapter Layer: 1D convolutions & norms to compress dimensionality and sequence length of the latent space (1280 tokens to 320 tokens)
  • Decoder: mBART [1], 10 blocks
  • Tokenizer: Use of the tokenizer included in this model is governed by the CC-BY-4.0 license
  • Number of Parameters: < 1B
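The 1280-to-320 token compression in the adapter is consistent with a stride-4 1D convolution. A quick shape check using the standard output-length formula; the kernel and stride values here are illustrative assumptions, only the token counts come from this card:

```python
def conv1d_out_len(length, kernel_size, stride, padding=0):
    """Standard 1D convolution output-length formula:
    floor((L + 2p - k) / s) + 1."""
    return (length + 2 * padding - kernel_size) // stride + 1

# e.g. a kernel-4, stride-4 convolution maps 1280 latent tokens to 320
print(conv1d_out_len(1280, kernel_size=4, stride=4))
```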

Input

  • Input Type(s): Image, Text
  • Input Format(s): Red, Green, Blue (RGB) + Prompt (String)
  • Input Parameters: Two-Dimensional (2D), One-Dimensional (1D)
  • Other Properties Related to Input:
    • Max Input Resolution (Width, Height): 1664, 2048
    • Min Input Resolution (Width, Height): 1024, 1280
  • Channel Count: 3
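Given the documented min/max resolutions, here is a sketch for scaling an image into range while preserving aspect ratio. The helper name is ours, and it assumes roughly document-like aspect ratios (extreme ratios would also need clamping after upscaling):

```python
MAX_W, MAX_H = 1664, 2048  # documented max input resolution (width, height)
MIN_W, MIN_H = 1024, 1280  # documented min input resolution (width, height)

def fit_resolution(width, height):
    """Scale (width, height) uniformly so it fits within the documented
    max resolution, upscaling small inputs to reach the documented minimum."""
    scale = min(MAX_W / width, MAX_H / height)
    if scale < 1.0:  # too large: shrink to fit inside the max box
        return round(width * scale), round(height * scale)
    up = max(MIN_W / width, MIN_H / height)
    if up > 1.0:     # too small: grow until both minimums are met
        return round(width * up), round(height * up)
    return width, height

print(fit_resolution(3328, 4096))  # oversized page, shrunk
print(fit_resolution(512, 640))    # undersized scan, grown
```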

Output

  • Output Type: Text
  • Output Format: String
  • Output Parameters: One-Dimensional (1D)
  • Other Properties Related to Output: The Nemotron-Parse output is a string that encodes text content (formatted or not) as well as bounding boxes and class attributes.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration

Runtime Engine(s):

  • TensorRT-LLM
  • vLLM

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Blackwell
  • NVIDIA Hopper
  • NVIDIA Turing

Supported Operating System(s):

  • [Linux]

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Model Version(s)

Nemotron Parse 1.2

Training, Testing, and Evaluation Datasets

Training Dataset

Image Training Data Size

  • [1 Million to 1 Billion Images]

Text Training Data Size

  • [1 Billion to 10 Trillion Tokens]

Data Collection Method by dataset

  • Hybrid: Automated, Human, Synthetic

Labeling Method by dataset

  • Hybrid: Automated, Human, Synthetic

Properties (Quantity, Dataset Descriptions, Sensor(s)): The training set contains millions of image-text items, aggregated across many large document and table datasets totaling several terabytes of data. The data consists of document-page and table images paired with OCR text, bounding boxes, and layout labels, drawn from real-world sources (scientific papers, PDFs, Wikipedia pages) as well as fully synthetic tables and word/character renderings. Modalities are primarily images plus associated text and structural annotations; content spans public-domain resources, and synthetic data. Images are obtained by rendering digital documents or generating synthetic layouts, and annotations come from OCR/layout models, third-party OCR services, and human labeling.

Inference

Acceleration Engine: TensorRT-LLM, vLLM

Test Hardware:

  • H100
  • A100

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please make sure you have the proper rights and permissions for all input image and video content; if an image or video includes people, personal health information, or intellectual property, the generated output will not blur or maintain the proportions of the subjects included.

For more detailed information on ethical considerations for this model, please see the Model Card++ Bias, Explainability, Safety & Security, and Privacy Subcards. Please report model quality, risk, security vulnerabilities or NVIDIA AI concerns here.
