zhiyucheng committed
Commit f6aa122 · unverified · 1 Parent(s): 1d312a2

update readme, add subcards

Files changed (5)
  1. README.md +66 -32
  2. bias.md +4 -0
  3. explainability.md +13 -0
  4. privacy.md +8 -0
  5. safety.md +7 -0
README.md CHANGED
@@ -18,22 +18,14 @@ tags:

### Description

- Llama Nemotron Nano VL FP4 QAD model is the quantized version of the Nvidia's Llama Nemotron Nano VL model, which is an auto-regressive vision language model that uses an optimized transformer architecture. For more information, please check [here](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1). The NVIDIA Llama Nemotron Nano VL FP4 QAD model is quantized with [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer).

- This model was trained on commercial images during Quantization-aware Distillation (QAD) stage.

This model is ready for commercial/non-commercial use.

-
### License/Terms of Use
- **Governing Terms:**
-
- Your use of the model is governed by the [NVIDIA Open License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). Additional Information: Llama 3.1 Community Model License; Built with Llama.
-
- **Additional Information:**
-
- [Llama 3.1 Community Model License](https://www.llama.com/llama3_1/license/); Built with Llama.
-

### Deployment Geography:

@@ -41,14 +33,11 @@ Global

### Use Case:

- Customers: AI foundry enterprise customers
-
- Use Cases: Image summarization. Text-image analysis, Optical Character Recognition, Interactive Q&A on images, Text Chain-of-Thought reasoning
-

## Release Date:

- - Hugging Face [October 7th, 2025]

## Model Architecture:

@@ -60,18 +49,20 @@ Vision Encoder: [C-RADIOv2-H](https://huggingface.co/nvidia/C-RADIOv2-VLM-H)

Language Encoder: Llama-3.1-8B-Instruct

### Input

Input Type(s): Image, Text
- - Input Images
- - Language Supported: English only

Input Format(s): Image (Red, Green, Blue (RGB)), and Text (String)

- Input Parameters: Image (2D), Text (1D)

Other Properties Related to Input:

- Input + Output Token: 16K
- Maximum Resolution: Determined by a 12-tile layout constraint, with each tile being 512 × 512 pixels. This supports aspect ratios such as:
  - 4 × 3 layout: up to 2048 × 1536 pixels
@@ -87,12 +78,10 @@ Output Type(s): Text

Output Formats: String

- Output Parameters: 1D

Other Properties Related to Output: Input + Output Token: 16K

-
-
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

### Software Integration
@@ -100,13 +89,15 @@ Runtime Engine(s): vLLM<br>
Supported Hardware Microarchitecture Compatibility: B100/B200<br>
Supported Operating System(s): Linux<br>

### Model Versions:
Llama-3.1-Nemotron-Nano-VL-8B-V1-FP4-QAD

## Quick Start

### Install Dependencies
- ```
pip install transformers accelerate timm einops open-clip-torch
```

@@ -119,16 +110,43 @@ python3 -m vllm.entrypoints.openai.api_server --model nvidia/Llama-3.1-Nemotron-
```

- ## Training/Evaluation Dataset:

NV-Pretraining and NV-CosmosNemotron-SFT were used for training and evaluation

- Data Collection Method by dataset (Training and Evaluation): <br>
* Hybrid: Human, Synthetic <br>

- Labeling Method by dataset (Training and Evaluation): <br>
* Hybrid: Human, Synthetic <br>

Additionally, the dataset collection (for training and evaluation) consists of a mix of internal and public datasets designed for training and evaluation across various tasks. It includes: <br>
• Internal datasets built with public commercial images and internal labels, supporting tasks like conversation modeling and document analysis.<br>
• Public datasets sourced from publicly available images and annotations, adapted for tasks such as image captioning and visual question answering.<br>
@@ -136,18 +154,34 @@ Additionally, the dataset collection (for training and evaluation) consists of a
• Specialized datasets for safety alignment, function calling, and domain-specific tasks (e.g., science diagrams, financial question answering).<br>


- # Inference:
**Engine:** vLLM <br>
**Test Hardware:** <br>
* 1x NVIDIA B100/B200


## Ethical Considerations:
- NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](explainability.md), [Bias](bias.md), [Safety & Security](safety.md), and [Privacy](privacy.md) Subcards. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.
-
- Outputs generated by these models may contain political content or other potentially misleading information, issues with content security and safety, or unwanted bias that is independent of our oversight.
-
-

### Description

+ Llama-3.1-Nemotron-Nano-VL-8B-V1-FP4-QAD is the quantized version of the NVIDIA Llama Nemotron Nano VL model, which is an auto-regressive vision language model that uses an optimized transformer architecture. For more information, please check [here](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1). The NVIDIA Llama Nemotron Nano VL FP4 QAD model is quantized with [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer).

+ This model was trained on commercial images using [Quantization-aware Distillation (QAD)](https://developer.nvidia.com/blog/how-quantization-aware-training-enables-low-precision-accuracy-recovery/).
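
For orientation, the snippet below is a minimal, illustrative sketch of how low-precision quantizers are typically inserted with TensorRT Model Optimizer before a quantization-aware distillation run; it is not the recipe used for this checkpoint. The toy module, the calibration data, and the `NVFP4_DEFAULT_CFG` config name are assumptions; consult the Model Optimizer repository for the authoritative API.

```python
# Illustrative only: insert (fake-)quantization ops with TensorRT Model Optimizer,
# the step that precedes a QAD (quantization-aware distillation) fine-tuning loop.
# The toy model, calibration data, and NVFP4_DEFAULT_CFG name are assumptions.
import torch
import torch.nn as nn
import modelopt.torch.quantization as mtq

# Stand-in for the full-precision backbone; the real workflow would load
# the Llama Nemotron Nano VL model here instead.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8))
calib_batches = [torch.randn(4, 64) for _ in range(8)]

def forward_loop(m: nn.Module) -> None:
    # Feed representative data so activation ranges can be calibrated.
    with torch.no_grad():
        for x in calib_batches:
            m(x)

# After this call the model carries quantized weights/activations and can be
# distilled against the full-precision teacher to recover accuracy.
model = mtq.quantize(model, mtq.NVFP4_DEFAULT_CFG, forward_loop=forward_loop)
```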

This model is ready for commercial/non-commercial use.

### License/Terms of Use
+ **Governing Terms:** Your use of the model is governed by the [NVIDIA Open License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). **Additional Information:** [Llama 3.1 Community Model License](https://www.llama.com/llama3_1/license/). Built with Llama.

### Deployment Geography:


### Use Case:

+ The intended users of this model are AI foundry enterprise customers, as well as researchers or developers. This model may be used for image summarization, text-image analysis, Optical Character Recognition, interactive Q&A on images, and Chain-of-Thought reasoning.

## Release Date:

+ - Hugging Face [October 7th, 2025] via https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1-FP4-QAD

## Model Architecture:


Language Encoder: Llama-3.1-8B-Instruct

+ **Number of model parameters:** 8 billion
+
### Input

Input Type(s): Image, Text
+

Input Format(s): Image (Red, Green, Blue (RGB)), and Text (String)

+ Input Parameters: Image (Two-Dimensional - 2D), Text (One-Dimensional - 1D)

Other Properties Related to Input:

+ - Language Supported: English only
- Input + Output Token: 16K
- Maximum Resolution: Determined by a 12-tile layout constraint, with each tile being 512 × 512 pixels. This supports aspect ratios such as:
  - 4 × 3 layout: up to 2048 × 1536 pixels
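
To make the 12-tile arithmetic above concrete, here is a short, illustrative Python sketch (not part of the model card) that computes the pixel resolution of a given tile layout and rejects layouts exceeding the 12-tile budget; the helper name and the extra example layouts are assumptions for illustration.

```python
# Illustrative sketch: resolutions allowed by a 12-tile limit of 512 x 512 tiles.
TILE_SIZE = 512
MAX_TILES = 12

def layout_resolution(cols: int, rows: int) -> tuple[int, int]:
    """Return the (width, height) in pixels for a cols x rows tile layout."""
    if cols * rows > MAX_TILES:
        raise ValueError(f"{cols}x{rows} uses {cols * rows} tiles; the limit is {MAX_TILES}")
    return cols * TILE_SIZE, rows * TILE_SIZE

print(layout_resolution(4, 3))   # (2048, 1536) -- the 4 x 3 layout listed above
print(layout_resolution(3, 4))   # (1536, 2048) -- portrait orientation
print(layout_resolution(12, 1))  # (6144, 512)  -- extreme wide aspect ratio
```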
 

Output Formats: String

+ Output Parameters: One-Dimensional (1D)

Other Properties Related to Output: Input + Output Token: 16K

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

### Software Integration

Supported Hardware Microarchitecture Compatibility: B100/B200<br>
Supported Operating System(s): Linux<br>

+ The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
+
### Model Versions:
Llama-3.1-Nemotron-Nano-VL-8B-V1-FP4-QAD

## Quick Start

### Install Dependencies
+ ```bash
pip install transformers accelerate timm einops open-clip-torch
```

```
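
Once the vLLM OpenAI-compatible server from the Quick Start (`python3 -m vllm.entrypoints.openai.api_server ...`) is running, it can be queried like any OpenAI endpoint. The snippet below is a minimal sketch, assuming the server listens on vLLM's default `http://localhost:8000/v1`, that the `openai` Python package is installed, and using a placeholder image URL.

```python
# Minimal sketch: send an image + question to the vLLM OpenAI-compatible server.
# Assumes the server started above is reachable at localhost:8000 (vLLM default);
# the image URL below is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1-FP4-QAD",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/sample-chart.png"}},
                {"type": "text", "text": "Summarize the key information in this image."},
            ],
        }
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```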


+ ## Training Dataset
+
+ **Data Modality:**
+ - Image
+ - Text
+
+ **Image Training Data Size:**
+ - 1 Million to 1 Billion Images
+
+ **Text Training Data Size:**
+ - Less than a Billion Tokens
NV-Pretraining and NV-CosmosNemotron-SFT were used for training and evaluation

+ Data Collection Method by dataset: <br>
* Hybrid: Human, Synthetic <br>

+ Labeling Method by dataset: <br>
* Hybrid: Human, Synthetic <br>

+ Properties:
+ The dataset collection (for training and evaluation) consists of a mix of internal and public datasets designed for training and evaluation across various tasks. It includes: <br>
+ • Internal datasets built with public commercial images and internal labels, supporting tasks like conversation modeling and document analysis.<br>
+ • Public datasets sourced from publicly available images and annotations, adapted for tasks such as image captioning and visual question answering.<br>
+ • Synthetic datasets generated programmatically for specific tasks like tabular data understanding.<br>
+ • Specialized datasets for safety alignment, function calling, and domain-specific tasks (e.g., science diagrams, financial question answering).<br>
+
+ ## Evaluation Dataset:
+
+ NV-Pretraining and NV-CosmosNemotron-SFT were used for training and evaluation.
+
+ Data Collection Method by dataset: <br>
+ * Hybrid: Human, Synthetic <br>

+ Labeling Method by dataset: <br>
+ * Hybrid: Human, Synthetic <br>
+
+ Properties:
Additionally, the dataset collection (for training and evaluation) consists of a mix of internal and public datasets designed for training and evaluation across various tasks. It includes: <br>
• Internal datasets built with public commercial images and internal labels, supporting tasks like conversation modeling and document analysis.<br>
• Public datasets sourced from publicly available images and annotations, adapted for tasks such as image captioning and visual question answering.<br>

• Specialized datasets for safety alignment, function calling, and domain-specific tasks (e.g., science diagrams, financial question answering).<br>

+ ## Evaluation Benchmarks:
+
+ | Benchmark | Score |
+ | --- | --- |
+ | MMMU Val with ChatGPT as a judge | 47.9% |
+ | AI2D | 85.0% |
+ | ChartQA | 86.5% |
+ | InfoVQA Val | 77.6% |
+ | OCRBench | 836 |
+ | OCRBenchV2 English | 59.5% |
+ | OCRBenchV2 Chinese | 38.0% |
+ | DocVQA Val | 91.5% |
+ | VideoMME<sup>*</sup> | 54.6% |
+
+ <sup>*</sup>Calculated with 1 tile per image.
+ The evaluation was done with FP4 simulated quantization on an H100 GPU.

+
+ ## Inference
**Engine:** vLLM <br>
**Test Hardware:** <br>
* 1x NVIDIA B100/B200


## Ethical Considerations:

+ NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
+ For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](explainability.md), [Bias](bias.md), [Safety & Security](safety.md), and [Privacy](privacy.md) Subcards. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
+ Please make sure you have proper rights and permissions for all input image and video content; if image or video includes people, personal health information, or intellectual property, the image or video generated will not blur or maintain proportions of image subjects included.
Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.
+ Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
 
 
bias.md ADDED
@@ -0,0 +1,4 @@
+ Field | Response
+ :---|:---
+ Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | We actively considered participation from adversely impacted groups and protected classes during model design and testing by engaging diverse stakeholders, reviewing data for representation, and evaluating outputs for bias. Feedback channels were provided throughout development.
+ Measures taken to mitigate against unwanted bias: | We took several steps to reduce unwanted bias, including:<br>- **Evaluating** the model’s answers with regard to fairness for different groups<br>- Using tools to **identify** and measure unfairness.
explainability.md ADDED
@@ -0,0 +1,13 @@
+ Field | Response
+ :---|:---
+ Intended Application & Domain: | Visual Question Answering
+ Model Type: | Transformer
+ Intended Users: | Generative AI creators working with conversational AI models and image content.
+ Output: | Text (responds to the posed question; stateful - remembers previous answers)
+ Describe how the model works: | Chat based on image/text
+ Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable
+ Technical Limitations: | **Context Length:** Supports up to 16,000 tokens total (input + output). If exceeded, input is truncated from the start, and generation ends with an EOS token. Longer prompts may risk performance loss.<br><br>If the model fails (e.g., generates incorrect, repetitive, or poor responses), issues are diagnosed via benchmarks, human review, and internal debugging tools.<br><br>Only use NVIDIA-provided models that use the safetensors format. Do not expose the vLLM host to a network where any untrusted connections may reach the host.
+ Verified to have met prescribed NVIDIA quality standards: | Yes
+ Performance Metrics: | MMMU Val with ChatGPT as a judge, AI2D, ChartQA Test, InfoVQA Val, OCRBench, OCRBenchV2 English, OCRBenchV2 Chinese, DocVQA Val, VideoMME (16 frames), SlideQA (F1)
+ Potential Known Risks: | The model may produce output that is biased, toxic, or incorrect, and may amplify those biases and return toxic responses, especially when given toxic prompts. It may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable content even if the prompt itself does not include anything explicitly offensive.<br>While we have taken safety and security into account and are continuously improving, outputs may still contain political content, misleading information, or unwanted bias beyond our control.
+ Licensing: | **Governing Terms:**<br>Your use of the software container and model is governed by the [NVIDIA Software and Model Evaluation License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-and-model-evaluation-license/).<br><br>**Additional Information:**<br>[Llama 3.1 Community Model License](https://www.llama.com/llama3_1/license/); Built with Llama.
privacy.md ADDED
@@ -0,0 +1,8 @@
+ Field | Response
+ :---|:---
+ Generatable or reverse engineerable personal data? | None
+ Personal data used to create this model? | None
+ How often is dataset reviewed? | Before Every Release
+ Does data labeling (annotation, metadata) comply with privacy laws? | Yes
+ Is data compliant with data subject requests for data correction or removal, if such a request was made? | No, not possible with externally-sourced data. Applicable Privacy Policy: https://www.nvidia.com/en-us/about-nvidia/privacy-policy/
+
safety.md ADDED
@@ -0,0 +1,7 @@
+ Field | Response
+ :---|:---
+ Model Application(s): | - Extracting and understanding information from text and images in documents (OCR, tables, charts, diagrams, math expressions)<br>- Recognizing objects, attributes, and semantic relationships in images<br>- Interactive Q&A based on images and text<br>- Analyzing and summarizing similarities and differences between images
+ Describe the life critical impact (if present). | Not Applicable
+ Use Case Restrictions: | Governing Terms: Your use of the model is governed by the [NVIDIA Open License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). Additional Information: [Llama 3.1 Community Model License](https://www.llama.com/llama3_1/license/); Built with Llama.
+ Model and dataset restrictions: | The principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to.
+