You can now run inference with your quantized model:

```python
# Generate outputs with the quantized model
generated_ids = q_model.generate(**inputs, max_new_tokens=500)
# Decode the generated token IDs back into text, skipping special tokens
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts[0])
```
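To see what quantization actually buys you, it can be worth timing the same generation on both models. A minimal sketch, assuming `model` still holds the unquantized `OVModelForVisualCausalLM` loaded earlier in the post:

```python
import time

# Compare wall-clock generation time of the original and quantized models
for name, m in [("original", model), ("quantized", q_model)]:
    start = time.perf_counter()
    m.generate(**inputs, max_new_tokens=100)
    print(f"{name}: {time.perf_counter() - start:.2f} s")
```

Exact numbers will vary with your hardware and prompt, but the quantized model is typically faster and uses less memory.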
If you have a recent Intel laptop, Intel AI PC, or Intel discrete GPU, you can run the model on the GPU by passing `device="gpu"` when loading it:

```python
# Load the model and compile it for the Intel GPU
model = OVModelForVisualCausalLM.from_pretrained(model_id, device="gpu")
```

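If you are not sure whether OpenVINO sees a GPU on your machine, you can list the available devices first. A minimal sketch using the `openvino` runtime that Optimum Intel builds on:

```python
import openvino as ov

# List the devices OpenVINO can target on this machine, e.g. ['CPU', 'GPU']
print(ov.Core().available_devices)
```

If `GPU` appears in the list, the `device="gpu"` load above will compile the model for it.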
Try the complete notebook [here](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/vision_language_quantization.ipynb).

## Conclusion