Update README.md
README.md
@@ -17,7 +17,7 @@ metrics:
 
 This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) using [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
 
-The original fp32 model comes from the fine-tuned model [
+The original fp32 model comes from the fine-tuned model [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn).
 
 The linear modules below (40/193) fall back to fp32 to keep the relative accuracy loss under 1%:
 
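For convenience, here is a minimal loading sketch. It assumes a recent optimum-intel release that exposes `INCModelForSeq2SeqLM`; the repo id below is a placeholder for this model card's repository, not a confirmed name.

```python
# Minimal sketch: load the INT8 checkpoint through optimum-intel's
# Intel Neural Compressor integration and run a summarization example.
# Assumptions: a recent optimum-intel with INCModelForSeq2SeqLM;
# the repo id is a placeholder for this model card's repository.
from transformers import AutoTokenizer
from optimum.intel import INCModelForSeq2SeqLM

model_id = "<this-model-repo-id>"  # placeholder: replace with this card's repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
int8_model = INCModelForSeq2SeqLM.from_pretrained(model_id)

article = "PyTorch is an open source machine learning library based on the Torch library."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = int8_model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```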
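As context for the fp32 fallback note above, the sketch below shows one way individual linear modules can be excluded from INT8 quantization with Intel Neural Compressor's `op_name_dict`. It is not the recipe used to produce this checkpoint, and the module name shown is hypothetical.

```python
# Hedged sketch only: keeping specific Linear modules in fp32 during
# post-training quantization with Intel Neural Compressor 2.x.
# Not the exact recipe behind this checkpoint; the op name below is hypothetical.
from neural_compressor import PostTrainingQuantConfig, quantization
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

conf = PostTrainingQuantConfig(
    approach="dynamic",  # dynamic PTQ needs no calibration data; the real recipe may differ
    op_name_dict={
        # hypothetical module name: force this linear layer to stay in fp32
        "model.decoder.layers.0.fc1": {
            "weight": {"dtype": ["fp32"]},
            "activation": {"dtype": ["fp32"]},
        },
    },
)

q_model = quantization.fit(model, conf)
q_model.save("./bart-large-cnn-int8")
```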