Update README.md with new model card content
README.md CHANGED
@@ -44,6 +44,15 @@ The following model checkpoints are provided by the Keras team. Full code exampl
 | `llama3_8b_en_int8` | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3 model with activation and weights quantized to int8. |
 | `llama3_instruct_8b_en` | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model. |
 | `llama3_instruct_8b_en_int8` | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3 model with activation and weights quantized to int8. |
+| `llama3.1_8b` | 8.03B | 8 billion parameter, 32-layer, base LLaMA 3.1 model. |
+| `llama3.1_guard_8b` | 8.03B | 8 billion parameter, 32-layer, LLaMA 3.1 model fine-tuned for content safety classification. |
+| `llama3.1_instruct_8b` | 8.03B | 8 billion parameter, 32-layer, instruction tuned LLaMA 3.1 model. |
+| `llama3.2_1b` | 1.5B | 1 billion parameter, 16-layer, base LLaMA 3.2 model. |
+| `llama3.2_3b` | 3.6B | 3 billion parameter, 28-layer, base LLaMA 3.2 model. |
+| `llama3.2_guard_1b` | 1.5B | 1 billion parameter, 16-layer, LLaMA 3.2 model fine-tuned for content safety classification. |
+| `llama3.2_instruct_1b` | 1.5B | 1 billion parameter, 16-layer, instruction tuned LLaMA 3.2 model. |
+| `llama3.2_instruct_3b` | 3.6B | 3 billion parameter, 28-layer, instruction tuned LLaMA 3.2 model. |
+

 ## Prompts

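For context, preset names like the ones added above are typically passed to the KerasHub preset API, e.g. `keras_hub.models.Llama3CausalLM.from_preset("llama3.2_1b")` (an assumption based on KerasHub's preset conventions; loading a preset downloads its weights). A minimal pure-Python sketch of picking a checkpoint from the table, without downloading anything:

```python
# Sketch only: a tiny lookup mirroring the new table rows, used to pick
# the smallest preset with the desired traits before calling (assumed)
# keras_hub.models.Llama3CausalLM.from_preset(name).
PRESETS = {
    # name: (params in billions, instruction tuned, safety classifier)
    "llama3.1_8b": (8.03, False, False),
    "llama3.1_guard_8b": (8.03, False, True),
    "llama3.1_instruct_8b": (8.03, True, False),
    "llama3.2_1b": (1.5, False, False),
    "llama3.2_3b": (3.6, False, False),
    "llama3.2_guard_1b": (1.5, False, True),
    "llama3.2_instruct_1b": (1.5, True, False),
    "llama3.2_instruct_3b": (3.6, True, False),
}

def smallest_preset(instruct=False, guard=False):
    """Return the smallest preset name matching the requested traits."""
    candidates = [
        (params, name)
        for name, (params, tuned, safety) in PRESETS.items()
        if tuned == instruct and safety == guard
    ]
    return min(candidates)[1]

print(smallest_preset(instruct=True))  # -> llama3.2_instruct_1b
```

The helper and its traits table are illustrative, not part of the KerasHub API; only the preset name strings come from the table above.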