Commit 11aae96 · 1 Parent(s): 9f98b7a

Update README.md

README.md CHANGED
@@ -28,10 +28,10 @@ The models are trained on the 🤗 [TableInstruct Dataset](https://huggingface.c

 ## Training Procedure
-The models are fine-tuned with the TableInstruct dataset using LongLoRA (7B), fully fine-tuning version as the base model, which replaces the vanilla attention mechanism of the original Llama-2 (7B) with shift short attention. The training takes 9 days on 48*A100. Check out our paper for more details.
+The models are fine-tuned with the TableInstruct dataset using LongLoRA (7B), fully fine-tuning version as the base model, which replaces the vanilla attention mechanism of the original Llama-2 (7B) with shift short attention. The training takes 9 days on a 48*A100 cluster. Check out our paper for more details.

 ## Evaluation
-The models are evaluated on 8 in-domain datasets of 8 tasks and 6 out-of-domain datasets of 4 tasks.
+The models are evaluated on 8 in-domain datasets of 8 tasks and 6 out-of-domain datasets of 4 tasks.

 ## Usage
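The shift short attention mentioned under Training Procedure can be sketched roughly as follows — a toy NumPy illustration of LongLoRA-style grouped attention in which half the heads attend on a partition shifted by half a group, so information can flow between neighbouring groups. All names, shapes, and the grouping scheme here are simplifying assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shift_short_attention(q, k, v, n_heads, group):
    """Toy group-wise attention with half the heads shifted by group // 2.

    q, k, v: (seq_len, dim) arrays for one sequence; dim % n_heads == 0
    and seq_len % group == 0 are assumed for simplicity.
    """
    seq, dim = q.shape
    hd = dim // n_heads  # per-head dimension
    out = np.zeros_like(q)
    for h in range(n_heads):
        cols = slice(h * hd, (h + 1) * hd)
        qh, kh, vh = q[:, cols], k[:, cols], v[:, cols]
        # The second half of the heads works on a shifted partition,
        # letting attention cross group boundaries.
        shift = group // 2 if h >= n_heads // 2 else 0
        if shift:
            qh, kh, vh = (np.roll(a, -shift, axis=0) for a in (qh, kh, vh))
        oh = np.zeros_like(qh)
        for s in range(0, seq, group):
            # Full attention is computed only inside each short group.
            Q, K, V = qh[s:s + group], kh[s:s + group], vh[s:s + group]
            oh[s:s + group] = softmax(Q @ K.T / np.sqrt(hd)) @ V
        if shift:
            oh = np.roll(oh, shift, axis=0)  # undo the shift
        out[:, cols] = oh
    return out
```

Because each group attends only within its own (possibly shifted) window, the cost per head is linear in the number of groups rather than quadratic in sequence length, which is what makes long-context fine-tuning tractable.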