best.save("fig.tex")
```
|
## Changes from DeTi*k*Zify<sub>v1</sub>
We document all changes between DeTi*k*Zify<sub>v1</sub> and
DeTi*k*Zify<sub>v2</sub> in our paper, "[TikZero: Zero-Shot Text-Guided Graphics
Program Synthesis](https://arxiv.org/abs/2503.11509)". For convenience, they
are also listed below.

### Architecture
Similar to DeTi*k*Zify<sub>v1</sub>, DeTi*k*Zify<sub>v2</sub> uses a SigLIP
vision encoder […] frozen but fully fine-tuned with the rest of the model.
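
In code, the difference between a frozen and a fully fine-tuned vision encoder
is simply whether gradients flow into its parameters. Below is a minimal,
generic PyTorch sketch of the two regimes; `model.vision_encoder` is a
hypothetical attribute name used purely for illustration, not this
repository's actual API:

```python
import torch

def set_trainable(module: torch.nn.Module, trainable: bool) -> None:
    """Freeze (trainable=False) or unfreeze (trainable=True) a submodule."""
    for param in module.parameters():
        param.requires_grad = trainable

# DeTikZify-v1-style training: the vision encoder stays frozen.
#   set_trainable(model.vision_encoder, False)
# DeTikZify-v2-style training: the encoder is fine-tuned with the rest
# of the model.
#   set_trainable(model.vision_encoder, True)
```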
For pretraining, we switch from MetaFig to the much larger
[ArXivCap](https://huggingface.co/datasets/MMInstruction/ArxivCap) dataset and
extract 1 million (figure, caption, OCR) tuples for pretraining the modality
connector. For fine-tuning, we create the new
[DaTi*k*Z<sub>v3</sub>](https://huggingface.co/datasets/nllg/datikz-v3) dataset
with over 450k Ti*k*Z drawings.
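
As a quick way to explore the fine-tuning data, DaTi*k*Z<sub>v3</sub> can be
streamed from the Hugging Face Hub with the `datasets` library. A minimal
sketch (assuming the usual `train` split; the column schema is not documented
here, so the code only prints whatever keys it finds):

```python
from datasets import load_dataset

# Stream DaTikZ-v3 so we can peek at examples without downloading all 450k+.
datikz = load_dataset("nllg/datikz-v3", split="train", streaming=True)

# Inspect the first few examples; see the dataset card for the
# authoritative column descriptions.
for example in datikz.take(3):
    print(sorted(example.keys()))
```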
|
We also train a new model called
[UltraSketch](https://huggingface.co/nllg/ultrasketch) to generate synthetic