---
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
datasets:
- TIGER-Lab/VisCode-Multi-679K
language:
- en
license: apache-2.0
tags:
- code
pipeline_tag: image-text-to-text
library_name: transformers
---

# VisCoder2-7B

[🏠 Project Page](https://tiger-ai-lab.github.io/VisCoder2) | [📖 Paper](https://arxiv.org/abs/2510.23642) | [💻 GitHub](https://github.com/TIGER-AI-Lab/VisCoder2) | [🤗 VisCode2](https://hf.co/collections/TIGER-Lab/viscoder2)

**VisCoder2-7B** is a lightweight multi-language visualization coding model trained for **executable code generation, rendering, and iterative self-debugging**.  

---

## 🧠 Model Description

**VisCoder2-7B** is trained on **VisCode-Multi-679K**, a large-scale instruction-tuning dataset for executable visualization tasks across **12 programming languages**. It addresses a core challenge in multi-language visualization: generating code that not only executes successfully but also, once rendered, produces visual outputs semantically consistent with the natural-language instruction.
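
Since the model inherits the Qwen2.5-Coder chat interface, it can be loaded with 🤗 Transformers like any causal LM. The snippet below is a minimal inference sketch, not an official recipe; the prompt and generation settings are illustrative assumptions.

```python
# Minimal inference sketch (assumptions: standard Qwen2.5 chat template,
# weights hosted at TIGER-Lab/VisCoder2-7B; prompt is illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TIGER-Lab/VisCoder2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user",
     "content": "Write matplotlib code that draws a bar chart of "
                "monthly revenue and saves it as revenue.png."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=1024, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```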

---

## 📊 Main Results on VisPlotBench

We evaluate VisCoder2-7B on [**VisPlotBench**](https://huggingface.co/datasets/TIGER-Lab/VisPlotBench), a benchmark of 888 executable visualization tasks spanning 8 languages that supports both standard generation and multi-round self-debugging evaluation.

![main_results](https://cdn-uploads.huggingface.co/production/uploads/64de37ee5e192985054be575/DRR3Y5vVS-KbniGJ3wmTi.png)

> **VisCoder2-7B** shows consistent performance across multiple languages and achieves notable improvements under the multi-round self-debug setting.
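
The multi-round self-debug setting can be pictured as a generate-execute-repair loop: run the model's code, and if it fails, feed the traceback back as a new instruction. The sketch below is our illustration of that idea, not the benchmark's actual harness; `generate` (wrapping the model call above) and the repair prompt wording are hypothetical.

```python
# Illustrative multi-round self-debug loop. `generate` is a hypothetical
# helper wrapping the model call shown above. Executing model output like
# this is only safe inside an isolated sandbox.
import subprocess
import tempfile

def run_python(code: str) -> str | None:
    """Run code in a subprocess; return the traceback, or None on success."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=60
    )
    return None if result.returncode == 0 else result.stderr

def self_debug(instruction: str, max_rounds: int = 3) -> str:
    code = generate(instruction)          # round 0: initial attempt
    for _ in range(max_rounds):
        error = run_python(code)          # None means it executed cleanly
        if error is None:
            return code
        # Feed the traceback back and ask for a corrected program.
        code = generate(
            f"{instruction}\n\nThe previous code failed with:\n{error}\n"
            "Return a corrected, runnable version."
        )
    return code                           # best effort after max_rounds
```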
---

## πŸ“ Training Details

- **Base model**: Qwen2.5-Coder-7B-Instruct  
- **Framework**: [ms-swift](https://github.com/modelscope/swift)  
- **Tuning method**: Full-parameter supervised fine-tuning (SFT)  
- **Dataset**: [VisCode-Multi-679K](https://huggingface.co/datasets/TIGER-Lab/VisCode-Multi-679K)

---

## 📖 Citation

If you use VisCoder2-7B or related datasets in your research, please cite:

```bibtex
@article{ni2025viscoder2,
  title={VisCoder2: Building Multi-Language Visualization Coding Agents},
  author={Ni, Yuansheng and Cai, Songcheng and Chen, Xiangchao and Liang, Jiarong and Lyu, Zhiheng and Deng, Jiaqi and Zou, Kai and Nie, Ping and Yuan, Fei and Yue, Xiang and others},
  journal={arXiv preprint arXiv:2510.23642},
  year={2025}
}

@article{ni2025viscoder,
  title={VisCoder: Fine-Tuning LLMs for Executable Python Visualization Code Generation},
  author={Ni, Yuansheng and Nie, Ping and Zou, Kai and Yue, Xiang and Chen, Wenhu},
  journal={arXiv preprint arXiv:2506.03930},
  year={2025}
}
```

For evaluation scripts and more information, see our [GitHub repository](https://github.com/TIGER-AI-Lab/VisCoder2).