---
base_model:
  - Qwen/Qwen2.5-Coder-7B-Instruct
datasets:
  - TIGER-Lab/VisCode-Multi-679K
language:
  - en
license: apache-2.0
tags:
  - code
pipeline_tag: image-text-to-text
library_name: transformers
---

# VisCoder2-7B

🏠 Project Page | πŸ“– Paper | πŸ’» GitHub | πŸ€— VisCode2

VisCoder2-7B is a lightweight multi-language visualization coding model trained for executable code generation, rendering, and iterative self-debugging.


## 🧠 Model Description

VisCoder2-7B is trained on the VisCode-Multi-679K dataset, a large-scale instruction-tuning dataset for executable visualization tasks across 12 programming languages. It addresses a core challenge in multi-language visualization: generating code that not only executes successfully but also produces visual outputs that are semantically consistent with the natural-language instruction, by aligning instructions with rendered results during training.
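
The snippet below is a minimal inference sketch using the standard `transformers` chat interface inherited from Qwen2.5-Coder-7B-Instruct. The repository id, prompt, and generation settings are illustrative assumptions, not the official usage example.

```python
# Minimal inference sketch (repo id, prompt, and generation settings are assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TIGER-Lab/VisCoder2-7B"  # assumed repository id; check the project page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{
    "role": "user",
    "content": "Write Python code using matplotlib to plot y = x**2 for x from 0 to 10 "
               "and save the figure as plot.png.",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens (the model's reply).
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```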


## 📊 Main Results on VisPlotBench

We evaluate VisCoder2-7B on VisPlotBench, a benchmark of 888 executable visualization tasks spanning 8 languages that supports both standard generation and multi-round self-debug evaluation.

*(Figure: main results of VisCoder2-7B on VisPlotBench.)*

VisCoder2-7B shows consistent performance across multiple languages and achieves notable improvements under the multi-round self-debug setting.
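
To make the self-debug setting concrete, here is an illustrative sketch of such a loop for Python tasks: generated code is executed in a subprocess and, if it fails, the traceback is fed back to the model as a follow-up turn. The helper names, round limit, and prompt wording are hypothetical and are not taken from the VisPlotBench harness.

```python
# Illustrative multi-round self-debug loop (not the official evaluation harness).
import os
import subprocess
import tempfile

def run_python(code: str) -> tuple[bool, str]:
    """Run a generated Python snippet and return (success, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(["python", path], capture_output=True, text=True, timeout=60)
        return proc.returncode == 0, proc.stderr
    except subprocess.TimeoutExpired:
        return False, "execution timed out"
    finally:
        os.remove(path)

def self_debug(generate_fn, task: str, max_rounds: int = 3) -> str:
    """generate_fn(messages) -> code string; retry until the code runs or rounds run out."""
    messages = [{"role": "user", "content": task}]
    code = generate_fn(messages)
    for _ in range(max_rounds):
        ok, err = run_python(code)
        if ok:
            break
        # Append the failed attempt and its error, then ask the model for a fix.
        messages += [
            {"role": "assistant", "content": code},
            {"role": "user", "content": f"The code failed with this error; please fix it:\n{err}"},
        ]
        code = generate_fn(messages)
    return code
```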


πŸ“ Training Details

  • Base model: Qwen2.5-Coder-7B-Instruct
  • Framework: ms-swift
  • Tuning method: Full-parameter supervised fine-tuning (SFT)
  • Dataset: VisCode-Multi-679K

## 📖 Citation

If you use VisCoder2-7B or related datasets in your research, please cite:

@article{ni2025viscoder2,
  title={VisCoder2: Building Multi-Language Visualization Coding Agents},
  author={Ni, Yuansheng and Cai, Songcheng and Chen, Xiangchao and Liang, Jiarong and Lyu, Zhiheng and Deng, Jiaqi and Zou, Kai and Nie, Ping and Yuan, Fei and Yue, Xiang and others},
  journal={arXiv preprint arXiv:2510.23642},
  year={2025}
}

@article{ni2025viscoder,
  title={VisCoder: Fine-Tuning LLMs for Executable Python Visualization Code Generation},
  author={Ni, Yuansheng and Nie, Ping and Zou, Kai and Yue, Xiang and Chen, Wenhu},
  journal={arXiv preprint arXiv:2506.03930},
  year={2025}
}

For evaluation scripts and more information, see our GitHub repository.