dataset_info:
  features:
    - name: images
      sequence: image
    - name: problem
      dtype: string
    - name: answer
      dtype: string
  splits:
    - name: train
      num_bytes: 55927614.59
      num_examples: 8030
    - name: test
      num_bytes: 88598361.32
      num_examples: 3040
  download_size: 107933739
  dataset_size: 144525975.91
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*

license: mit
datasets:
  - Yuting6/geoqa-r1v-augmentation
  - Yuting6/math-8k-augmentation
  - Yuting6/m3cot-augmentation
  - Yuting6/TQA-augmentation
  - Yuting6/Geo3k-augmentation
  - Yuting6/geoqa-r1v-noise
  - Yuting6/geoqa-r1v-crop
  - Yuting6/geoqa-r1v-blur
  - Yuting6/geoqa-r1v-8k-rotated
  - Yuting6/geoqa-r1v-8k-mixup
base_model:
  - Qwen/Qwen2.5-VL-7B-Instruct
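Per the `dataset_info` metadata above, each record pairs a sequence of images with `problem` and `answer` strings. A minimal sketch of one such record (the figure and question text here are hypothetical, shown only to illustrate the schema; the real splits are loaded from the Hub as in the commented lines):

```python
# Hypothetical record mirroring the schema declared in the metadata above:
# "images" is a sequence of images; "problem" and "answer" are strings.
record = {
    "images": ["<PIL.Image for the geometry figure>"],  # sequence: image
    "problem": "As shown in the figure, AB is the diameter of circle O ...",
    "answer": "90",
}

# The real splits (train: 8030 examples, test: 3040) load the same way
# any Hub dataset does, via the Hugging Face `datasets` library:
#   from datasets import load_dataset
#   train = load_dataset("Yuting6/geoqa-r1v-8k-mixup", split="train")

print(sorted(record))  # → ['answer', 'images', 'problem']
```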

# Vision Matters: Simple Visual Perturbations Can Boost Multimodal Math Reasoning

## Paper Title and Link

This work was presented in the paper *Vision Matters: Simple Visual Perturbations Can Boost Multimodal Math Reasoning*, also available on arXiv ([arXiv:2506.09736](https://arxiv.org/abs/2506.09736)).

## Paper Abstract

Vision-Matters is a simple visual perturbation framework that can be easily integrated into existing post-training pipelines, including SFT, DPO, and GRPO. Our findings highlight the critical role of visual perturbation: better reasoning begins with better seeing.
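This dataset's name indicates the mixup perturbation. As an illustration of the kind of simple visual perturbation involved, here is a minimal sketch using the standard image-mixup formulation (Zhang et al., 2018), blending two images with a Beta-sampled ratio; the exact blending parameters used to build this dataset are not stated here, so `alpha=0.4` is an assumption:

```python
import numpy as np

def mixup_images(img_a: np.ndarray, img_b: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Blend two same-shape uint8 images with a ratio lam ~ Beta(alpha, alpha),
    the standard mixup formulation. alpha=0.4 is an illustrative choice, not
    necessarily the value used to build this dataset."""
    lam = np.random.beta(alpha, alpha)
    mixed = lam * img_a.astype(np.float32) + (1.0 - lam) * img_b.astype(np.float32)
    return mixed.clip(0, 255).astype(np.uint8)

# Toy 2x2 grayscale "images": all-black blended with all-white.
a = np.zeros((2, 2), dtype=np.uint8)
b = np.full((2, 2), 255, dtype=np.uint8)
out = mixup_images(a, b)
print(out.shape, out.dtype)  # → (2, 2) uint8
```

Other perturbations listed in the companion datasets above (noise, crop, blur, rotation) slot into the same pattern: a cheap image-space transform applied before post-training.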