Statistics of different data formats
Hello, when I trained some LLaVA-style models on this dataset, I noticed that when the model is asked to output only the option label for MCQ questions (e.g. during evaluation on the MMMU and AI2D benchmarks), its prompt-following ability seems weaker than that of models trained on, e.g., the LLaVA OneVision data: the model tends to output a reasoning paragraph before it outputs the option label. I'm wondering whether this is because the dataset contains a smaller proportion of MCQ data than other datasets like LLaVA OneVision. Do you have any statistics on this?

In the blog post, I also noticed that FineVision's performance on benchmarks like AI2D and MMStar seems to improve more slowly over training. Could this be related to lmms-eval using exact string matching and fixed answer-extraction rules to obtain the final answer for MCQ problems? Since FineVision-trained models tend not to follow the "only output the option label" prompt, lmms-eval may fail to extract an answer from their responses early in training.
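
For concreteness, here is a minimal sketch of the failure mode I have in mind. The regex below is a hypothetical stand-in for a strict, fixed extraction rule, not lmms-eval's actual implementation:

```python
import re


def extract_option_label(response: str) -> str | None:
    """Hypothetical strict extractor: accept only a bare option letter,
    optionally wrapped in parentheses or followed by a period."""
    match = re.fullmatch(r"\(?([A-D])\)?\.?", response.strip())
    return match.group(1) if match else None


# A model that follows the "only output the option label" prompt:
print(extract_option_label("B"))  # -> "B", can be scored by exact match

# A model that prepends a reasoning paragraph before the label:
verbose = "The diagram shows a food web where grass is eaten by rabbits. So the answer is B."
print(extract_option_label(verbose))  # -> None, scored as wrong even though it ends with the right label
```

If the evaluation pipeline behaves roughly like this, a model that hasn't yet learned to emit the bare label would score near zero on these benchmarks early in training, even when its answers are substantively correct.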