AndesVL Technical Report: An Efficient Mobile-side Multimodal Large Language Model
Abstract
AndesVL is a suite of compact mobile-side MLLMs (0.6B to 4B parameters) that achieves top-tier performance across a broad range of benchmarks when compared with similar-scale models.
In recent years, cloud-based MLLMs such as Qwen-VL, InternVL, GPT-4o, Gemini, and Claude Sonnet have demonstrated outstanding performance, but their enormous sizes, reaching hundreds of billions of parameters, far exceed the memory, power-consumption, and compute budgets of edge devices such as mobile phones. This paper introduces AndesVL, a suite of mobile-side MLLMs with 0.6B to 4B parameters built on Qwen3 LLMs and various visual encoders. We comprehensively describe AndesVL's model architectures, training pipeline, and training data. Compared with state-of-the-art models of similar scale, AndesVL achieves first-tier performance across a wide range of open-source benchmarks, spanning text-rich image understanding, reasoning and math, multi-image comprehension, general VQA, hallucination mitigation, multilingual understanding, and GUI-related tasks. Furthermore, we introduce a 1+N LoRA…
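The abstract is truncated while describing the 1+N LoRA component, so the details of that design are not available here. As a rough, hypothetical illustration of what a "one shared frozen base plus N switchable low-rank adapters" layer can look like in general, the minimal PyTorch sketch below keeps a single frozen linear weight and routes inputs through one of N LoRA adapters; the class name, rank, scaling, and adapter selection are all illustrative assumptions, not the AndesVL implementation.

```python
# Hypothetical sketch of a "1 + N LoRA" linear layer: one shared frozen base
# weight ("1") plus N switchable low-rank adapters ("N"), one per task.
# Illustrative only; it does not reflect AndesVL's actual design or code.
import torch
import torch.nn as nn


class MultiAdapterLoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int,
                 num_adapters: int = 3, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # "1": the shared base projection, frozen after pre-training.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # "N": one (A, B) low-rank pair per adapter. B starts at zero so each
        # adapter initially leaves the base output unchanged (standard LoRA init).
        self.lora_a = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, in_features) * 0.01)
             for _ in range(num_adapters)])
        self.lora_b = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features, rank))
             for _ in range(num_adapters)])
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor, adapter_id: int) -> torch.Tensor:
        # Base path plus the selected adapter's low-rank update.
        delta = (x @ self.lora_a[adapter_id].T) @ self.lora_b[adapter_id].T
        return self.base(x) + self.scaling * delta


if __name__ == "__main__":
    layer = MultiAdapterLoRALinear(64, 64, num_adapters=3)
    x = torch.randn(2, 64)
    # The same input routed through two different adapters on top of one base.
    y0 = layer(x, adapter_id=0)
    y1 = layer(x, adapter_id=1)
    print(y0.shape, y1.shape)
```

Only the adapter parameters are trainable here, which is the usual motivation for LoRA-style adaptation on memory-constrained devices: the large base weights are stored once and each task adds only a small low-rank delta.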
Community
Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency (2025)
- SAIL-VL2 Technical Report (2025)
- BcQLM: Efficient Vision-Language Understanding with Distilled Q-Gated Cross-Modal Fusion (2025)
- Qwen3-Omni Technical Report (2025)
- Competitive Audio-Language Models with Data-Efficient Single-Stage Training on Public Data (2025)
- MANZANO: A Simple and Scalable Unified Multimodal Model with a Hybrid Vision Tokenizer (2025)
- MindVL: Towards Efficient and Effective Training of Multimodal Large Language Models on Ascend NPUs (2025)
