arxiv:2510.09189

LLaMAX2: Your Translation-Enhanced Model also Performs Well in Reasoning

Published on Oct 10
· Submitted by FeiYuan on Oct 14
Abstract

A novel translation-enhanced recipe using layer-selective tuning on parallel data improves translation performance in both high- and low-resource languages while maintaining reasoning proficiency.

AI-generated summary

General Large Language Models (LLMs) excel in reasoning, but those enhanced for translation struggle with reasoning tasks. To address this, we propose a novel translation-enhanced recipe that begins with instruct models and applies layer-selective tuning only on parallel data. Following this pipeline, we introduce the Qwen3-XPlus models, which demonstrate significant improvements in translation performance across both high- and low-resource languages, achieving 15+ spBLEU and 40+ xComet in low-resource languages such as Swahili. Interestingly, despite training only on small parallel datasets, Qwen3-XPlus achieves an average improvement of 1+ points on 7 multilingual tasks while maintaining proficiency comparable to the Qwen3 instruct model on 15 popular reasoning datasets. This work offers a promising approach to multilingual enhancement, significantly reducing complexity and enhancing accessibility for a wider range of languages. The code and model are publicly available.
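Since the abstract notes that the models are publicly released, a minimal inference sketch with the transformers library is shown below. The repo id and prompt template are assumptions for illustration, not the authors' published names; check the official release for the actual checkpoint.

```python
# Minimal sketch: translating with a Qwen3-XPlus checkpoint via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLaMAX/Qwen3-XPlus-8B"  # hypothetical repo id; replace with the published checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Instruct-style prompt; the exact translation prompt may differ in the official release.
messages = [
    {
        "role": "user",
        "content": "Translate the following English sentence into Swahili:\n"
                   "The weather is beautiful today.",
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```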

Community

Paper submitter

A new translation-enhanced pipeline:

  1. start from an instruct model (Qwen3 instruct)
  2. train with a small parallel dataset, tuning only selected layers (see the sketch after this list)
  3. enhance translation while still performing well in reasoning
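Below is a minimal sketch of what layer-selective tuning on an instruct model could look like with transformers, assuming the recipe freezes most of the network and only updates a chosen subset of decoder layers on parallel data. The base model id and the layer indices are illustrative assumptions; the paper does not specify them here.

```python
# Sketch of layer-selective tuning: freeze the instruct model, unfreeze selected layers only.
import torch
from transformers import AutoModelForCausalLM

model_id = "Qwen/Qwen3-8B"      # assumed instruct starting point
layers_to_tune = {0, 1, 2, 3}   # illustrative layer indices, not the paper's actual choice

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Freeze every parameter, then re-enable gradients for the selected decoder layers.
for param in model.parameters():
    param.requires_grad = False
for idx, layer in enumerate(model.model.layers):
    if idx in layers_to_tune:
        for param in layer.parameters():
            param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")

# The unfrozen layers would then be fine-tuned on parallel (source, target) sentence pairs
# with a standard causal language-modeling loss, e.g. via the transformers Trainer.
```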



Models citing this paper 2

Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 1