ResearchArcade Collection (10 items)
| venue (stringclasses, 5 values) | paper_openreview_id (stringclasses, 342 values) | paragraph_idx (int64, 1–314) | section (stringlengths, 2–2.38k) | content (stringlengths, 1–33.1k, nullable) |
|---|---|---|---|---|
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 1 | 
	Title | 
SYNERGISTIC APPROACH FOR SIMULTANEOUS OPTIMIZATION OF MONOLINGUAL, CROSS-LINGUAL, AND MULTILINGUAL INFORMATION RETRIEVAL |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 2 | 
	Abstract | 
Information retrieval across different languages is an increasingly important challenge in natural language processing. Recent approaches based on multilingual pre-trained language models have achieved remarkable success, yet they often optimize for either monolingual, cross-lingual, or multilingual retrieval performance at the expense of others. This paper proposes a novel hybrid batch training strategy to simultaneously improve zero-shot retrieval performance across monolingual, cross-lingual, and multilingual settings while mitigating language bias. The approach fine-tunes multilingual language models using a mix of monolingual and cross-lingual question-answer pair batches sampled based on dataset size. Experiments on XQuAD-R, MLQA-R, and MIRACL benchmark datasets show that the proposed method consistently achieves comparable or superior results in zero-shot retrieval across various languages and retrieval tasks compared to monolingual-only or cross-lingual-only training. Hybrid batch training also substantially reduces language bias in multilingual retrieval compared to monolingual training. These results demonstrate the effectiveness of the proposed approach for learning language-agnostic representations that enable strong zero-shot retrieval performance across diverse languages. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 3 | 
	1 INTRODUCTION | 
Information retrieval (IR) across different languages is an increasingly important challenge in natural language processing. However, optimizing information retrieval systems for multilingual scenarios is not a straightforward task, as it requires considering multiple distinct retrieval settings, each with its own set of challenges and requirements, including monolingual retrieval, cross-lingual retrieval, and multilingual retrieval. Monolingual retrieval refers to the task of retrieving documents in the same language as the user's query, focusing on developing effective ranking algorithms and relevance matching techniques. Cross-lingual retrieval involves queries and documents in different languages, requiring the system to bridge the language gap by employing techniques such as query translation, document translation, or cross-lingual representation learning. Multilingual retrieval requires the creation of a single ranked list of documents in multiple languages for a given query, addressing challenges such as language disparity, varying document lengths, and potential differences in content quality and relevance across languages while providing users with a unified and coherent ranked list of results. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 4 | 
	1 INTRODUCTION | 
Recent approaches to multilingual information retrieval have leveraged multilingual pre-trained language models such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) to encode queries and documents (Karpukhin et al., 2020). While these models can transfer relevance matching capabilities across languages, they tend to underperform on cross-lingual retrieval benchmarks due to the lack of explicit alignment between languages during pretraining (Zhang et al., 2023). LaREQA, introduced by Roy et al. (2020), targets strong alignment, requiring semantically related pairs across languages to be closer in representation space than unrelated pairs within the same language. Roy et al. (2020) found that augmenting the training data through machine translation proved effective in achieving robust alignment for MLIR. However, this approach compromises performance in monolingual retrieval tasks. Alternative approaches using parallel corpora, such as InfoXLM (Chi et al., 2021) and LaBSE (Feng et al., 2022), have been proposed to align sentences across languages. However, the scarcity of parallel data, especially for low-resource languages, remains a substantial challenge. To address these limitations, Lawrie et al. (2023) introduced a Multilingual Translate-Train approach using translated datasets, Hu et al. (2023) proposed contrastive losses to align representations and remove language-specific information, Huang et al. (2023a) presented a knowledge distillation framework for multilingual dense retrieval, and Lin et al. (2023a) extended Aggretriever (Lin et al., 2023b) for multilingual retrieval using semantic and lexical features. While the methods proposed in Hu et al. (2023) and Huang et al. (2023a) attempt to mitigate language bias, we raise the question: Is there a straightforward approach that addresses this issue by modifying the training data batches without necessitating the introduction of loss functions or new architectural components? In this paper, we propose a novel hybrid batch training strategy that simultaneously optimizes retrieval performance across monolingual, cross-lingual, and multilingual settings while also mitigating language bias. Our approach fine-tunes multilingual language models using a balanced mix of monolingual and cross-lingual question-answer pair batches. We collect a diverse set of English question-answer datasets and use machine translation to generate parallel question-answer pairs across several languages, including low-resource languages where parallel corpora may be limited (Fan et al., 2021; Kim et al., 2021; Costa-jussà et al., 2022). Our hybrid batch training approach significantly reduces the language bias that hinders the performance of multilingual retrieval systems by training the models on a diverse set of language pairs and encouraging the learning of language-agnostic representations. This mitigates the tendency of models to favor certain languages over others, ensuring that documents from multiple languages are fairly ranked based on their relevance to the query, regardless of the language. Extensive experiments on XQuAD-R, MLQA-R, and MIRACL benchmark datasets demonstrate the effectiveness of our proposed approach, with models trained using the hybrid batch strategy consistently achieving competitive results in zero-shot retrieval across various languages and retrieval tasks, outperforming models trained with only monolingual or cross-lingual data. Our approach also exhibits strong zero-shot generalization to unseen languages not included in the training data, highlighting its potential to expand the linguistic coverage of multilingual information retrieval systems. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 5 | 
	2.1 CONTRASTIVE LEARNING | 
Throughout the paper, we utilize the dual-encoder architecture with shared parameters, which is commonly used for dense retrieval (DR; Ni et al., 2022). Contrastive learning is a method for training DR models by contrasting positive pairs against negatives. Specifically, given a batch of triplets, each of which consists of a query and its relevant and irrelevant documents, $(q_n, d_n^+, d_n^-)$, $1 \le n \le |B|$, we minimize the InfoNCE loss for each query $q_n$: $$\mathcal{L} = \sum_{i=1}^{|B|} -\log \frac{e^{s_\theta(q_i, d_i^+)}}{e^{s_\theta(q_i, d_i^+)} + \sum_{j=1}^{|B|} e^{s_\theta(q_i, d_j^-)}}. \quad (1)$$ We use cosine similarity as the scoring function: $s_\theta(q, d) = \cos(E_\theta(q), E_\theta(d))$, where $E_\theta$ is the encoder parametrized by $\theta$. Following Wang et al. (2022), we incorporate prefix identifiers “Query:” and “Passage:” for queries and passages, respectively. As shown in prior work (Hofstätter et al., 2021; Lin et al., 2021), in-batch negative mining, the second term of the denominator in Eq. (1), plays a crucial role in dense retrieval training. In this work, we study different batch sampling approaches to control in-batch negative mining. |
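As a concrete illustration of Eq. (1), here is a minimal PyTorch sketch (not from the paper) of the InfoNCE loss with cosine-similarity scoring and in-batch negatives; the tensor names and pooling are illustrative assumptions.

```python
# Minimal sketch of the InfoNCE loss in Eq. (1) with cosine similarity and
# in-batch negatives; tensor names and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def info_nce_loss(q_emb: torch.Tensor, pos_emb: torch.Tensor, neg_emb: torch.Tensor) -> torch.Tensor:
    """q_emb, pos_emb, neg_emb: [B, H] embeddings of queries, relevant passages,
    and irrelevant passages for one batch of triplets."""
    q = F.normalize(q_emb, dim=-1)
    p = F.normalize(pos_emb, dim=-1)
    n = F.normalize(neg_emb, dim=-1)
    pos_scores = (q * p).sum(dim=-1, keepdim=True)   # s_theta(q_i, d_i^+), shape [B, 1]
    neg_scores = q @ n.T                              # s_theta(q_i, d_j^-), shape [B, B]
    logits = torch.cat([pos_scores, neg_scores], dim=1)
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)  # positive sits in column 0
    # cross_entropy averages -log softmax over the batch (Eq. (1) uses a sum).
    return F.cross_entropy(logits, labels)
```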
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 6 | 
	2.2 BATCH SAMPLING | 
Baseline Batch Sampling. We study the following training batching procedures introduced by Roy et al. (2020). (i) Monolingual batching (coined the X-X-mono model) creates each batch with a single language, where all the triplets consist of queries and passages in the same language. Note that we sample the language used to create the batch equally among all possible languages in our training data. (ii) Cross-lingual batching (coined the X-Y model) creates each batch such that all the triplets consist of queries and passages in different languages. Monolingual batching only focuses on contrastive learning for query-passage pairs in the same language, while cross-lingual batching mines positives and in-batch negatives from diverse languages. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 7 | 
	2.2 BATCH SAMPLING | 
As shown in Roy et al. (2020), the X-Y model is more effective in cross-lingual retrieval scenarios and shows reduced language bias; however, the X-X-mono model surpasses the X-Y model in monolingual retrieval. These results inspire us to explore whether simply combining the two batch sampling approaches can achieve improvement in both monolingual and cross-lingual retrieval effectiveness. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 8 | 
	2.2 BATCH SAMPLING | 
	Figure 1: Illustrative example of monolingual, cross-lingual, and multilingual information retrieval. | 
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 9 | 
	2.2 BATCH SAMPLING | 
Figure 2: Illustrations of the proposed hybrid batch sampling (assuming we only have training data in English, Arabic, and Japanese), where our model is exposed to monolingual and cross-lingual batches with the respective probability of α and β = 1 − α. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 10 | 
	2.2 BATCH SAMPLING | 
Hybrid Batch Sampling. In this work, we propose to combine the two aforementioned baseline sampling strategies. Specifically, when creating batch training data, we set α and β = 1 − α as the respective probability of using monolingual and cross-lingual batching, as shown in Fig. 2.1 |
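A minimal sketch of the hybrid sampling procedure described above, assuming parallel triplets keyed by language; the data layout and function names are illustrative, not the authors' implementation.

```python
# Minimal sketch of hybrid batch sampling: with probability alpha draw a
# monolingual (X-X) batch, otherwise a cross-lingual (X-Y) batch.
import random

def sample_hybrid_batch(parallel_triplets, languages, batch_size, alpha=0.5):
    """parallel_triplets: list of dicts mapping language -> (query, pos_passage, neg_passage)."""
    examples = random.sample(parallel_triplets, batch_size)
    if random.random() < alpha:                 # monolingual batch: one language for all triplets
        lang = random.choice(languages)
        return [ex[lang] for ex in examples]
    batch = []                                  # cross-lingual batch: query and passage languages differ
    for ex in examples:
        q_lang, d_lang = random.sample(languages, 2)
        query, _, _ = ex[q_lang]
        _, pos, neg = ex[d_lang]
        batch.append((query, pos, neg))
    return batch
```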
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 11 | 
	2.2 BATCH SAMPLING | 
1 In the experiments, we found out that setting the hyperparameters α and β to 0.5 resulted in the best balance |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 12 | 
	2.2 BATCH SAMPLING | 
	between the performance of the proposed model on monolingual and multilingual evaluations. | 
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 13 | 
	3 EXPERIMENTAL SETUP | 
This section presents the experimental setup for evaluating the proposed hybrid batch training strategy. We first discuss the training process, including datasets, and multilingual pre-trained models. Next, we introduce the evaluation datasets and metrics used to assess the performance of the fine-tuned models. Finally, we describe the evaluation settings for monolingual, cross-lingual, and multilingual retrieval tasks. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 26 | 
	4.1 SUMMARY OF MAIN RESULTS | 
In particular, Tables 3 through 6 showcase the MAP and Recall scores for zero-shot monolingual, cross-lingual, and multilingual retrieval tasks on the XQuAD-R and MLQA-R datasets, considering both fine-tuned XLM-R and LaBSE models. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 37 | 
	4.2 | 
Table 4: Performance comparison of MAP and Recall scores across zero-shot monolingual, cross-lingual, and multilingual retrieval tasks on the MLQA-R dataset for a fine-tuned XLM-R model and different training batch types. The best result is highlighted in bold, and the second-best result is underlined. |
| 
	ICLR.cc/2025/Conference | 
	vVlNBaiLdN | 1 | 
	Title | 
	002003004005006007 | 
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 14 | 
	3.1 TRAINING | 
Datasets. To conduct the study of batch sampling, parallel query-passage training pairs are required such that we can construct cross-lingual triplets, where each query and its relevant (or irrelevant) passage are in different languages. mMARCO (Bonifacio et al., 2021) is the only dataset with parallel queries and passages across 14 languages. In our study, we further scale the size of training data by translating the existing question-answering datasets. Specifically, we developed our in-house machine translation pipeline to create parallel QA pairs for the monolingual datasets across nine languages: Arabic, Chinese, English, German, Hindi, Russian, Spanish, Thai, and Turkish. The additional training data used in our study include DuoRC (Saha et al., 2018), EntityQuestions (Sciavolino et al., 2021), Google NQ (Kwiatkowski et al., 2019), MFAQ (De Bruyn et al., 2021), Mr. Tydi (Zhang et al., 2021), NewsQA (Trischler et al., 2017), WikiQA (Yang et al., 2015), and Yahoo QA mined from Yahoo Answers. Appendix A.1 provides comprehensive details about the training datasets. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 15 | 
	3.1 TRAINING | 
Training Setup. We apply the baseline and our proposed hybrid batching to fine-tune two representative multilingual pre-trained models: (i) XLM-RoBERTa (XLM-R) (Conneau et al., 2020); and (ii) language-agnostic BERT sentence embedding (LaBSE) (Feng et al., 2022). Model training experiments were conducted using one NVIDIA A100-80GB GPU. We fine-tune the pre-trained models using the AdamW optimizer (Loshchilov & Hutter, 2018) with weight decay set to 1e-2, a learning rate of 3e-5, and a batch size of 100. We apply early stopping (Prechelt, 1998) to select the model checkpoint with the lowest validation loss on the SQuADShifts dataset (Miller et al., 2020). Note that the validation set used for checkpoint selection consists solely of English examples. |
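A minimal sketch of the reported optimization settings (AdamW, weight decay 1e-2, learning rate 3e-5, batch size 100); the checkpoint name and the surrounding training loop are assumptions, not the authors' code.

```python
# Minimal sketch of the fine-tuning setup described above; "xlm-roberta-base"
# is an assumed checkpoint and the training loop itself is omitted.
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("xlm-roberta-base")      # or a LaBSE checkpoint
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5, weight_decay=1e-2)
BATCH_SIZE = 100
# Each step: draw a hybrid batch (see the sampling sketch above), encode
# "Query: ..." / "Passage: ..." texts, compute the InfoNCE loss, and step the
# optimizer; early stopping monitors validation loss on an English-only set.
```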
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 16 | 
	3.1 TRAINING | 
Hyperparameter Tuning for Hybrid Batch Sampling. To determine the optimal values for the hyperparameters α and β in our hybrid batch sampling approach, we conducted a comprehensive grid search. We evaluated α values ranging from 0 to 1, with β always set to 1 − α. Each configuration was tested on a held-out validation set comprising a diverse selection of languages. We assessed the model's performance across monolingual, cross-lingual, and multilingual retrieval tasks. Our goal was to find a balance that would optimize performance across all three retrieval settings without significantly sacrificing any particular one. We found that setting α = 0.5 provided the best overall results, striking an effective balance between monolingual and cross-lingual/multilingual performance. This equal weighting between monolingual and cross-lingual batches allowed our model to maintain strong monolingual retrieval capabilities while also excelling in cross-lingual and multilingual scenarios. We also observed that the model's performance was relatively stable for α values between 0.4 and 0.6, indicating some robustness to small variations in these hyperparameters. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 17 | 
	3.2 EVALUATION | 
Datasets. We evaluate the retrieval effectiveness of different models on three distinct datasets, including XQuAD-R (Roy et al., 2020) and MLQA-R (Roy et al., 2020).2 XQuAD-R and MLQA-R are question-answering datasets with parallel questions and passages in 11 languages and 7 languages, respectively. Thus, these two datasets can be used to evaluate monolingual, cross-lingual, and multilingual retrieval effectiveness. Appendix A.2 provides comprehensive details about the evaluation datasets. Furthermore, we report the detailed monolingual retrieval effectiveness on the MIRACL dev set (Zhang et al., 2022) in Tables 12 and 13 in Appendix A.3.1. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 18 | 
	3.2 EVALUATION | 
2 The evaluation of the models is conducted on datasets that are completely separate and distinct from the ones used for training. More specifically, the models have not encountered any data samples, whether from the training or testing splits, of the evaluation datasets during their training process. This ensures an unbiased assessment of the ability of the models to generalize and perform effectively on unseen data. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 19 | 
	3.2 EVALUATION | 
	XQuAD-R (↑) | 
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 20 | 
	3.2 EVALUATION | 
Table 1: Main experiments on XQuAD-R and MLQA-R. mAP (macro-averaged across all languages) numbers are reported. Mo., Cr., and Mul. denote the monolingual, cross-lingual, and multilingual retrieval settings, respectively. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 21 | 
	3.2 EVALUATION | 
[Fragment of Table 1: Mo. mAP values .792, .755, .798 (XLM-R: X-X, X-Y, Hybrid) and .808, .801, .817 (LaBSE: X-X, X-Y, Hybrid). Fragment of Table 2 (language bias, ↓): XQuAD-R values 410, 295, 287 (XLM-R: X-X, X-Y, Hybrid) and 262, 225, 221 (LaBSE: X-X, X-Y, Hybrid); remaining cells not recoverable.] Metrics and Settings. We report the mean average precision (mAP) for XQuAD-R and MLQA-R since the metric considers the retrieval quality when multiple relevant passages for a given query exist.3 We conduct retrieval using queries in language XQ against the corpus in language XC and report the macro-averaged mAP over all the cross-lingual (denoted Cr.) language-pair combinations (XQ ≠ XC) and the monolingual (denoted Mo.) combinations (XQ = XC). For example, in XQuAD-R (MLQA-R), we have 11 (7) parallel languages; thus, there are 110 (42) and 11 (7) cross-lingual and monolingual retrieval settings, respectively. For multilingual (denoted Mul.) retrieval, we conduct retrieval using queries in language XQ against all the parallel corpora in different languages. We report the detailed results for specific languages in Section 4.2. |
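A minimal sketch of how the macro-averaged mAP over language combinations could be computed; map_score is a placeholder for running retrieval with queries in one language against a corpus in another and scoring mAP.

```python
# Minimal sketch of macro-averaging mAP over monolingual (X_Q == X_C) and
# cross-lingual (X_Q != X_C) language combinations; map_score is a placeholder.
from itertools import product
from statistics import mean

def macro_averaged_map(languages, map_score):
    mono = [map_score(lang, lang) for lang in languages]
    cross = [map_score(xq, xc) for xq, xc in product(languages, repeat=2) if xq != xc]
    return mean(mono), mean(cross)   # e.g. 11 mono and 110 cross settings for XQuAD-R
```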
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 22 | 
	3.2 EVALUATION | 
	Model | 
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 23 | 
	4.1 SUMMARY OF MAIN RESULTS | 
Zero-shot Retrieval Evaluation. We report the effectiveness of different batch sampling strategies in Table 1. We observe that X-X and X-Y sampling only perform well in the monolingual and cross-lingual retrieval settings, respectively. These results indicate that optimization for either monolingual or cross-lingual retrieval alone may come at the expense of the other. Our hybrid batch sampling, on the other hand, optimizes both retrieval settings. As a result, our hybrid batch sampling achieves the best performance in multilingual retrieval settings, where the ability of the models to handle both monolingual and cross-lingual retrieval tasks is evaluated.4 Finally, the same conclusion holds when using XLM-R and LaBSE as initialization: hybrid batch sampling is better than the other two baseline batch sampling approaches. A thorough analysis of the retrieval performance across various training batch types, retrieval tasks, languages, and datasets is presented in Section 4.2.1. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 24 | 
	4.1 SUMMARY OF MAIN RESULTS | 
3 The results for the Recall metric are in Section 4.2.1. 4 The performance of the models is evaluated on certain languages, such as Greek (el) and Vietnamese (vi), which were not included in the training data. This aspect of the evaluation process aims to assess the ability of the models to handle languages they have not been explicitly trained on, providing insights into their zero-shot cross-lingual transfer capabilities (see Section 4.2.1). |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 25 | 
	4.1 SUMMARY OF MAIN RESULTS | 
	Table 2: Language bias in multilingual retrieval. | 
| 
	ICLR.cc/2025/Conference | 
	9DrPvYCETp | 22 | 
	3 SHARED RECURRENT MEMORY TRANSFORMER | 
R(s, u, a(U)) : S × U × A^n → R, O(s, a) : S × A → O. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 27 | 
	4.1 SUMMARY OF MAIN RESULTS | 
Language Bias Evaluation. To gain insight into why hybrid batch sampling achieves strong performance in multilingual retrieval settings, we investigate the language bias exhibited by models fine-tuned using different batch sampling strategies. Following Huang et al. (2023b), we measure the language bias using the maximum rank distance among all the parallel corpora. That is, for each query, we calculate the difference between the highest and lowest rank of the relevant passages.5 We report the macro-averaged rank distance across all languages in Table 2 and present the comprehensive results in Section 4.2.2. Specifically, Table 7 shows the rank distances for the XQuAD-R dataset, while Table 8 displays the rank distances for the MLQA-R dataset, both considering fine-tuned XLM-R and LaBSE models under different training batch types. As shown in Table 2, models fine-tuned with cross-lingual batch sampling show less language bias compared to those fine-tuned with monolingual batch sampling. It is worth noting that our hybrid batch sampling, combining both baseline sampling strategies, still maintains low language bias without sacrificing monolingual retrieval effectiveness. |
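A minimal sketch of the rank-distance measure of language bias described above; ranked_ids and relevant_ids are illustrative inputs (the multilingual ranked list for a query and the IDs of its parallel relevant passages).

```python
# Minimal sketch of the maximum rank distance used as the language-bias measure:
# for each query, the gap between the highest and lowest rank of its parallel
# relevant passages; the reported number is the mean over queries.
def rank_distance(ranked_ids, relevant_ids):
    ranks = [ranked_ids.index(doc_id) + 1 for doc_id in relevant_ids]
    return max(ranks) - min(ranks)

def mean_rank_distance(per_query_results):
    """per_query_results: iterable of (ranked_ids, relevant_ids) pairs."""
    distances = [rank_distance(r, rel) for r, rel in per_query_results]
    return sum(distances) / len(distances)
```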
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 28 | 
	4.2 | 
IN-DEPTH ANALYSIS 4.2.1 ZERO-SHOT RETRIEVAL EVALUATION ON XQUAD-R AND MLQA-R We present the experimental results of our proposed hybrid batching approach for improving the retrieval performance of fine-tuned multilingual language models across various tasks and datasets. We compare our method with two baseline training batch methods (X-X-mono and X-Y) using two pre-trained multilingual language models (XLM-R and LaBSE) on two evaluation datasets (XQuAD-R and MLQA-R). Performance is measured using Mean Average Precision (MAP), Recall@1 (R@1), and Recall@10 (R@10) across monolingual, cross-lingual, and multilingual retrieval settings. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 29 | 
	4.2 | 
Consistent improvement across languages and tasks: Tables 3 through 6 demonstrate the performance of the proposed hybrid batching approach when applied to the XLM-R and LaBSE models on the XQuAD-R and MLQA-R datasets. Our method consistently achieves the highest mean MAP and mean R@1 scores across monolingual and cross-lingual settings for all combinations of datasets and models. Furthermore, our proposed method consistently achieves either the highest mean MAP and mean R@10 scores in the multilingual retrieval setting or performs comparably to the X-Y batching method, which is specifically optimized for multilingual retrieval. Notably, there is a substantial performance gap between the second-best approach (either our method or X-Y) and the third-best approach (X-X-mono) in terms of these evaluation metrics for multilingual retrieval. This demonstrates the robustness and effectiveness of the proposed method in improving retrieval performance, regardless of the language or task complexity. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 30 | 
	4.2 | 
Balanced performance across evaluation metrics: The proposed approach strikes a balance between the X-X-mono (optimized for the monolingual retrieval setting) and X-Y (cross-lingual/multilingual retrieval settings) baselines. This compromise is evident when analyzing the performance of individual languages across different retrieval tasks. In the monolingual retrieval setting, the proposed method tends to outperform or maintain comparable performance to the X-X-mono baseline for most languages. Similarly, the proposed approach generally surpasses the X-Y baseline across most languages in the cross-lingual and multilingual retrieval settings. A key insight is that in cases where our approach does not achieve the top performance for a specific language and retrieval setting, it consistently performs as a strong runner-up to the approach specifically optimized for that retrieval setting. Simultaneously, our method maintains a significant advantage over the third-best approach in such cases. This trend is consistent for XLM-R and LaBSE models on the XQuAD-R and MLQA-R datasets. By effectively finding a middle ground between the strengths of the X-X-mono and X-Y baselines, the proposed method offers a versatile solution that can handle monolingual, cross-lingual, and multilingual retrieval tasks across a wide range of languages without significantly compromising performance in any particular setting. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 31 | 
	4.2 | 
5 Note that in XQuAD-R and MLQA-R, each query only has one relevant passage in each language. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 32 | 
	4.2 | 
	Evaluation of Fine-tuned XLM-R Model on XQuAD-R Dataset | 
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 33 | 
	4.2 | 
Table 3: Performance comparison of MAP and Recall scores across zero-shot monolingual, cross-lingual, and multilingual retrieval tasks on the XQuAD-R dataset for a fine-tuned XLM-R model and different training batch types. The best result is highlighted in bold, and the second-best result is underlined. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 34 | 
	4.2 | 
[Tables 3–6: per-source-language MAP, R@1, and R@10 for fine-tuned XLM-R and LaBSE models on XQuAD-R (source languages ar, de, el, en, es, hi, ru, th, tr, vi, zh) and MLQA-R (ar, de, en, es, hi, vi, zh) under X-X-mono, X-Y, and Proposed batching, in the monolingual, cross-lingual, and multilingual settings, with per-column means; numeric cells are not recoverable.] Zero-shot Generalization to unseen languages. The proposed approach exhibits remarkable zero-shot generalizability, as evidenced by its strong performance across different multilingual pre-trained models and evaluation datasets for the Greek (el) and Vietnamese (vi) languages, which were not included in the training data used to develop the model. For example, in Table 5, which presents results for the LaBSE model on the XQuAD-R dataset, the proposed method achieves the best MAP and Recall@1 scores for Vietnamese, a low-resource language, in both monolingual and cross-lingual retrieval settings, outperforming the X-X-mono and X-Y approaches. In the multilingual retrieval setting, the proposed approach achieves MAP and R@10 scores of 0.6809 and 0.5964, respectively. These scores are very close to the 0.6828 and 0.5979 achieved by the X-Y model, which is primarily optimized for multilingual retrieval. Additionally, the proposed method significantly outperforms the X-X-mono approach, which is mainly optimized for monolingual retrieval and achieves scores of 0.6506 and 0.5661. 4.2.2 LANGUAGE BIAS EVALUATION Significant mitigation of language bias compared to monolingual batching. The proposed approach substantially reduces language bias compared to the X-X-mono baseline. In Table 7 (XQuAD-R), the proposed method achieves a mean rank distance of 286.6 using XLM-R, compared to 410.2 for X-X-mono, representing a 30.1% reduction in language bias. Similarly, for LaBSE, the proposed approach reduces the mean rank distance by 15.4% (from 261.5 to 221.1). In Table 8 (MLQA-R), the proposed method achieves a mean rank distance of 227.1 using XLM-R, compared to 287.5 for X-X-mono, resulting in a 21% reduction in language bias. For LaBSE, the proposed approach reduces the mean rank distance by 13.4% (from 225.3 to 195). These significant reductions highlight the effectiveness of the proposed method in mitigating the language bias of the retrieval system. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 35 | 
	4.2 | 
	MAP | 
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 36 | 
	4.2 | 
Competitive reduction in average rank distance compared to cross-lingual batching. The proposed approach exhibits competitive performance in reducing the average rank distance compared to the strong X-Y baseline. In Table 7 (XQuAD-R), the proposed method achieves the best mean rank distance of 286.6 using XLM-R, outperforming both the X-X-mono (410.2) and X-Y (295.4) baselines. For LaBSE, the proposed approach obtains a mean rank distance of 221.1, which is better than the X-Y baseline (225.2). In Table 8 (MLQA-R), the proposed method achieves a slightly higher mean rank distance than the X-Y baseline for XLM-R (227.1 vs. 226.7), but outperforms the X-Y baseline for LaBSE (195 vs. 198.3). These results demonstrate that the proposed approach is highly competitive in reducing the average rank distance and can even outperform the strong X-Y baseline in certain cases. This reduction in average rank distance directly translates to a decrease in language bias, as the proposed method effectively brings relevant documents closer together in the retrieval results, regardless of the language. |
| 
	ICLR.cc/2025/Conference | 
	gtVo4xcpFI | 31 | 
	3.3 BENCHMARK DATASET CONSTRUCTION | 
	Amount Description | 
| 
	ICLR.cc/2025/Conference | 
	gtVo4xcpFI | 32 | 
	57 60 | 
	Focus on evaluating the grasp of the LLM on fundamental hardware concepts and principles. | 
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 38 | 
	4.2 | 
Table 5: Performance comparison of MAP and Recall scores across zero-shot monolingual, cross-lingual, and multilingual retrieval tasks on the XQuAD-R dataset for a fine-tuned LaBSE model and different training batch types. The best result is highlighted in bold, and the second-best result is underlined. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 39 | 
	4.2 | 
Table 6: Performance comparison of MAP and Recall scores across zero-shot monolingual, cross-lingual, and multilingual retrieval tasks on the MLQA-R dataset for a fine-tuned LaBSE model and different training batch types. The best result is highlighted in bold, and the second-best result is underlined. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 40 | 
	4.2 | 
Tables 7 and 8 present a comprehensive comparison of the average rank distance metric6 (Huang et al., 2023a) across different multilingual retrieval tasks using fine-tuned XLM-R and LaBSE models. The proposed approach is evaluated against two baseline methods, X-X-mono and X-Y, on two datasets: XQuAD-R (Table 7) and MLQA-R (Table 8). The lower the average rank distance, the better the performance. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 41 | 
	5 CONCLUSION | 
Developing IR models that can handle queries and documents across many languages is increasingly critical. In this work, we introduced a hybrid batch training strategy to optimize IR systems for monolingual, cross-lingual, and multilingual performance simultaneously. By fine-tuning multilingual language models on a mix of monolingual and cross-lingual question-answer pairs, the models learn robust representations that generalize well across languages and retrieval settings. Extensive experiments demonstrate that this simple yet effective approach consistently matches or outperforms models trained with only monolingual or cross-lingual data, and substantially mitigates the language bias that hinders multilingual retrieval performance. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 42 | 
	5 CONCLUSION | 
6 Rank distance is the average, over all queries and their relevant documents, of the difference between the maximum and minimum ranks assigned by an MLIR model to parallel (semantically similar) relevant documents across different languages. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 43 | 
	6 LIMITATIONS | 
This work focuses on optimizing retrieval performance but does not address issues related to result diversity, fairness, or transparency in multilingual settings. For example, it may reflect societal biases present in the training data. Addressing these concerns is important for building equitable multilingual retrieval systems. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 44 | 
	6 LIMITATIONS | 
Furthermore, the experiments focus only on the XQuAD-R, MLQA-R, and MIRACL benchmark datasets. While these cover a range of languages, they may not be fully representative of real-world multilingual information retrieval needs. The robustness of the results to other domains, question types, and retrieval scenarios is an exciting future direction. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 45 | 
	6 LIMITATIONS | 
	Average Rank Distance over XQuAD-R Dataset | 
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 46 | 
	6 LIMITATIONS | 
Table 7: Comparison of the rank distances among relevant documents of the XQuAD-R dataset across rank lists generated by fine-tuned XLM-R and LaBSE models for zero-shot multilingual retrieval tasks under different training batch types. The best result is highlighted in bold, and the second-best result is underlined. |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 47 | 
	6 LIMITATIONS | 
[Tables 7 and 8: per-source-language average rank distances for fine-tuned XLM-R and LaBSE models under X-X-mono, X-Y, and Proposed batching, on XQuAD-R (languages ar, de, el, en, es, hi, ru, th, tr, vi, zh) and MLQA-R (languages ar, de, en, es, hi, vi, zh), with per-model means; numeric cells are not recoverable.] |
| 
	ICLR.cc/2025/Conference | 
	zkNCWtw2fd | 48 | 
	6 LIMITATIONS | 
	LaBSE | 
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 1 | 
	Title | 
	EXECUTION-EVAL: CAN LANGUAGE MODELS EXECUTE REAL-WORLD CODE? | 
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 2 | 
	Abstract | 
As language models advance, traditional benchmarks face challenges of dataset saturation and disconnection from real-world performance, limiting our understanding of true model capabilities. We introduce EXecution-Eval (EXE), a benchmark designed to assess LLMs' ability to execute code and predict program states. EXE attempts to address key limitations in existing evaluations: difficulty scaling, task diversity, training data contamination, and cost-effective scalability. Comprising over 30,000 tasks derived from 1,000 popular Python repositories on GitHub, EXE spans a wide range of lengths and algorithmic complexities. Tasks require models to execute code, necessitating various operations including mathematical reasoning, logical inference, bit manipulation, string operations, loop execution, and maintaining multiple internal variable states during computation. Our methodology involves: (a) selecting and preprocessing GitHub repositories, (b) generating diverse inputs for functions, (c) executing code to obtain ground-truth outputs, and (d) formulating tasks that require models to reason about code execution. This approach allows for continuous new task generation for as few as 1,123 tokens, significantly reducing the risk of models “training on the test set.” We evaluate several state-of-the-art LLMs on EXE, revealing insights into their code comprehension and execution capabilities. Our results show that even the best-performing models struggle with complex, multi-step execution tasks, highlighting specific computational concepts that pose the greatest challenges for today's LLMs. Furthermore, we review EXE's potential for finding and predicting errors to aid in assessing a model's cybersecurity capabilities. We propose EXE as a sustainable and challenging testbed for evaluating frontier models, offering insights into their internal mechanistic advancement. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 3 | 
	1 INTRODUCTION | 
Language model benchmarks are facing challenges of rapid saturation (Ott et al., 2022) and an increasing disconnect from the real-world performance perceived by end-users (Zheng et al., 2023). Due to this, benchmarks are being continually created to address failure modes; e.g., SuperGLUE targeting GLUE's low problem difficulty (Wang et al., 2019), BIG-bench targeting generally low evaluation diversity (Srivastava et al., 2022), and Auto-Arena-Hard targeting training-set contamination and data diversity in Chatbot-Arena (Li et al., 2024; Chiang et al., 2024). These failure modes all demonstrate the challenge in linking the mechanistic improvements within language models to human-understandable tasks. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 4 | 
	1 INTRODUCTION | 
Hence, to maximise an evaluation's utility we aim to minimise the common failure modes of: a) difficulty, not ensuring an unbounded scale from small trivial problems to complex multi-step problems, b) diversity, not ensuring a representative distribution across a large space of problems, c) novelty, not ensuring continually fresh, out-of-training data samples can be generated and, d) scalability, not ensuring tasks are cost-effective to generate in the thousands and beyond. |
| 
	ICLR.cc/2025/Conference | 
	PwxYoMvmvy | 49 | 
	5 Conclusions | 
Zhengdao Chen, Soledad Villar, Lei Chen, and Joan Bruna. On the equivalence between graph isomorphism testing and function approximation with GNNs. Advances in Neural Information Processing Systems, 32, 2019. |
| 
	ICLR.cc/2025/Conference | 
	gtVo4xcpFI | 33 | 
	57 60 | 
	Apply concepts to new and complex scenarios for generalization. | 
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 5 | 
	1 INTRODUCTION | 
Motivated by these challenges we introduce EXecution-Eval (EXE), an evaluation replicating one of the primary tasks humans perform while coding: predicting and comparing a final program state for a given set of inputs, as seen in Figure 1. EXE is designed to avoid the aforementioned failure modes, emphasising difficulty (a smooth scale from trivial one-step, one-line functions to difficult hundreds-of-steps, multi-layer functions), diversity (an unbounded number of test cases generatable for tasks from 1,000 GitHub repos), novelty (program inputs can be continually generated) and scalability (initial release containing 30,000+ problems at a cost of $33). |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 6 | 
	1 INTRODUCTION | 
EXE also holds theoretical inspiration. Fowler et al. (2022) replicated positive pedagogical correlations found by Lopez et al. (2008) between the abilities of CS1 students to “trace” programs (i.e. manually predict outputs and write the internal state out line by line) and their abilities to pass code writing and explanation exams. This is mirrored in CRUX-Eval's (Gu et al., 2024) findings, where they observe a moderate correlation between a model's ability to execute a block of code and a model's HumanEval (Chen et al., 2021) code-writing Pass@1 rate. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 7 | 
	1 INTRODUCTION | 
Figure 1: An example task from Apache Airflow's GitHub repository (code simplified to fit within the diagram). EXE sources tasks from 1,000 Python repositories, generates test cases for them, and compares the LLM's ability to execute code against Python's interpreter. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 8 | 
	2 EVALUATION FRAMEWORK | 
As seen in Figure 1, an EXE task is to predict a function's return value or error from: a) a code snippet and b) a set of input arguments. Code snippets are extracted from PyPI's 1,000 most popular Python projects hosted on GitHub; we select our snippets to be pure (i.e. deterministic, no side effects), language-model generatable (i.e. argument types of ints, lists, ...) and to only require builtins (local imports and external libraries are inlined for the snippet). To realise this we follow a three-stage pipeline (Figure 2): 1. Repo Selection and Code Scraping. We first select the top 1,000 most popular PyPI packages and collate the corresponding GitHub repos where possible, similar to (Jimenez et al., 2023). Repositories are filtered to include only those with permissive licences that allow derivative works with attribution. These repos are then pulled down locally and filtered based on a static Abstract Syntax Tree (AST) analysis determining which repositories contain type-annotated code. 2. Function Selection and Dependency Collation. We perform a static AST analysis to filter to functions with LLM-generatable argument and return type annotations. Further AST analysis then recursively identifies dependent elements (modules, functions, classes, variables, ...) across files, builds a dependency graph, and inlines them into a base task. Finally, base tasks containing side effects or non-deterministic code such as environment variables, process calls, randomness or network requests are filtered out. See Appendix A.3 for the step-by-step methodology and A.5 for detail on acceptable type annotations and filtering. 3. Test Case Generation. Using the argument type annotations we construct an LLM function-calling schema that generates a diverse set of inputs. The base task code is then executed with each generated input and the result and runtime statistics are logged. This forms the test case (base task code + generated input), output (returned result or error from executed code) and statistics (runtime statistics + static AST analysis statistics). See Appendix A.2 for the step-by-step methodology and Appendix A.6 for details on statistics. |
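A minimal sketch of the kind of static AST filtering described in stage 2, keeping only top-level functions whose arguments and return value carry simple, generatable builtin type annotations; the accepted-type list and helper names are assumptions rather than the paper's exact filter.

```python
# Minimal sketch of filtering a module's source to functions with simple,
# LLM-generatable argument and return type annotations via the ast module.
import ast

SIMPLE_TYPES = {"int", "float", "str", "bool", "list", "dict", "set", "tuple"}

def is_simple_annotation(node) -> bool:
    if isinstance(node, ast.Name):
        return node.id in SIMPLE_TYPES
    if isinstance(node, ast.Subscript):            # e.g. list[int], dict[str, int]
        return is_simple_annotation(node.value)
    return False

def generatable_functions(source: str):
    tree = ast.parse(source)
    for node in tree.body:                         # top-level functions only
        if not isinstance(node, ast.FunctionDef):
            continue
        args = node.args.args
        if (node.returns is not None
                and all(a.annotation is not None and is_simple_annotation(a.annotation) for a in args)
                and is_simple_annotation(node.returns)):
            yield node.name
```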
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 9 | 
	2 EVALUATION FRAMEWORK | 
Through these stages of filtering, the original top 1,000 repositories are filtered down to the 33,875 task instances which comprise EXE. A high-level breakdown of these task instances across repositories is presented in Figure 3. We note some repositories are overrepresented, primarily due to being more modern (using type annotations) and the style of code (shorter deterministic pieces). |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 10 | 
	2 EVALUATION FRAMEWORK | 
Figure 2: Three-stage EXE task generation pipeline. Detailed example tasks and generated inputs can be found in Appendix A.1. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 11 | 
	2 EVALUATION FRAMEWORK | 
Figure 3: We observe task counts per repository to have a near-logarithmic falloff. Note: Based on manual observations, several repositories are removed from EXE due to thousands of similar functions with only single modifications, for example changing a URL address. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 12 | 
	2.1 TASK FORMATION | 
Model input. The model is given a complete snippet of code alongside the input state to be executed. The model is then tasked to predict the resulting return value, or, in the case that an exception is raised, the model is instructed to generate an exception type and value. In practice, we prompt models with an odata json representation and use a parser to ensure valid generations. We do append one additional user reply with the parsing error if the model's response fails to parse. Examples of input instances can be found in Appendix A.1. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 13 | 
	2.1 TASK FORMATION | 
Evaluation metrics. To evaluate a proposed solution, we use the pass@k metric (Chen et al., 2021), comparing the ground truth and the generated prediction as JSON objects (set and frozenset are sorted before conversion to JSON lists). If the original code produced an exception, we compare the type and message (excluding the stack trace) using a language model comparison. See the detailed methodology in Appendix A.7 and examples of generated outputs in Appendix A.1. |
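A minimal sketch of the described output comparison, serializing ground truth and prediction as JSON after sorting sets and frozensets into lists; the exception-message comparison via a language model is omitted and names are illustrative.

```python
# Minimal sketch of comparing a predicted return value against the ground truth
# as JSON, sorting set/frozenset values into lists first.
import json

def normalize(value):
    if isinstance(value, (set, frozenset)):
        return sorted((normalize(v) for v in value), key=repr)
    if isinstance(value, (list, tuple)):
        return [normalize(v) for v in value]
    if isinstance(value, dict):
        return {str(k): normalize(v) for k, v in value.items()}
    return value

def outputs_match(ground_truth, prediction) -> bool:
    return (json.dumps(normalize(ground_truth), sort_keys=True)
            == json.dumps(normalize(prediction), sort_keys=True))
```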
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 14 | 
	2.2 FEATURES OF EXE | 
Diversity of inputs and outputs. Unlike many benchmarks focused on a particular subject-matter area, a task in this eval may require a model to perform mathematical reasoning, logical inference, bit manipulation, string operations, loop execution, or to maintain multiple internal variables during computation. Furthermore, these may only form part of an algorithm that the model has to execute. Our random human inspection has uncovered algorithmic time complexities spanning from O(1) to O(x^n), and structured analysis has found tasks with code context lengths ranging from 440 to 311,000 tokens. Ensuring this broad diversity reduces the risk of hitting a local maximum and increases our opportunity to measure internal capabilities across a range of difficulties. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 15 | 
	2.2 FEATURES OF EXE | 
Continually updatable. Both our code collection and task input generation processes can create new tasks with minimal human oversight. Simply re-running our code collection to pull the latest commits, or directing it towards an uncollected Python GitHub repository, will create new task instances. Furthermore, we can continue to generate new test cases for existing tasks; our test case generator automatically avoids generating seen inputs. Hence, EXE can be extended continually with new task instances, ensuring answers were not included in the training corpora of models under evaluation. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 16 | 
	2.2 FEATURES OF EXE | 
Cost-effective scalability. With generation of new tasks requiring an average of 1,112 input tokens (batch of 15) and evaluation of tasks typically requiring 1,123 tokens, ExecEval can be generated, tested and continually updated at a fraction of the cost of human-curated benchmarks. Our initial dataset of 33,875 cases incurred an approximate cost of $33 to produce and $95 to test on. |
| 
	ICLR.cc/2025/Conference | 
	PwxYoMvmvy | 50 | 
	5 Conclusions | 
	Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. Adaptive universal generalized pagerank graph neural network. In International Conference on Learning Representations, 2020. | 
| 
	ICLR.cc/2025/Conference | 
	PwxYoMvmvy | 51 | 
	5 Conclusions | 
Weilin Cong, Morteza Ramezani, and Mehrdad Mahdavi. On provable benefits of depth in training graph convolutional networks. Advances in Neural Information Processing Systems, 34:9936–9949, 2021. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 17 | 
	2.2 FEATURES OF EXE | 
Long multi-step problems with smooth difficulty scaling. We provide a continuous spectrum of task difficulties, ranging from one-step, one-line functions to multi-file, multi-class, multi-hundred-step tasks. Our most complex tasks include function call depths (non-recursive) of up to 13 levels (median: 2), separate identifier counts (i.e. variable names, function names, ...) of up to 823 (median: 16) and up to 63 if statements (median: 1). This smooth scaling of difficulty allows for a more detailed measurement of model coherence along multi-step problems than what is typically seen in traditional evaluations. However, as language models continue to advance rapidly, even this wide range of difficulties may eventually face saturation. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 18 | 
	2.2 FEATURES OF EXE | 
To address this, we observe a mechanism inspired by the SKILL-MIX evaluation (Yu et al., 2023) that leverages the typed nature of our function selection process. This approach allows us to create even more complex tasks by chaining functions where the output type of one matches the input type of another, or by combining multiple outputs into a composite input. The number of potential new tasks can be upper bounded by $n^2 \cdot (T_{\max})^k \cdot C$, where $n$ is the total number of types, $T_{\max} = \max_{i,j} T_{i,j}$ is the maximum number of existing tasks between any two types, $k$ is the number of functions to chain, and $C$ is the average number of test cases per task. While this is an upper bound and the actual number of valid composite tasks would be lower due to specific type compatibility constraints, it still represents a significant expansion of our task space. We view this as an opportunity to trade some of the 'realism' of using 100% real-world code for the ability to probe the upper bounds of model capabilities. For constant-compute models, this approach allows us to test their internal mechanistic capabilities in handling increasingly complex, multi-step problems. And for chain-of-thought models, it provides a test of increasingly long-term agentic coherency. |
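As a worked illustration of this bound with hypothetical values (not taken from the paper), suppose $n = 10$ types, $T_{\max} = 5$ existing tasks between any type pair, chains of $k = 3$ functions, and $C = 15$ test cases per task: $n^2 \cdot (T_{\max})^k \cdot C = 10^2 \cdot 5^3 \cdot 15 = 187{,}500$ candidate composite test cases.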
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 19 | 
	2.2 FEATURES OF EXE | 
Error prediction. To test the full spectrum of code execution we further generate test cases designed to trigger exceptions. Many of these require in-depth analysis to see ahead of time, for example predicting an invalid array index through multiple functions. While debugging exceptions is one of the more challenging software engineering tasks, we are yet to see it commonly evaluated in benchmarks. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 20 | 
	3 RESULTS | 
	We report our evaluation results across different SOTA models alongside our findings across different task statistics below. | 
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 21 | 
	3 RESULTS | 
	Model | 
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 22 | 
	3 RESULTS | 
Table 1: EXE Pass@1 results for GPT-4o, GPT-4o-mini, Llama3.1-8B, Llama3.1-405B, Claude3.5-Sonnet, and Mistral-Large-2407 (numeric values not recoverable). LLMs can execute real-world code, achieving results in line with code generation benchmarks. We find EXE shows similar relative performance between models as seen in coding benchmarks such as HumanEval (Chen et al., 2021) and in benchmarks requiring logical inference such as Lu et al. (2023). Furthermore, we find a similar diversity of performance across packages as seen in agentic benchmarks such as Jimenez et al. (2023). We show our findings in Figure 4. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 23 | 
	3 RESULTS | 
	EXE dataset (Pass@1) Errors (Pass@1) | 
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 24 | 
	3 RESULTS | 
Prior works such as Learning To Execute (Zaremba & Sutskever, 2014) and CRUX-Eval (Gu et al., 2024) have placed justifiable limitations on code complexity: removing mathematical operations, limiting line count, disallowing custom classes and only having one single function, to name a few. We hypothesised that these are no longer necessary, and that to understand the true internal capabilities of a constant-compute model (i.e. no chain of thought) we must test on real-world code, only applying limitations where forced (i.e. no arbitrary object inputs, as LLMs cannot generate them). Our results, as seen in Table 1, provide initial evidence towards our hypothesis. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 25 | 
	3 RESULTS | 
Figure 4: Left - We show the relative accuracy of different models across the top 20 packages by task count. Both the relative differences between models and the relative differences between packages are within expectations from other coding benchmarks (Jimenez et al., 2023). Right - We show the magnitude of diversity across packages (mean performance across all models). |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 26 | 
	3 RESULTS | 
ExecEval provides a smooth curve of task difficulties. We set out to ensure a) our evaluation does not induce saturation from a bounded distribution of task difficulties, b) our evaluation does not induce an “AI overhang” by not having a smooth transition between difficulties and, c) the correlated factors affecting difficulty are human interpretable. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 27 | 
	3 RESULTS | 
As shown in Figure 5, several task statistics such as “lines of code”, “processing time” and “number of function calls” all correlate log-linearly with a model's achieved pass@1 score. These correlations provide preliminary evidence towards c) as they align with simplistic human intuition, i.e. more lines of code, more compute cycles, higher difficulty. Furthermore, we view the log-linear relationships as evidence towards b), i.e. EXE provides a smooth transition between difficulties. And finally, we view the relationships as a demonstration of difficulty being affected by factors within our control, i.e. number of function calls - providing empirical evidence towards a). |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 28 | 
	3 RESULTS | 
Beyond evaluation-wide difficulty scaling, EXE also demonstrates diversity and varying difficulty levels within individual task sets. Each function has up to 15 generated test cases, allowing us to analyse variance per task set. To measure execution path diversity, we collect runtime statistics (detailed in Appendix A.6) and find a mean Coefficient of Variation (CV) of 0.61 for “count of conditionals executed”, indicating substantial variation in code paths taken. Furthermore, we find a CV of 0.20 for “lines executed”, showing significant diversity in the number of steps required to answer. Finally, we measure diversity in generated task difficulty through model performance: GPT-4o achieves a mean pass rate of 0.742 (σ = 0.293) per function, providing empirical evidence that test cases present a difficulty scale. |
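A minimal sketch of the coefficient-of-variation statistic referenced above (standard deviation divided by mean of a per-task-set runtime statistic); the example numbers are hypothetical.

```python
# Minimal sketch of the Coefficient of Variation (CV) over a task set's runtime
# statistic, e.g. conditionals executed across one function's generated test cases.
from statistics import mean, pstdev

def coefficient_of_variation(values):
    return pstdev(values) / mean(values)

# Hypothetical example: conditionals executed across five test cases of one function.
print(coefficient_of_variation([4, 7, 2, 9, 5]))
```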
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 41 | 
	4 RELATED WORK | 
Recent trends in benchmark design have emphasised the importance of diverse, multi-step problems and agentic capabilities. Works like Jimenez et al. (2023) have introduced benchmarks that require solving real-world software engineering problems while Zhou et al. (2023) has enabled evaluation of complex instruction following and performing multi-step reasoning. In the mathematical domain, benchmarks like those by Hendrycks et al. (2021) and Lu et al. (2023) have pushed models to solve intricate, multi-step problems. |
| 
	ICLR.cc/2025/Conference | 
	PwxYoMvmvy | 52 | 
	5 Conclusions | 
Nima Dehmamy, Albert-László Barabási, and Rose Yu. Understanding the representation power of graph neural networks in learning graph topology. Advances in Neural Information Processing Systems, 32, 2019. |
| 
	ICLR.cc/2025/Conference | 
	gtVo4xcpFI | 34 | 
	57 60 | 
Divide the difficulty based on the number of lines of code, type, and design time. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 29 | 
	3 RESULTS | 
ExecEval's test case generation scales. While EXE today includes up to 15 test cases per task, our analysis demonstrates EXE's generation pipeline can scale significantly further without plateauing. As shown in Figure 6, generation of novel test cases continues well beyond 300 cases per task while maintaining all quality controls (detailed in Appendix A.2) - implying a potential dataset scale-up lower bound of 20x. Growth rates vary across specific functions - for example, langchain-core's image formatting function, which requests a base64-encoded image string, shows the lowest growth Stylistic coding patterns shape the metrics. As can be seen in Figure 5, the pass@1 rate of function calls hits an elbow and then surprisingly improves as the call count increases. During our investigation we found several of these occurrences, and not only with call count. These were found to be largely driven by specific coding patterns and complex tasks that LLMs excel at. We show in Figure 7 below three example tasks, and more specifically coding patterns, driving this anomaly. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 30 | 
	3 RESULTS | 
rate. This aligns with intuition - generating novel, base64 images poses significantly more difficulty than generating diverse string or numeric inputs. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 31 | 
	3 RESULTS | 
Figure 5: Pass@1 for all tasks across four of our code metrics. The shaded area represents variance, and the opacity is scaled with the count of samples. Processing time is measured in microseconds. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 32 | 
	3 RESULTS | 
Importantly, our token efficiency analysis (right plot) reveals that significant scaling is possible without proportional prompt growth. By randomly selecting and injecting just 60 prior cases into the generation prompt, we can effectively generate over 1,000 novel cases. This sublinear token growth suggests the potential for substantial dataset expansion without incurring prohibitive costs. Detailed examples of tasks and their generated test cases are provided in Appendix A.8. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 33 | 
	3 RESULTS | 
LLMs struggle with certain coding features. As EXE contains a diverse set of tasks, we are able to observe model performance differing greatly based on the coding features used in any task. To illustrate: floating-point math operations such as multiplications (GPT-4o: 43 mean Pass@1) significantly increase task difficulty, however bit manipulation and boolean operations only showed a minor negative impact. Iterative operations such as compound assignment operations, i.e. “i += 1” (56 Pass@1), list slicing (65 Pass@1) and list comprehensions (68 Pass@1) all increased difficulty, however for loops (73 Pass@1) on average did not have a significant impact. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 34 | 
	3 RESULTS | 
Figure 6: Test case generation analysis across eleven diverse Python functions sourced from popular libraries including Azure, PyTorch, Langchain, and NLTK. Functions range from geometric computations (torchvision) to SQL regex (snowflake-python-connector). Left: Cumulative unique validated test cases per generation batch. Right: Same data plotted against token usage, showing generation cost is largely constant per batch (the primary factor is initial task code length). Further methodology and source code for tested functions are provided in Appendix A.8. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 35 | 
	3 RESULTS | 
With the above metrics, and those seen in Figure 7, their mean Pass@k decreases as their count increases. To reduce the risk of our metrics being a proxy for longer problems, we show the effects can still be seen below in Figure 8 after normalisation by lines of code (only lines with executable syntax tokens are counted). |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 36 | 
	3 RESULTS | 
Figure 7: Three examples of high pass@1 rate tasks that contain large amounts of function calls. Left - Charset-normaliser performs 300+ function calls to define ranges of unicode characters upon initialisation; this constant has little effect on task difficulty but is used frequently and hence appears in many tasks. Middle - Langchain's Unparser class traverses an AST and regenerates source code. The calling method in our dataset is “add last line print(str) → str”, which takes in code, parses it and then uses Unparse(...) to unparse it; this is a prime example of a “directly predictable task”, i.e. one not requiring line-by-line code execution to predict a result. Right - Similar to Charset-normaliser, AWS's Sagemaker has a module-level constant with 10s of calls; not creating a large impact on task difficulty but frequent in its use. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 37 | 
	4 RELATED WORK | 
There is a rich history of work on evaluating language models' abilities in reasoning, execution, and multi-step problem-solving across various domains. These efforts span from natural language processing to mathematical reasoning, and from code generation to program execution. Our work, EXecution-Eval (EXE), builds upon this foundation while addressing key challenges in benchmark design and evaluation. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 38 | 
	4 RELATED WORK | 
Code generation benchmarks have been the foundation of evaluating the coding abilities of language models. Works like HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021) established standardised datasets for assessing code synthesis from natural language descriptions. These efforts have expanded to cover multiple programming languages (Cassano et al., 2022; Khan et al., 2023) and more complex domains such as algorithmic problem solving (Huang et al., 2023). While these benchmarks focus primarily on the task of code generation, we believe additional focus on the tasks of code execution and error prediction has been overlooked and may offer additional insight into the internal capabilities of frontier models. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 39 | 
	4 RELATED WORK | 
Figure 8: Pass@1 for all tasks across four of our code metrics normalised by line-of-code count (limited to GPT models for readability). All four of the above metrics previously showed a negative impact as they increased; interestingly, we now observe branching statements having little to no impact and return statements surprisingly driving an increase in Pass@1 score. Our strong negative factors, i.e. function calls and identifiers created, are still seen increasing task difficulty as they take up ever greater percentages of the task. |
| 
	ICLR.cc/2025/Conference | 
	viQ1bLqKY0 | 40 | 
	4 RELATED WORK | 
The concept of “learning to execute” itself has a long history: Zaremba & Sutskever (2014) explored neural networks' ability to learn and execute simple programs, Graves et al. (2014) constructed the first Neural Turing Machines, and Kaiser & Sutskever (2015), Reed & de Freitas (2015) and Dehghani et al. (2018) all built further into this domain. This line of research has evolved, with recent works like Bieber et al. (2020), Nye et al. (2021) and Gu et al. (2024) applying graph and language models to execute synthetic or simplistic Python programs. EXE builds upon these foundations by evaluating execution capabilities on complex, messy, real-world code from diverse GitHub repositories, providing a more challenging, scalable and realistic test bed. |
| 
	ICLR.cc/2025/Conference | 
	PwxYoMvmvy | 53 | 
	5 Conclusions | 
Chenhui Deng, Zichao Yue, and Zhiru Zhang. Polynormer: Polynomial-expressive graph transformer in linear time. arXiv preprint arXiv:2403.01232, 2024. |