kargaranamir committed (verified)
Commit 38356ef · Parent(s): 72cc3ca

add citation

Files changed (1):
  1. app.py +26 -11
app.py CHANGED
@@ -1,8 +1,8 @@
 # coding=utf-8
-# Copyright 2024 The Mexa Authors.
+# Copyright 2024 The MEXA Authors.
 # Lint as: python3
 # This space is built based on uonlp/open_multilingual_llm_leaderboard.
-# Mexa Space
+# MEXA Space
 
 import os
 import json
@@ -122,16 +122,16 @@ table th:last-child {
 """
 
 
-TITLE = '<h1 align="center" id="space-title">Mexa: Multilingual Evaluation of Open English-Centric LLMs via Cross-Lingual Alignment</h1>'
+TITLE = '<h1 align="center" id="space-title">MEXA: Multilingual Evaluation of Open English-Centric LLMs via Cross-Lingual Alignment</h1>'
 
 INTRO_TEXT = f"""
 ## About
 
-We introduce Mexa, a method for assessing the multilingual capabilities of English-centric large language models (LLMs). Mexa builds on the observation that English-centric LLMs semantically use English as a kind of pivot language in their intermediate layers. Mexa computes the alignment between non-English languages and English using parallel sentences, estimating the transfer of language understanding capabilities from English to other languages through this alignment. This metric can be useful in estimating task performance, provided we know the English performance in the task and the alignment score between languages derived from a parallel dataset.
+We introduce MEXA, a method for assessing the multilingual capabilities of English-centric large language models (LLMs). MEXA builds on the observation that English-centric LLMs semantically use English as a kind of pivot language in their intermediate layers. Mexa computes the alignment between non-English languages and English using parallel sentences, estimating the transfer of language understanding capabilities from English to other languages through this alignment. This metric can be useful in estimating task performance, provided we know the English performance in the task and the alignment score between languages derived from a parallel dataset.
 
 ## Code
 
-https://github.com/cisnlp/Mexa
+https://github.com/cisnlp/MEXA
 
 ## Details
 We use parallel datasets from FLORES and the Bible. In the ARC style, we use mean pooling over layers, and the English score achieved by each LLM in the ARC benchmark is used to adjust the multilingual scores. In the Belebele style, we use max pooling over layers, and the English score achieved by each LLM in Belebele is used to adjust the multilingual scores.
@@ -141,12 +141,27 @@ We use parallel datasets from FLORES and the Bible. In the ARC style, we use mea
 CITATION = """
 ## Citation
 ```
-@article{kargaran2024mexa,
-    title = {Mexa: Multilingual Evaluation of {E}nglish-Centric {LLMs} via Cross-Lingual Alignment},
-    author = {Kargaran, Amir Hossein and Modarressi, Ali and Nikeghbal, Nafiseh and Diesner, Jana and Yvon, François and Schütze, Hinrich},
-    journal = {arXiv preprint},
-    year = {2024},
-    url = {https://github.com/cisnlp/Mexa/}
+@inproceedings{kargaran-etal-2025-mexa,
+    title = "{MEXA}: Multilingual Evaluation of {E}nglish-Centric {LLM}s via Cross-Lingual Alignment",
+    author = "Kargaran, Amir Hossein and
+      Modarressi, Ali and
+      Nikeghbal, Nafiseh and
+      Diesner, Jana and
+      Yvon, Fran{\c{c}}ois and
+      Schuetze, Hinrich",
+    editor = "Che, Wanxiang and
+      Nabende, Joyce and
+      Shutova, Ekaterina and
+      Pilehvar, Mohammad Taher",
+    booktitle = "Findings of the Association for Computational Linguistics: ACL 2025",
+    month = jul,
+    year = "2025",
+    address = "Vienna, Austria",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2025.findings-acl.1385/",
+    doi = "10.18653/v1/2025.findings-acl.1385",
+    pages = "27001--27023",
+    ISBN = "979-8-89176-256-5",
 }
 ```
 """