dataset_info:
- config_name: hi
features:
- name: Article Title
dtype: string
- name: Entity Name
dtype: string
- name: Wikidata ID
dtype: string
- name: English Wikipedia Title
dtype: string
- name: Image Name
dtype: image
splits:
- name: train
num_bytes: 51118097.546
num_examples: 1414
download_size: 29882467
dataset_size: 51118097.546
- config_name: id
features:
- name: Article Title
dtype: string
- name: Entity Name
dtype: string
- name: Wikidata ID
dtype: string
- name: English Wikipedia Title
dtype: string
- name: Image Name
dtype: image
splits:
- name: train
num_bytes: 52546850.192
num_examples: 1428
download_size: 32136412
dataset_size: 52546850.192
- config_name: ja
features:
- name: Article Title
dtype: string
- name: Entity Name
dtype: string
- name: Wikidata ID
dtype: string
- name: English Wikipedia Title
dtype: string
- name: Image Name
dtype: image
splits:
- name: train
num_bytes: 62643647.72
num_examples: 1720
download_size: 35163853
dataset_size: 62643647.72
- config_name: ta
features:
- name: Article Title
dtype: string
- name: Entity Name
dtype: string
- name: Wikidata ID
dtype: string
- name: English Wikipedia Title
dtype: string
- name: Image Name
dtype: image
splits:
- name: train
num_bytes: 44337774.542
num_examples: 1254
download_size: 30111872
dataset_size: 44337774.542
- config_name: vi
features:
- name: Article Title
dtype: string
- name: Entity Name
dtype: string
- name: Wikidata ID
dtype: string
- name: English Wikipedia Title
dtype: string
- name: Image Name
dtype: image
splits:
- name: train
num_bytes: 46272154.251
num_examples: 1343
download_size: 27669139
dataset_size: 46272154.251
configs:
- config_name: hi
data_files:
- split: train
path: hi/train-*
- config_name: id
data_files:
- split: train
path: id/train-*
- config_name: ja
data_files:
- split: train
path: ja/train-*
- config_name: ta
data_files:
- split: train
path: ta/train-*
- config_name: vi
data_files:
- split: train
path: vi/train-*
# MERLIN Dataset Card

## Dataset Description

### Overview
MERLIN (Multilingual Entity Recognition and Linking) is a test dataset for evaluating multilingual entity linking systems with multimodal inputs. It consists of BBC News article titles in five languages, each paired with an associated image and entity annotations. The dataset contains 7,287 entity mentions linked to 2,480 unique Wikidata entities, covering a wide range of entity types (persons, locations, organizations, events, etc.).
### Supported Tasks

- Multimodal Entity Linking – disambiguating entity mentions using both text and images.
- Cross-lingual Entity Linking – linking mentions in one language to Wikidata entities regardless of language.
- Named Entity Recognition – identifying entity mentions in non-English news titles.
### Languages

- Hindi (`hi`)
- Indonesian (`id`)
- Japanese (`ja`)
- Tamil (`ta`)
- Vietnamese (`vi`)
### Data Instances

Each instance in the dataset contains:

```json
{
  "Article Title": "बिहार: केंद्रीय मंत्री अश्विनी चौबे के बेटे अर्जित 'गिरफ्तार'",
  "Entity Name": "अश्विनी चौबे",
  "Wikidata ID": "Q16728021",
  "English Wikipedia Title": "Ashwini Kumar Choubey",
  "Image Name": "<GCS_URL>"
}
```
### Data Fields

- `Article Title`: news article title in its original language (string)
- `Entity Name`: entity mention from the title, in the same language (string)
- `Wikidata ID`: Wikidata identifier of the linked entity (string)
- `English Wikipedia Title`: English Wikipedia page title of the entity (string)
- `Image Name`: image associated with the article (image feature)
### Data Splits

- The benchmark covers 5,000 article titles (1,000 per language) and is intended for evaluation only. On the Hub, each language configuration is exposed as a single `train` split; see the loading sketch below.
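
A minimal loading sketch using the Hugging Face `datasets` library. The repository ID below is a placeholder for wherever this dataset is hosted; the column names match the features declared in the YAML metadata above.

```python
from datasets import load_dataset

# "<org>/MERLIN" is a placeholder -- substitute this dataset's actual Hub path.
ds = load_dataset("<org>/MERLIN", "hi", split="train")

print(ds.features)                 # "Image Name" is declared as an image feature
example = ds[0]
print(example["Article Title"], "->", example["Wikidata ID"])
print(example["Image Name"].size)  # decoded as a PIL image when accessed
```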
## Dataset Creation

### Source Data

- Derived from the M3LS dataset (Verma et al., 2023), which was curated from BBC News articles spanning over a decade.
- Articles include categories like politics, sports, economy, science, and technology.
- Each article includes a headline and an associated image.
### Annotations

- Tool used: INCEpTION annotation platform.
- Knowledge base: Wikidata.
- Process:
  - Annotators highlighted entity mentions in article titles and linked them to Wikidata entries.
  - Each title was annotated by three annotators, with majority voting used for the final selection (see the sketch after this list).
  - Annotators were recruited via Prolific with prescreening (F1 ≥ 60% required on English pilot tasks).
- Agreement: average inter-annotator Cohen’s kappa ≈ 0.83 (almost perfect agreement).
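
The majority-voting rule can be illustrated as follows. This is only a sketch of the rule described above, not the authors' aggregation code, and the helper name `majority_link` is made up for this example: a mention keeps a Wikidata ID only if at least two of the three annotators agree on it; otherwise it is treated as unlinked.

```python
from collections import Counter
from typing import Optional

def majority_link(annotations: list[Optional[str]]) -> Optional[str]:
    """Pick the QID chosen by at least 2 of the 3 annotators, else None (unlinked)."""
    counts = Counter(a for a in annotations if a is not None)
    if not counts:
        return None
    qid, votes = counts.most_common(1)[0]
    return qid if votes >= 2 else None

# Two annotators agree on the same Wikidata entity: the link is kept.
assert majority_link(["Q16728021", "Q16728021", None]) == "Q16728021"
# No majority: the mention is left unlinked.
assert majority_link(["Q1", "Q2", "Q3"]) is None
```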
## Dataset Structure

### Data Statistics

- Total article titles: 5,000
- Total mentions: 7,287
- Unique entities: 2,480
- Languages covered: Hindi, Japanese, Indonesian, Tamil, Vietnamese
- Avg. words per title: ~11.1
- Unlinked mentions: 1,243 (excluded from benchmark tasks; the counts above can be spot-checked with the sketch below)
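
A minimal sketch, under the same placeholder repository ID as above, that tallies rows and unique Wikidata IDs across the five language configurations; note that it counts dataset rows, not raw annotated mentions.

```python
from datasets import load_dataset

LANGS = ["hi", "id", "ja", "ta", "vi"]
unique_qids, total_rows = set(), 0

for lang in LANGS:
    # Placeholder repo ID -- substitute this dataset's actual Hub path.
    ds = load_dataset("<org>/MERLIN", lang, split="train")
    unique_qids.update(ds["Wikidata ID"])  # column access returns a list of strings
    total_rows += len(ds)

print(f"{total_rows} rows, {len(unique_qids)} unique Wikidata entities")
```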
## Curation Rationale

MERLIN was created to provide the first multilingual, multimodal entity linking benchmark, addressing a gap left by existing datasets, which are either monolingual or text-only. It enables the study of how images can help resolve ambiguous entity mentions, especially in low-resource languages.
## Considerations for Using the Data

### Social Impact

- Supports fairer multilingual NLP research by including low-resource languages (Tamil, Vietnamese).
- Encourages the development of models that reason over both text and images.
### Discussion of Biases

- All data is from BBC News, limiting genre diversity.
- Annotators’ background knowledge and language proficiency may introduce subtle biases.
- Wikidata coverage bias: entities absent from Wikidata were excluded (≈17% of mentions unlinked).
### Other Known Limitations

- Domain restriction (news only).
- Focused on entity mentions in headlines rather than longer text.
- Baseline methods link to Wikipedia titles rather than directly to Wikidata QIDs (see the title-to-QID sketch after this list).
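
When predictions come as English Wikipedia titles, evaluating them against the `Wikidata ID` column requires a title-to-QID mapping. One possible way to build it, sketched below, is the public MediaWiki API, whose `pageprops` query returns the linked Wikidata item; the helper name is illustrative and not part of this dataset's tooling.

```python
from typing import Optional

import requests

def wikipedia_title_to_qid(title: str) -> Optional[str]:
    """Resolve an English Wikipedia title to its Wikidata QID via the MediaWiki API."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "pageprops",
            "ppprop": "wikibase_item",
            "redirects": 1,
            "titles": title,
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    page = next(iter(pages.values()))  # single title -> single page entry
    return page.get("pageprops", {}).get("wikibase_item")

# Gold pair taken from the example instance above.
print(wikipedia_title_to_qid("Ashwini Kumar Choubey"))  # expected: Q16728021
```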
## Additional Information

### Dataset Curators

- Carnegie Mellon University (CMU)
- Defence Science and Technology Agency, Singapore
### Licensing Information

- The dataset is released for research purposes only, under the license specified in the GitHub repository.
### Citation Information

If you use MERLIN, cite:

Ramamoorthy, S., Shah, V., Khanuja, S., Sheikh, Z., Jie, S., Chia, A., Chua, S., & Neubig, G. (2025). MERLIN: A Testbed for Multilingual Multimodal Entity Recognition and Linking. *Transactions of the Association for Computational Linguistics*.
### Contributions

Community contributions can be made via the MERLIN GitHub repo.
### Related Work

This dataset can be benchmarked with: