---
license: cc-by-4.0
language:
- la
- fr
- it
tags:
- medieval
pretty_name: CoMMA
size_categories:
- 1B<n<10B
---
|
|
# Dataset Card for CoMMA JSON-L |
|
|
|
|
|
CoMMA is a large-scale corpus of digitized medieval manuscripts |
|
|
transcribed using Handwritten Text Recognition (HTR). It |
|
|
contains over 2.5 billion tokens from more than 23,000 manuscripts in |
|
|
Latin and Old French (801–1600 CE). Unlike most existing resources, the corpus |
|
|
provides raw, non-normalized text. |
|
|
|
|
|
## Dataset Details |
|
|
|
|
|
### Dataset Description |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- **Curated by:** Thibault Clérice |
|
|
- **Funded by:** Inria, [COLaF](https://colaf.huma-num.fr/), [ParamHTRs](https://www.bnf.fr/fr/les-projets-de-recherche-bnf-datalab) |
|
|
- **Language(s) (NLP):** Latin, Old French, Italian |
|
|
- **License:** CC-BY 4.0 |
|
|
|
|
|
### Dataset Sources
|
|
|
|
|
|
|
|
|
|
- **Repository:** ARCA, Gallica, Biblissima+ (metadata)
|
|
- **Paper:** [More Information Needed] |
|
|
- **Browser:** [More Information Needed] |
|
|
|
|
|
## Uses |
|
|
|
|
|
### Direct Use |
|
|
|
|
|
- Training and evaluation of NLP models on medieval Latin and French. |
|
|
- Historical linguistics and corpus linguistics research. |
|
|
- Digital humanities applications (script analysis, layout studies, philology). |
|
|
- Pretraining embeddings for downstream semantic tasks. |
|
|
|
|
|
|
|
|
### Out-of-Scope Use |
|
|
|
|
|
- Modern language processing tasks. |
|
|
- Sensitive/identity analysis (texts are historical and not linked to personal data). |
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
The dataset is in JSON-L format: one line corresponds to one digitization of a
manuscript (a single manuscript can be represented by more than one
digitization). The fields are:
|
|
|
|
|
- **biblissima_id**: Unique identifier of the manuscript in Biblissima, which also provides metadata, e.g. https://data.biblissima.fr/entity/Q237292
|
|
- **shelfmark**: Human-readable identifier of the manuscript
|
|
- **iiif_manifest**: URL of the IIIF manifest the data was harvested from
|
|
- **biblissima_language**: Biblissima provided language metadata |
|
|
- **biblissima_simplified_language**: Denoised (simplified) version of **biblissima_language**
|
|
- **language_fasttext**: Categorization into 5 languages (Latin, French, Bilingual, Other, Ambiguous), with two levels of detail for Latin, French and Bilingual (e.g. Massively French, Truly French)
|
|
- **notBefore**: Minimal date of production. Some providers use 800 rather than 801 to indicate the 9th century, so handle dates with care.
|
|
- **notAfter**: When provided, maximum date of production. Mostly *null*. |
|
|
- **lines**: Number of transcribed lines. |
|
|
- **pages**: Number of processed pages.
|
|
- **tokens**: Number of whitespace delimited tokens. |
|
|
- **scopecontent**: Free-text field description of the content of the manuscript, provided by Biblissima and the original curating institution. |
|
|
- **text**: The main body of text, in its plain text representation. |
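As a sketch of how the schema above can be consumed (the file path and filter values are illustrative, not part of the release):

```python
import json


def iter_digitizations(path, min_tokens=0, language=None):
    """Stream records from the JSON-L file: one digitization per line.

    `min_tokens` filters on the whitespace-token count; `language`
    is matched against the `language_fasttext` field.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("tokens", 0) < min_tokens:
                continue
            if language is not None and record.get("language_fasttext") != language:
                continue
            yield record
```

The field names follow the column list above; the exact label values used by `language_fasttext` should be checked against the data itself.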
|
|
|
|
|
## Dataset Creation |
|
|
|
|
|
### Curation Rationale |
|
|
|
|
|
To provide the first large-scale, open, raw-text corpus of medieval manuscripts enabling both computational linguistics and digital humanities research at scale. |
|
|
|
|
|
### Source Data |
|
|
|
|
|
|
|
|
|
|
#### Data Collection and Processing |
|
|
|
|
|
- Harvested via IIIF from Gallica (BnF), ARCA, Bodleian, e-codices, etc. |
|
|
- Downloaded in batches while respecting institutional access constraints.
|
|
- Segmentation: YOLOv11 + SegmOnto vocabulary. |
|
|
- Recognition: Kraken with CATMuS models. |
|
|
- Post-processed into ALTO and TEI. |
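The IIIF harvesting step above can be sketched as a generic walk over a Presentation API 2.x manifest; this is an illustrative helper, not the project's actual harvester:

```python
def canvas_image_urls(manifest: dict) -> list:
    """Collect image URLs from a parsed IIIF Presentation 2.x manifest.

    A manifest nests sequences -> canvases -> images, each image
    pointing at a `resource` whose `@id` is the downloadable URL.
    """
    urls = []
    for sequence in manifest.get("sequences", []):
        for canvas in sequence.get("canvases", []):
            for image in canvas.get("images", []):
                resource = image.get("resource", {})
                if "@id" in resource:
                    urls.append(resource["@id"])
    return urls
```

Fetch the manifest JSON with any HTTP client and pass the parsed dict; providers on the newer Presentation 3.0 API use `items` instead of `sequences`/`canvases` and need a different walk.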
|
|
|
|
|
#### Who are the source data producers? |
|
|
|
|
|
Medieval scribes and copyists (9th–16th c. CE), preserved in institutional digitizations.
|
|
|
|
|
#### Annotation process |
|
|
|
|
|
- Automated segmentation and transcription. |
|
|
- Manual evaluation of the Character Error Rate (CER) on single pages from roughly 700 manuscripts.
|
|
- TEI structuring for zones (marginalia, main text, etc.). |
|
|
|
|
|
#### Personal and Sensitive Information |
|
|
|
|
|
The dataset contains no personal or sensitive modern data. |
|
|
|
|
|
## Bias, Risks, and Limitations |
|
|
|
|
|
- Recognition quality varies by script type (Caroline/Textualis better, Cursiva and Beneventan worse). |
|
|
- Language metadata may be noisy (e.g. mixed Latin/French glosses). |
|
|
- Manuscripts are primarily from libraries, underrepresenting archives (e.g. charters). |
|
|
- Errors in segmentation (skewed lines, faint text) persist. |
|
|
|
|
|
### Recommendations |
|
|
|
|
|
Users should evaluate CER for their subcorpus and be aware of biases in script and manuscript type coverage. |
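CER here is the standard character-level metric: the Levenshtein edit distance between the HTR output and a ground-truth transcription, divided by the length of the ground truth. A minimal self-contained implementation for spot-checking a subcorpus:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: Levenshtein distance / reference length.

    Uses the classic dynamic-programming recurrence, keeping only
    the previous row of the edit-distance matrix.
    """
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n] / max(m, 1)
```

Note that a CER computed against a normalized transcription will differ from one computed against the raw, non-normalized text this corpus provides.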
|
|
|
|
|
## Citation
|
|
|
|
|
|
|
|
|
|
**BibTeX:** |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
**APA:** |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
## Dataset Card Contact |
|
|
|
|
|
Thibault Clérice