---
license: cc-by-4.0
language:
- la
- fr
- it
tags:
- medieval
pretty_name: CoMMA
size_categories:
- 1B<n<10B
---
# Dataset Card for CoMMA JSON-L
CoMMA is a large-scale corpus of digitized medieval manuscripts
transcribed using Handwritten Text Recognition (HTR). It
contains over 2.5 billion tokens from more than 23,000 manuscripts in
Latin and Old French (801–1600 CE). Unlike most existing resources, the corpus
provides raw, non-normalized text.
## Dataset Details
### Dataset Description
- **Curated by:** Thibault Clérice
- **Funded by:** Inria, [COLaF](https://colaf.huma-num.fr/), [ParamHTRs](https://www.bnf.fr/fr/les-projets-de-recherche-bnf-datalab)
- **Language(s) (NLP):** Latin, Old French, Italian
- **License:** CC-BY 4.0
### Dataset Sources
- **Repository:** ARCA, Gallica, Biblissima+ (metadata)
- **Paper:** [More Information Needed]
- **Browser:** [More Information Needed]
## Uses
### Direct Use
- Training and evaluation of NLP models on medieval Latin and French.
- Historical linguistics and corpus linguistics research.
- Digital humanities applications (script analysis, layout studies, philology).
- Pretraining embeddings for downstream semantic tasks.
### Out-of-Scope Use
- Modern language processing tasks.
- Sensitive/identity analysis (texts are historical and not linked to personal data).
## Dataset Structure
The dataset is distributed in JSON-L format: one line corresponds to one digitization of a manuscript (a single manuscript may be represented by more than one digitization). The fields are:
- **biblissima_id**: Unique identifier of the manuscript, with metadata, e.g. https://data.biblissima.fr/entity/Q237292
- **shelfmark**: Human readable identifier
- **iiif_manifest**: Source of our data
- **biblissima_language**: Biblissima provided language metadata
- **biblissima_simplified_language**: Denoising field for **biblissima_language**
- **language_fasttext**: Categorization into 5 languages (Latin, French, Bilingual, Other, Ambiguous), with two levels of detail for Latin, French, and Bilingual (e.g. Massively French, Truly French)
- **notBefore**: Minimal date of production. Beware: some providers use 800 (rather than 801) to denote the 9th century.
- **notAfter**: Maximum date of production, when provided. Mostly *null*.
- **lines**: Number of transcribed lines.
- **pages**: Number of treated pages.
- **tokens**: Number of whitespace-delimited tokens.
- **scopecontent**: Free-text field description of the content of the manuscript, provided by Biblissima and the original curating institution.
- **text**: The main body of text, in its plain text representation.
## Dataset Creation
### Curation Rationale
To provide the first large-scale, open, raw-text corpus of medieval manuscripts enabling both computational linguistics and digital humanities research at scale.
### Source Data
#### Data Collection and Processing
- Harvested via IIIF from Gallica (BnF), ARCA, Bodleian, e-codices, etc.
- Downloaded in batch respecting institutional constraints.
- Segmentation: YOLOv11 + SegmOnto vocabulary.
- Recognition: Kraken with CATMuS models.
- Post-processed into ALTO and TEI.
#### Who are the source data producers?
Medieval scribes and copyists (8th–16th c. CE), preserved in institutional digitizations.
#### Annotation process
- Automated segmentation and transcription.
- Manual evaluation of CER (Character Error Rate) on single pages from roughly 700 manuscripts.
- TEI structuring for zones (marginalia, main text, etc.).
#### Personal and Sensitive Information
The dataset contains no personal or sensitive modern data.
## Bias, Risks, and Limitations
- Recognition quality varies by script type (Caroline/Textualis better, Cursiva and Beneventan worse).
- Language metadata may be noisy (e.g. mixed Latin/French glosses).
- Manuscripts are primarily from libraries, underrepresenting archives (e.g. charters).
- Errors in segmentation (skewed lines, faint text) persist.
### Recommendations
Users should evaluate CER for their subcorpus and be aware of biases in script and manuscript type coverage.
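A quick CER spot-check against a manually transcribed page needs only a Levenshtein distance. This is a generic, pure-Python sketch, not the evaluation tooling used for the corpus:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance divided by reference length."""
    m, n = len(reference), len(hypothesis)
    # Dynamic-programming edit distance with a single rolling row.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n] / m if m else 0.0
```

Comparing the **text** field of a record against a ground-truth transcription of the same lines gives a CER estimate for that script type and period before committing to a subcorpus.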
## Citation
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Dataset Card Contact
Thibault Clérice