Update README.md
README.md
CHANGED
@@ -4,7 +4,25 @@ emoji: 🚀
 colorFrom: pink
 colorTo: blue
 sdk: static
-pinned:
+pinned: true
+license: openrail
 ---
 
-
+The NDEM community provides pretrained models along with their checkpoints for the purpose of:
+
+- Studying the learning dynamics of the models
+- Studying how well these learning dynamics match brain learning dynamics
+
+Models are pretrained on the Jean-Zay public supercomputer:
+This work was granted access to the HPC resources of IDRIS under the allocations
+2023-AD011014524 and 2022-AD011013176R1 made by GENCI (P. Orhan).
+
+Models currently available are:
+
+- Wav2vec2 base model () pretrained (no fine-tuning) on LibriSpeech (English speech), FMA (music), a subset of AudioSet, or all of them together. A model pretrained on the French VoxPopuli dataset is also included.
+- Wav2vec2 tiny model, using only 3 transformer layers. The models' performance is surprisingly high.
+
+Scientific papers using the models provided in this repository:
+Orhan, P., Boubenec, Y., & King, J.-R. (2024). Algebraic structures emerge from the self-supervised learning of natural sounds. https://doi.org/10.1101/2024.03.13.584776
+
+Models are pretrained using Hugging Face's Trainer.
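As a minimal sketch of the "tiny" variant described above, the 3-layer Wav2vec2 architecture can be instantiated with the `transformers` library. The layer count comes from this README; every other hyperparameter here is a library default, not a detail confirmed by the repository, and the model below is randomly initialised (the actual trained weights live in the repository's checkpoints).

```python
# Hypothetical sketch of the "tiny" Wav2vec2 architecture: 3 transformer
# layers instead of the base model's 12. All other settings are
# transformers defaults, not values taken from the NDEM repository.
from transformers import Wav2Vec2Config, Wav2Vec2Model

config = Wav2Vec2Config(num_hidden_layers=3)
model = Wav2Vec2Model(config)  # random init; trained weights come from the checkpoints

print(model.config.num_hidden_layers)
```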