mandipgoswami committed on
Commit
940e7a4
·
verified ·
1 Parent(s): ddfedb0

Upload 7 files

Files changed (7)
  1. .gitignore +5 -0
  2. CHANGELOG.md +4 -0
  3. CITATION.cff +10 -0
  4. LICENSE +2 -0
  5. README.md +48 -3
  6. SUBMITTING.md +27 -0
  7. dataset_card.md +81 -0
.gitignore ADDED
@@ -0,0 +1,5 @@
+ .venv/
+ __pycache__/
+ *.pyc
+ .DS_Store
+ Thumbs.db
CHANGELOG.md ADDED
@@ -0,0 +1,4 @@
+ # Changelog
+
+ ## v1.0.0
+ - Initial public scaffold: loader, scripts, benchmarks, docs (no data).
CITATION.cff ADDED
@@ -0,0 +1,10 @@
+ cff-version: 1.2.0
+ message: "If you use this dataset or code, please cite."
+ title: "RIR-Mega"
+ authors:
+   - family-names: Goswami
+     given-names: Mandip
+ date-released: "2025-10-19"
+ doi: "10.5281/zenodo.17387402"
+ version: "1.0.0"
+ repository-code: "https://github.com/mandip42/rirmega"
LICENSE ADDED
@@ -0,0 +1,2 @@
+ CC BY 4.0 for metadata and docs; audio license is author-specified (e.g., CC BY-NC 4.0).
+
README.md CHANGED
@@ -1,3 +1,48 @@
- ---
- license: cc-by-4.0
- ---
+ # RIR-Mega
+
+ RIR-Mega is a large-scale collection of room impulse responses (RIRs) with rich metadata for dereverberation, robust ASR, source localization, and room acoustics research.
+
+ **This scaffold excludes data** (`data/audio/*`, `data/metadata/*.csv`/`.json`). Add those files locally before uploading to Hugging Face.
+
+ - **Zenodo DOI (code/concept)**: https://doi.org/10.5281/zenodo.17387402
+ - **GitHub (code)**: https://github.com/mandip42/rirmega
+
+ ## Layout
+ ```
+ rirmega/
+ ├─ README.md
+ ├─ LICENSE
+ ├─ CHANGELOG.md
+ ├─ CITATION.cff
+ ├─ dataset_card.md
+ ├─ .gitattributes
+ ├─ .gitignore
+ ├─ rirmega/
+ │  ├─ __init__.py
+ │  ├─ dataset.py
+ │  └─ schema.py
+ ├─ scripts/
+ │  ├─ make_checksums.py
+ │  ├─ validate_metadata.py
+ │  ├─ to_sofa.py
+ │  └─ upload_to_hf.py
+ ├─ benchmarks/
+ │  ├─ rt60_regression/
+ │  │  ├─ train_rt60.py
+ │  │  └─ README.md
+ │  └─ dereverb_sisdr/
+ │     ├─ baseline_dereverb.py
+ │     └─ README.md
+ └─ data/
+    ├─ audio/      # (EXCLUDED in this zip; add your .wav/.flac)
+    └─ metadata/   # (EXCLUDED in this zip; add your metadata.csv/json)
+ ```
+
+ ## Quick start
+ 1. Add your `data/audio/*` and `data/metadata/metadata.csv` (and any folds/JSONs).
+ 2. (Optional) Run checksum + validation:
+ ```bash
+ python scripts/make_checksums.py
+ python scripts/validate_metadata.py data/metadata/metadata.csv
+ ```
+ 3. Upload to Hugging Face (via Git or programmatically); see `scripts/upload_to_hf.py` or the sketch below.
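+
+ A minimal programmatic sketch, assuming `huggingface_hub` is installed and you are logged in via `huggingface-cli login`; the exact arguments used by `scripts/upload_to_hf.py` may differ, and the repo id below is only an example:
+ ```python
+ from huggingface_hub import HfApi
+
+ api = HfApi()
+ # Push the scaffold (with data/ filled in) to a dataset repo on the Hub.
+ api.upload_folder(
+     folder_path=".",                    # local checkout with data added
+     repo_id="mandipgoswami/rirmega",    # example repo id
+     repo_type="dataset",
+     commit_message="Upload RIR-Mega data and scaffold",
+ )
+ ```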
SUBMITTING.md ADDED
@@ -0,0 +1,27 @@
+ # Submitting to the RIR-Mega Leaderboard
+
+ Thanks for evaluating on RIR-Mega! To submit a score, open a Pull Request that:
+ 1) Appends a row to the **Leaderboard** table in `dataset_card.md`.
+ 2) Includes the info below in the PR description (template provided).
+
+ ## Required details
+ - **Method name + link** to code (GitHub/HF Space/Gist)
+ - **Exact command** used, including `--target` if set
+ - **Dataset tag** used (e.g., `v1.0.0`) and your commit hash if relevant
+ - **Seed** (default baseline uses `random_state=0`)
+ - **Train/Valid sizes** used (number of samples consumed)
+ - **MAE (s)** and **RMSE (s)** (a reference computation sketch follows this list)
+
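+ A minimal sketch of the MAE/RMSE computation in seconds, so reported numbers are comparable; the arrays below are placeholders, not outputs of the baseline:
+ ```python
+ import numpy as np
+
+ # Reference and predicted RT60-like values in seconds (placeholder values).
+ y_true = np.array([0.42, 0.55, 0.61])
+ y_pred = np.array([0.45, 0.52, 0.66])
+
+ mae = np.mean(np.abs(y_pred - y_true))            # mean absolute error (s)
+ rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))   # root mean squared error (s)
+ print(f"MAE = {mae:.3f} s, RMSE = {rmse:.3f} s")
+ ```
+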
+ ## Reproducing the baseline
+ ```bash
+ pip install soundfile numpy pandas scikit-learn
+ python benchmarks/rt60_regression/train_rt60.py
+ # or specify a target key that exists in metrics:
+ python benchmarks/rt60_regression/train_rt60.py --target rt60
+ ```
+
+ ## Targets
+ Common keys found in `metrics` (case-sensitive):
+ `rt60`, `drr_db`, `c50_db`, `c80_db`, `band_rt60s.125`, `band_rt60s.250`, `band_rt60s.500`, `band_rt60s.1000`, `band_rt60s.2000`, `band_rt60s.4000`
+
+ If you use a different target, please document it clearly.
dataset_card.md ADDED
@@ -0,0 +1,81 @@
+ # RIR-Mega
+
+ Large-scale room impulse responses with rich metadata for dereverberation, robust ASR, localization, and room acoustics research.
+
+ - **DOI**: https://doi.org/10.5281/zenodo.17387402
+ - **GitHub**: https://github.com/mandip42/rirmega
+
+ ## ✨ What's inside
+ - `data/`: RIR audio and `metadata/metadata.csv` (compact schema)
+ - `rirmega/dataset.py`: Hugging Face Datasets loader
+ - `benchmarks/rt60_regression/`: a lightweight RT60 regression baseline
+ - `scripts/`: utilities (validation, checksums, mini subset)
+ - *(optional)* `data-mini/`: tiny subset for quick demos and Spaces
+
+ ## 📦 Schema (compact)
+ Required CSV columns:
+ `id, family, split, seed_room, fs, wav, room_size, absorption, absorption_bands, max_order, source, microphone, array, metrics, rng_seed`
+
+ - `wav`: path to audio (relative to `data/` by default)
+ - `fs`: sample rate (Hz)
+ - `metrics`: JSON/dict-like string (see the parsing sketch after this list); may include keys like:
+   `rt60`, `drr_db`, `c50_db`, `c80_db`, `band_rt60s.{125,250,500,1000,2000,4000}`
+
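+ A minimal sketch of reading the metadata table directly and decoding `metrics`, assuming the field is serialized as a JSON string (if it is a Python-dict repr instead, use `ast.literal_eval`):
+ ```python
+ import json
+ import pandas as pd
+
+ df = pd.read_csv("data/metadata/metadata.csv")
+ # Decode the JSON-like `metrics` string into a dict per row.
+ df["metrics"] = df["metrics"].apply(json.loads)
+
+ row = df.iloc[0]
+ print(row["id"], row["fs"], row["wav"])
+ print("rt60:", row["metrics"].get("rt60"))
+ ```
+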
+ ## 🚀 Getting started
+ ```python
+ from datasets import load_dataset
+ ds = load_dataset("mandipgoswami/rirmega", name="default", trust_remote_code=True)
+ print(ds)
+ ex = ds["train"][0]
+ audio = ex["audio"]  # dict with 'path' and array (decoded on access)
+ print(ex["sample_rate"], ex["file_path"])
+ ```
+
+ ## 🧪 Baseline: RT60 regression
+ Lightweight features + RandomForest to predict RT60-like targets from RIR signals; a conceptual sketch follows the commands below.
+
+ ```bash
+ pip install soundfile numpy pandas scikit-learn
+ python benchmarks/rt60_regression/train_rt60.py
+ # or choose a specific target key present in `metrics`
+ python benchmarks/rt60_regression/train_rt60.py --target rt60
+ ```
+ **Default target search order:**
+ `rt60, drr_db, c50_db, c80_db, band_rt60s.125, band_rt60s.250, band_rt60s.500, band_rt60s.1000, band_rt60s.2000, band_rt60s.4000`
+
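+ A minimal sketch of what such a baseline can look like, assuming JSON-encoded `metrics` with an `rt60` key and `wav` paths relative to `data/`; the feature choices are illustrative and are not taken from `train_rt60.py`:
+ ```python
+ import json
+ import numpy as np
+ import pandas as pd
+ import soundfile as sf
+ from sklearn.ensemble import RandomForestRegressor
+ from sklearn.model_selection import train_test_split
+
+ df = pd.read_csv("data/metadata/metadata.csv")
+ df["metrics"] = df["metrics"].apply(json.loads)        # assumes JSON strings
+ df = df[df["metrics"].apply(lambda m: "rt60" in m)]
+
+ def light_features(path):
+     x, fs = sf.read(path)
+     x = np.atleast_2d(x.T)[0]                          # first channel
+     edc = np.cumsum((x ** 2)[::-1])[::-1]              # Schroeder energy decay curve
+     edc_db = 10 * np.log10(edc / (edc[0] + 1e-12) + 1e-12)
+     t20 = np.argmax(edc_db < -20) / fs                 # time to reach -20 dB
+     t30 = np.argmax(edc_db < -30) / fs                 # time to reach -30 dB
+     return [t20, t30, len(x) / fs]
+
+ X = np.array([light_features(f"data/{p}") for p in df["wav"]])
+ y = df["metrics"].apply(lambda m: m["rt60"]).to_numpy()
+
+ X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.1, random_state=0)
+ model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
+ pred = model.predict(X_va)
+ print("MAE (s):", np.mean(np.abs(pred - y_va)))
+ print("RMSE (s):", np.sqrt(np.mean((pred - y_va) ** 2)))
+ ```
+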
+ ### Reference numbers (example)
+ - Train/Valid used: 36,000 / 4,000 (auto 10% valid)
+ - Metric: MAE = **0.013 s**, RMSE = **0.022 s** (auto target)
+
+ ## 🏅 Leaderboard (RT60 regression)
+ | Date | Team / Author | Method | Target | Train/Valid | MAE (s) | RMSE (s) | Seed | Code |
+ |---|---|---|---|---|---:|---:|---:|---|
+ | 2025-10-19 | Baseline (RIR-Mega) | RF on light feats | auto | 36k / 4k | 0.013 | 0.022 | 0 | `benchmarks/rt60_regression` |
+
+ > 📫 **Submit a result:** Open a PR adding a row (see **Submitting**).
+
+ ## 📤 Submitting
+ See **SUBMITTING.md** for rules and a PR template. Minimum info:
+ - Command (incl. `--target` if used), seed, dataset tag (e.g., `v1.0.0`)
+ - Train/Valid sizes used
+ - MAE (s) and RMSE (s)
+ - Link to code (repo, gist, or HF Space)
+
+ ## 🔖 Citation
+ Please cite the dataset:
+ ```
+ @dataset{RIRMega2025,
+   title={RIR-Mega},
+   author={Goswami, Mandip},
+   year={2025},
+   doi={10.5281/zenodo.17387402},
+   url={https://github.com/mandip42/rirmega}
+ }
+ ```
+
+ ## 📜 Licenses
+ - Metadata & docs: CC BY 4.0
+ - Audio: specify here (e.g., CC BY-NC 4.0). If mixed, list per-subset.
+
+ ## 🏷️ Tags
+ audio, rir, acoustics, dereverberation, robust-asr, simulation, room-acoustics