codelion committed (verified)
Commit 6fbca6d · 1 Parent(s): 6a9e620

Add citation and methodology info

Files changed (1): README.md +28 -0
README.md CHANGED
@@ -47,3 +47,31 @@ configs:
   - split: train
     path: data/train-*
 ---
+
+## Sampling Methodology
+
+This dataset was created using **reservoir sampling**, a statistically unbiased random sampling algorithm that gives every record in the source dataset an equal probability of being included. This ensures the 1B-token sample is representative of the full dataset's characteristics.
+
+**Source Dataset**: [HuggingFaceFW/finepdfs](https://huggingface.co/datasets/HuggingFaceFW/finepdfs)
+**Sample Size**: 1B tokens
+**Content**: High-quality textbook-style PDFs
+
+Reservoir sampling enables rapid experimentation and ablation studies without processing the entire source dataset, while preserving the statistical validity of results.
+
+For details on how this dataset was used in research on optimal pre-training data composition, see the [blog post](https://huggingface.co/blog/codelion/optimal-dataset-mixing/).
+
+## Citation
+
+If you use this dataset, please cite:
+
+```bibtex
+@article{sharma2025billion,
+  title={The 1 Billion Token Challenge: Finding the Perfect Pre-training Mix},
+  author={Sharma, Asankhaya},
+  year={2025},
+  url={https://huggingface.co/blog/codelion/optimal-dataset-mixing/}
+}
+```
+
+For more details, see the [blog post](https://huggingface.co/blog/codelion/optimal-dataset-mixing/).
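
The commit describes reservoir sampling but does not include the sampling code itself. As a reference for readers, a minimal sketch of the classic single-pass method (Algorithm R) might look like the following; the function name and parameters are illustrative, not taken from the actual pipeline:

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Uniformly sample k items from an iterable of unknown length (Algorithm R).

    Each element of the stream ends up in the result with equal
    probability k/n, without ever materializing the full stream.
    """
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Replace a random reservoir slot with probability k/(i+1).
            j = rng.randint(0, i)  # uniform over [0, i], inclusive
            if j < k:
                reservoir[j] = item
    return reservoir

# Example: draw 10 documents uniformly from a stream of 1000.
sample = reservoir_sample(range(1000), k=10, seed=42)
```

In the dataset-construction setting, `stream` would be an iterator over source documents (so nothing beyond the reservoir is held in memory), and `k` would be chosen so the sampled documents total roughly 1B tokens.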