Alternatives to Content Snapshots
Currently the dataset includes a content column with the actual GitHub source file contents, a snapshot taken at some moment in time. This makes the dataset much heavier than it needs to be, and it becomes stale as soon as a file changes in its source repo. (Is there even a guarantee that the files all work together without bugs, or even build, at the moment they were collected?)
I would like to suggest storing a URI instead, pointing to the file at its HEAD commit hash on GitHub. Dataset consumers would then make a request to GitHub to download the contents at that point in time.
As an example, this is the URI to an earlier version of this dataset's README.
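A minimal sketch of what the pinned-URI scheme could look like. The column names (`repo_id`, `commit_sha`, `path`) are hypothetical, not the dataset's actual schema, and `raw.githubusercontent.com` is one possible host for commit-pinned file contents:

```python
def raw_url(repo_id: str, commit_sha: str, path: str) -> str:
    """Build a commit-pinned raw-content URL for a file on GitHub.

    Because the URL embeds a full commit SHA rather than a branch name,
    it keeps pointing at the same bytes even after the repo moves on.
    """
    return f"https://raw.githubusercontent.com/{repo_id}/{commit_sha}/{path}"

# Example with GitHub's public demo repo (illustrative values):
url = raw_url(
    "octocat/Hello-World",
    "7fd1a60b01f91b314f59955a4e4d4e80d8edf11d",
    "README",
)
print(url)
```

The dataset would then store only the three short string columns instead of the full file body.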
The advantages of changing this would be:
- The dataset can be downloaded much faster and takes up less storage on everyone's systems (yours, users', Hugging Face's).
- Dataset consumers can make an informed decision about whether to fetch each file's content (e.g. by looking up the repo's star count via the useful repo_id column, its license, or its recent activity).
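To illustrate the second point: GitHub's REST API (`GET https://api.github.com/repos/{repo_id}`) returns fields such as `stargazers_count` and `license`, so a consumer could filter rows before downloading any content. This is a sketch of that filtering step only; the thresholds are arbitrary examples, and no network call is made here:

```python
def should_fetch(repo_meta: dict,
                 min_stars: int = 50,
                 allowed_licenses: frozenset = frozenset({"mit", "apache-2.0"})) -> bool:
    """Decide whether a repo's files are worth fetching, given the
    metadata dict returned by GitHub's "get a repository" endpoint.

    Skips repos below a star threshold or without an acceptable
    SPDX license identifier.
    """
    license_info = repo_meta.get("license") or {}
    spdx = license_info.get("spdx_id", "").lower()
    return (repo_meta.get("stargazers_count", 0) >= min_stars
            and spdx in allowed_licenses)

# Example metadata shaped like the API response:
popular_mit = {"stargazers_count": 120, "license": {"spdx_id": "MIT"}}
print(should_fetch(popular_mit))
```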
Thank you for considering this suggestion, and for putting the time into compiling this up-to-date code dataset.
Thanks for the suggestion! You're right that storing pointers instead of content would reduce dataset size, but there are critical trade-offs that make this approach problematic for actual model training:
Why we store content directly:
- Reliability: Training pipelines can't handle random 404s when repos get deleted/moved
- Consistency: Every user gets the exact same data, ensuring reproducible results
- Performance: No API rate limits or external dependencies during training runs
- Completeness: The dataset works offline and won't degrade over time
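To make the reliability point concrete, here is a sketch of the failure handling a pointer-based loader would need on every single row. The function and return values are hypothetical; it only classifies HTTP status codes, with no real network calls:

```python
def classify_fetch(status: int) -> str:
    """Map an HTTP status from a per-row content fetch to an action
    a training pipeline would have to take.
    """
    if status == 200:
        return "ok"
    if status == 404:
        # Repo deleted or renamed: the row silently disappears,
        # so two users can end up training on different datasets.
        return "drop_row"
    if status in (403, 429):
        # Rate-limited: the input pipeline stalls until the quota resets.
        return "backoff_retry"
    return "error"

print(classify_fetch(404))
```

With content stored inline, none of these branches exist and every consumer sees identical rows.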
The reality of code datasets:
- All major code datasets (BigCode, The Stack, etc.) use static snapshots for these reasons
- My plan is annual updates to balance freshness with reliability
- If users want the absolute latest code, they can use GitHub directly
Static snapshots have proven themselves precisely because they hold up in real training workflows.
Thanks again for engaging - I'm closing this discussion since the core approach is well-established in the field.