Update model card: Add library_name, paper/code links, transformers usage, and deployment info
#1 opened by nielsr (HF Staff)
This PR significantly enhances the model card for the Ring-1T model by:
- Adding `library_name: transformers` to the metadata: this enables the automated "how to use" widget on the Hugging Face Hub, providing users with automated code snippets for easy integration with the `transformers` library.
- Aligning the main title of the model card with the official paper title: "Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model".
- Including a direct link to the Hugging Face paper page: Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model in the introductory section.
- Adding a prominent link to the GitHub repository: https://github.com/inclusionAI/Ring-V2 for quick access to the code.
- Integrating a `transformers` code snippet for quick model usage, as found in the original GitHub README, under the Quickstart section.
- Updating the SGLang and vLLM deployment sections with more comprehensive environment preparation and usage instructions from the GitHub repository.
- Adding the BibTeX citation for the paper.
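The metadata change from the first bullet is a one-line addition to the model card's YAML front matter. A minimal sketch, assuming typical model card fields (the `pipeline_tag` value shown here is an illustrative placeholder, not taken from this PR):

```yaml
---
# Model card YAML front matter; library_name tells the Hub which
# library powers the automated "how to use" widget.
library_name: transformers
pipeline_tag: text-generation  # illustrative placeholder; actual value may differ
---
```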
These updates collectively improve the discoverability, usability, and completeness of the model card on the Hugging Face Hub.
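The quickstart snippet mentioned above follows the standard `transformers` chat-template pattern. A minimal sketch, assuming the Hub repo id `inclusionAI/Ring-1T` and a generic prompt (see the model page for the exact snippet; `build_messages` is a hypothetical helper added here for clarity):

```python
def build_messages(prompt: str) -> list[dict]:
    """Chat-style message list in the format expected by apply_chat_template."""
    return [{"role": "user", "content": prompt}]


if __name__ == "__main__":
    # Heavy model load kept under the main guard; Ring-1T is trillion-scale,
    # so in practice an inference server (SGLang/vLLM) is the realistic path.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "inclusionAI/Ring-1T"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype="auto", device_map="auto"
    )
    text = tokenizer.apply_chat_template(
        build_messages("Give me a short introduction to large language models."),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=512)
    # Strip the prompt tokens before decoding the completion.
    print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:],
                           skip_special_tokens=True))
```

This is the pattern the Hub widget generates once `library_name: transformers` is set, which is why the two changes go together in this PR.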
Thanks Niels, it is a pretty helpful update. Appreciate your continued help.
RichardBian changed pull request status to merged