Update model card: Add library_name, paper/code links, transformers usage, and deployment info

#1 opened by nielsr (HF Staff)

This PR significantly enhances the model card for the Ring-1T model by:

  • Adding library_name: transformers to the metadata: This enables the automated "how to use" widget on the Hugging Face Hub, which provides users with ready-made code snippets for easy integration with the transformers library.
  • Aligning the main title of the model card with the official paper title: "Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model".
  • Including a direct link to the Hugging Face paper page: Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model in the introductory section.
  • Adding a prominent link to the GitHub repository: https://github.com/inclusionAI/Ring-V2 for quick access to the code.
  • Adding a transformers code snippet for quick model usage to the Quickstart section, taken from the original GitHub README.
  • Updating the SGLang and vLLM deployment sections with more comprehensive environment preparation and usage instructions from the GitHub repository.
  • Adding the BibTeX citation for the paper.
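
For reference, a library_name entry like the one this PR adds lives in the model card's YAML front matter. A minimal sketch (field name per Hub model-card conventions; the exact front matter in this PR may contain additional fields):

```yaml
---
# Enables the Hub's automated "how to use" widget for this model
library_name: transformers
---
```

The Hub reads this field to decide which inference/usage snippet to render on the model page.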

These updates collectively improve the discoverability, usability, and completeness of the model card on the Hugging Face Hub.

inclusionAI org

Thanks Niels, this is a pretty helpful update. We appreciate your continued help.

RichardBian changed pull request status to merged
