Adopt a more concise style of ORG card like other labs
README.md CHANGED

@@ -3,44 +3,9 @@ license: apache-2.0
 emoji: π
 pinned: true
 ---
-
-**Hi there** 👋 Welcome to the official homepage for inclusionAI, home for Ant Group's Artificial General Intelligence (AGI) initiative.
-
-Here you can find Large Language Models (LLMs), Reinforcement Learning (RL) and other systems related to model training and inference, and other AGI-related frameworks and applications.
-
-## Our Models
-
-- [**Ling**](https://huggingface.co/collections/inclusionAI/ling-v2-68bf1dd2fc34c306c1fa6f86): The general-use line, with SKUs like mini (lightning-fast) and flash (solid performance), scaling up to our 1T model (under development).
-- [**Ring**](https://huggingface.co/collections/inclusionAI/ring-v2-68db3941a6c4e984dd2015fa): The deep-reasoning and cognitive variant, with SKUs from mini (cost-efficient) to flash (well-rounded answers), also featuring a 1T flagship (under development).
-- [**Ming**](https://huggingface.co/collections/inclusionAI/ming-680afbb62c4b584e1d7848e8): The any-to-any line: a unified multimodal model that processes images, text, audio, and video, with strong proficiency in both speech and image generation.
-- [**LLaDA**](https://huggingface.co/collections/inclusionAI/llada-68c141bca386b06b599cfe45): A diffusion language model developed by the AGI Center, Ant Research Institute.
-- [**GroveMoE**](https://huggingface.co/collections/inclusionAI/grovemoe-68a2b58acbb55827244ef664): An open-source family of LLMs developed by the AGI Center, Ant Research Institute.
-- [**UI-Venus**](https://huggingface.co/collections/inclusionAI/ui-venus-689f2fb01a4234cbce91c56a): A native UI agent based on the Qwen2.5-VL multimodal large language model, designed to perform precise GUI element grounding and effective navigation using only screenshots as input.
-- ...
-
-### Get Involved
-
-Our work is guided by the principles of fairness, transparency, and collaboration, and we are dedicated to creating models that reflect the diversity of the world we live in. Whether you're a researcher, developer, or simply someone passionate about AI, we invite you to join us in our mission to create AI that benefits everyone.
-
-- **Explore Our Models**: Check out our latest models and datasets on the inclusionAI Hub.
-- **Contribute**: Interested in contributing? Visit our [GitHub](https://github.com/inclusionAI) repository to get started.
-- **Join the Conversation**: Connect with the Ant Ling team on [Twitter](https://x.com/AntLingAGI), the Ant Open Source team on [Twitter](https://x.com/ant_oss), or the community on [Discord](https://discord.gg/2X4zBSz9c6) to stay updated on our latest projects and initiatives.
-
-Most inclusionAI models are also available through our partners' hosting services. Feel free to try them at [SiliconFlow](https://www.siliconflow.com/) or [ZenMux.ai](https://zenmux.ai/).
-
-## What's New
-
-- [2025/9/30] [inclusionAI/Ring-1T-preview](https://huggingface.co/inclusionAI/Ring-1T-preview)
-- [2025/9/28] [inclusionAI/Ring-flash-linear-2.0](https://huggingface.co/inclusionAI/Ring-flash-linear-2.0)
-- [2025/9/24] [inclusionAI/Ling-flash-2.0-GGUF](https://huggingface.co/inclusionAI/Ling-flash-2.0-GGUF)
-- [2025/9/24] [inclusionAI/Ring-mini-2.0-GGUF](https://huggingface.co/inclusionAI/Ring-mini-2.0-GGUF)
-- [2025/9/24] [inclusionAI/Ling-mini-2.0-GGUF](https://huggingface.co/inclusionAI/Ling-mini-2.0-GGUF)
-- [2025/9/18] [inclusionAI/ASearcher-Web-QwQ-V2](https://huggingface.co/inclusionAI/ASearcher-Web-QwQ-V2)
-- [2025/9/17] [inclusionAI/Ling-flash-base-2.0](https://huggingface.co/inclusionAI/Ling-flash-base-2.0)
-- [2025/9/17] [inclusionAI/Ling-flash-2.0](https://huggingface.co/inclusionAI/Ling-flash-2.0)
-- [2025/9/11] [inclusionAI/LLaDA-MoE-7B-A1B-Instruct](https://huggingface.co/inclusionAI/LLaDA-MoE-7B-A1B-Instruct)
-- [2025/9/09] [inclusionAI/Ring-mini-2.0](https://huggingface.co/inclusionAI/Ring-mini-2.0)
-- [2025/9/09] [inclusionAI/Ling-mini-2.0](https://huggingface.co/inclusionAI/Ling-mini-2.0) ([Ling-V2 Collection](https://huggingface.co/collections/inclusionAI/ling-v2-68bf1dd2fc34c306c1fa6f86))
+
+**Hi there** 👋 Welcome to inclusionAI, home for Ant Group's Artificial General Intelligence (AGI) initiative. Here you can find Large Language Models (LLMs), Reinforcement Learning (RL), and other systems related to our models.
+
+- Follow [our GitHub](https://github.com/inclusionAI), the Ant Ling team on [Twitter](https://x.com/AntLingAGI), or the Ant OSS team on [Twitter](https://x.com/ant_oss) for new model releases and open-source updates.
+- Stay connected on [Discord](https://discord.gg/2X4zBSz9c6).
+- Experience the models at [SiliconFlow](https://www.siliconflow.com/) or [ZenMux](https://zenmux.ai/).