---
title: README
emoji: ⚡
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---
<div align="center">
  <b><font size="6">OpenGVLab</font></b>
</div>
Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on vision-centric AI research. The "GV" in our name stands for General Vision: models with a general understanding of vision, so that little effort is needed to adapt them to new vision-based tasks.
# Models
- [InternVL](https://github.com/OpenGVLab/InternVL): a pioneering open-source alternative to GPT-4V (see the loading sketch after this list).
- [InternImage](https://github.com/OpenGVLab/InternImage): a large-scale vision foundation model built on deformable convolutions.
- [InternVideo](https://github.com/OpenGVLab/InternVideo): large-scale video foundation models for multimodal understanding.
- [VideoChat](https://github.com/OpenGVLab/Ask-Anything): an end-to-end chat assistant for video comprehension.
- [All-Seeing-Project](https://github.com/OpenGVLab/all-seeing): towards panoptic visual recognition and understanding of the open world.
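
Most of these models are also published under the [OpenGVLab](https://huggingface.co/OpenGVLab) organization on the Hugging Face Hub. Below is a minimal loading sketch using `transformers`; the checkpoint id `OpenGVLab/InternVL2-8B` and the dtype choice are illustrative assumptions, so check the individual model card for the exact id and usage.

```python
# Minimal sketch: loading an InternVL checkpoint from the Hugging Face Hub.
# The repo id "OpenGVLab/InternVL2-8B" is an assumed example; consult the
# model card for the exact id and recommended inference settings.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "OpenGVLab/InternVL2-8B"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to reduce memory footprint
    trust_remote_code=True,       # InternVL repos ship custom modeling code
).eval()
```
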
# Datasets
- [ShareGPT4o](https://sharegpt4o.github.io/): a groundbreaking large-scale resource that we plan to open-source, comprising 200K meticulously annotated images, 10K videos with highly descriptive captions, and 10K audio files with detailed descriptions.
- [InternVid](https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVid): a large-scale video-text dataset for multimodal understanding and generation.
- [MMPR](https://huggingface.co/datasets/OpenGVLab/MMPR): a high-quality, large-scale multimodal preference dataset (see the loading sketch after this list).
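
Datasets hosted on the Hub can be pulled with the `datasets` library. The sketch below assumes MMPR exposes a `train` split; consult the dataset card for the actual splits and record fields.

```python
# Minimal sketch: loading the MMPR preference dataset from the Hugging Face Hub.
# The "train" split name and the record layout are assumptions; see the dataset card.
from datasets import load_dataset

mmpr = load_dataset("OpenGVLab/MMPR", split="train")
print(mmpr[0])  # inspect one preference record
```
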
# Benchmarks
- [MVBench](https://github.com/OpenGVLab/Ask-Anything/tree/main/video_chat2): a comprehensive benchmark for multimodal video understanding.
- [CRPE](https://github.com/OpenGVLab/all-seeing/tree/main/all-seeing-v2): a benchmark covering all elements of relation triplets (subject, predicate, object), providing a systematic platform for evaluating relation comprehension.
- [MM-NIAH](https://github.com/OpenGVLab/MM-NIAH): a comprehensive benchmark for comprehension of long multimodal documents.
- [GMAI-MMBench](https://huggingface.co/datasets/OpenGVLab/GMAI-MMBench): a comprehensive multimodal evaluation benchmark towards general medical AI.