
HunyuanViT-v1

GitHub | [Hunyuan-Large-Vision Technical Report]


HunyuanViT is a state-of-the-art multi-modal visual encoder that supports native-resolution inputs and strong vision-language alignment. The model is carefully trained on visual-language grounding, reasoning, and understanding data to serve as a modern visual encoder for the multi-modal learning field.

Model Details

  • Architecture:

| Visual Encoder | Patch Size | #Layers | Hidden Size | Intermediate Size | #Attention Heads | Activation Function |
|----------------|------------|---------|-------------|-------------------|------------------|---------------------|
| Hunyuan ViT    | 16         | 40      | 1,536       | 6,144             | 16               | GELU                |
  • Model Stats:

    • Params: ~1B
    • Image size: any resolution (native-resolution input)
  • Performance Comparisons: see the figure below.

Performance Comparisons

[Figure: performance comparison of HunyuanViT with other visual encoders]
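As a rough sanity check, the ~1B parameter figure in Model Stats is consistent with the architecture table above. The sketch below applies the standard transformer-block parameter formula (4·d² for the attention projections plus 2·d·d_ff for the MLP), ignoring embeddings, biases, and norms; it is a back-of-the-envelope estimate, not the exact model definition.

# Estimate the parameter count from the architecture table above.
layers, hidden, intermediate = 40, 1536, 6144

attn_params = 4 * hidden * hidden        # Q, K, V, and output projections
mlp_params = 2 * hidden * intermediate   # FFN up- and down-projections
total = layers * (attn_params + mlp_params)

print(f"~{total / 1e9:.2f}B parameters") # ~1.13B, i.e. roughly 1B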

Quick Start

Setup

Please install flash_attn before using HunyuanViT.

pip install flash-attn --no-build-isolation
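Since the flash-attn build can fail silently in some environments, a minimal import check (assuming only that the package is importable as flash_attn) can save a confusing traceback when loading the model:

import importlib.util

# Fail fast with a clear message if flash-attn is missing.
if importlib.util.find_spec("flash_attn") is None:
    raise ImportError(
        "flash_attn not found; install it with: pip install flash-attn --no-build-isolation"
    )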

Usage

import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModel

# Load the model and processor; trust_remote_code is required because the
# architecture is defined in the model repository rather than in transformers.
model = AutoModel.from_pretrained(
    "tencent/HunyuanViT",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
processor = AutoProcessor.from_pretrained(
    "tencent/HunyuanViT",
    trust_remote_code=True,
)
model = model.eval().to("cuda")

# Download a sample COCO image and extract its visual features.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=[image], return_tensors="pt").to("cuda", torch.bfloat16)
with torch.no_grad():
    image_features = model.get_image_features(**inputs)
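The exact shape and semantics of image_features are defined by the model's remote code. As a usage sketch, assuming the call returns one pooled embedding per image of shape (num_images, dim), the features can be L2-normalized so that dot products between rows give cosine similarities:

import torch.nn.functional as F

# Assumption: image_features has shape (num_images, dim).
feats = F.normalize(image_features.float(), dim=-1)
similarity = feats @ feats.T  # pairwise cosine similarity between images
print(image_features.shape, similarity.shape)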

Citation

If you find our work helpful, please consider citing it.

@misc{sun2024hunyuanlargeopensourcemoemodel,
    title={Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent},
    author={Xingwu Sun and Yanfeng Chen and Yiqing Huang and Ruobing Xie and Jiaqi Zhu and Kai Zhang and Shuaipeng Li and Zhen Yang and Jonny Han and Xiaobo Shu and Jiahao Bu and Zhongzhi Chen and Xuemeng Huang and Fengzong Lian and Saiyong Yang and Jianfeng Yan and Yuyuan Zeng and Xiaoqin Ren and Chao Yu and Lulu Wu and Yue Mao and Tao Yang and Suncong Zheng and Kan Wu and Dian Jiao and Jinbao Xue and Xipeng Zhang and Decheng Wu and Kai Liu and Dengpeng Wu and Guanghui Xu and Shaohua Chen and Shuang Chen and Xiao Feng and Yigeng Hong and Junqiang Zheng and Chengcheng Xu and Zongwei Li and Xiong Kuang and Jianglu Hu and Yiqi Chen and Yuchi Deng and Guiyang Li and Ao Liu and Chenchen Zhang and Shihui Hu and Zilong Zhao and Zifan Wu and Yao Ding and Weichao Wang and Han Liu and Roberts Wang and Hao Fei and Peijie She and Ze Zhao and Xun Cao and Hai Wang and Fusheng Xiang and Mengyuan Huang and Zhiyuan Xiong and Bin Hu and Xuebin Hou and Lei Jiang and Jiajia Wu and Yaping Deng and Yi Shen and Qian Wang and Weijie Liu and Jie Liu and Meng Chen and Liang Dong and Weiwen Jia and Hu Chen and Feifei Liu and Rui Yuan and Huilin Xu and Zhenxiang Yan and Tengfei Cao and Zhichao Hu and Xinhua Feng and Dong Du and Tinghao She and Yangyu Tao and Feng Zhang and Jianchen Zhu and Chengzhong Xu and Xirui Li and Chong Zha and Wen Ouyang and Yinben Xia and Xiang Li and Zekun He and Rongpeng Chen and Jiawei Song and Ruibin Chen and Fan Jiang and Chongqing Zhao and Bo Wang and Hao Gong and Rong Gan and Winston Hu and Zhanhui Kang and Yong Yang and Yuhong Liu and Di Wang and Jie Jiang},
    year={2024},
    eprint={2411.02265},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2411.02265}
}