ModelCloud/GLM-4.6-REAP-268B-A32B-GPTQMODEL-W4A16 • Text Generation • 269B params • Updated 10 days ago • 39 downloads • 1 like
ModelCloud/Qwen2.5-0.5B-Instruct-gptqmodel-w4a16 • Text Generation • 0.5B params • Updated 19 days ago • 131 downloads • 1 like
ModelCloud/Qwen1.5-1.8B-Chat-GPTQ-4bits-dynamic-cfg-with-lm_head-symFalse • 0.6B params • Updated Mar 3 • 9.57k downloads
ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v1 • Text Generation • 2B params • Updated Jan 24 • 7 downloads • 5 likes
ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2 • Text Generation • 2B params • Updated Jan 24 • 40 downloads • 7 likes
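The checkpoints above are GPTQ-quantized (W4A16) models that can be loaded like any other Hugging Face repo. Below is a minimal sketch, assuming a GPU environment with transformers plus a GPTQ kernel backend (e.g. the gptqmodel package) installed, so the quantization config stored in the repo is picked up automatically; the prompt text is illustrative only.

```python
# Minimal sketch: load one of the W4A16 GPTQ checkpoints listed above with
# transformers. Assumes a GPU and a GPTQ kernel backend (e.g. gptqmodel) are
# installed so the repo's quantization_config can be executed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelCloud/Qwen2.5-0.5B-Instruct-gptqmodel-w4a16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place the quantized weights on the available GPU(s)
    torch_dtype="auto",  # keep the activation dtype the checkpoint was saved with
)

# Chat-style prompt through the tokenizer's chat template, then decode only
# the newly generated tokens.
messages = [{"role": "user", "content": "Summarize what W4A16 quantization means."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same pattern applies to the larger repos in the list; only the model id and the required GPU memory change.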