MOPO Labs presents what is, to our knowledge, the first 3-bit MLX-format quantization of GLM-4.5, which allows the original 355 GB model to run on a 192 GB Mac using LM Studio and other MLX-compatible tools. Note that this is a research build intended to explore the differences between a heavily quantized GLM-4.5 and the GLM-4.5 Air model; if you discover useful tips, please share them with monir@mopo.life or on Hugging Face.
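Why 3-bit quantization fits in 192 GB of unified memory: a minimal back-of-envelope sketch, assuming the 353B parameter count reported below and ignoring the per-group scale/bias overhead that real MLX quantization adds (some tensors, such as embeddings, may also stay in higher precision, so the true footprint is somewhat larger).

```python
# Rough memory estimate for the 3-bit quantized weights.
# Illustrative only: actual MLX quants carry group-wise scale/bias
# overhead, and activation/KV-cache memory comes on top of this.
PARAMS = 353e9          # parameter count reported on this card
BITS_PER_PARAM = 3      # 3-bit quantization

quantized_gb = PARAMS * BITS_PER_PARAM / 8 / 1e9
print(f"3-bit weights: ~{quantized_gb:.0f} GB")  # well under 192 GB
```

Even with quantization overhead and the KV cache, this leaves headroom within a 192 GB unified-memory budget, which is why the 3-bit build is the smallest practical target for this machine class.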

Model size: 353B params
Tensor types: BF16, U32, F32
Format: Safetensors

Model tree for monirmamoun/GLM-4.5-MLX-3bit

Base model: zai-org/GLM-4.5 (this model is one of 30 quantized versions of the base model)