LFM2-700M • Quantized Version (GGUF)

Quantized GGUF version of the LiquidAI/LFM2-700M model.

- Format: GGUF
- Use with: liquid_llama.cpp
- Supported precisions: Q4_0, Q4_K, etc.
Download
wget https://huggingface.co/yasserrmd/LFM2-700M-gguf/resolve/main/lfm2-700m.Q4_K.gguf
(Adjust filename for other quant formats like Q4_0, if available.)
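The download URL above follows a fixed pattern, with only the quant suffix changing. A minimal sketch of building the URL for a chosen quant format (the helper name is ours, and suffixes other than Q4_K are assumptions that may not exist in the repo):

```python
# Base path for files in this repo; Q4_K is the only filename
# confirmed by this card, other suffixes are assumptions.
BASE = "https://huggingface.co/yasserrmd/LFM2-700M-gguf/resolve/main"

def gguf_url(quant: str = "Q4_K") -> str:
    """Return the download URL for the given quant suffix."""
    return f"{BASE}/lfm2-700m.{quant}.gguf"

print(gguf_url("Q4_K"))
```

Pass the resulting URL to wget or curl as shown above.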
Notes

- Only compatible with liquid_llama.cpp (not llama.cpp).
- Replace Q4_K with your chosen quant version.
Model tree for yasserrmd/LFM2-700M-gguf

- Base model: LiquidAI/LFM2-700M