Gemma 3 4B IT - GGUF (Q5_K_M)
- Derived from google/gemma-3-4b-it. Modified: quantized to GGUF (Q5_K_M) using llama.cpp (commit fd62188).
- See NOTICE for license/usage terms.
Files
- gemma3-4b-it.Q5_K_M.gguf - text-only quantization
- gemma3-4b-it-mmproj.gguf - vision projector (optional, not quantized)
- Modelfile
How to use (Ollama - text generation only)
```shell
ollama run hf.co/nkamiy/gemma3-4b-it-gguf:gemma3-4b-it.Q5_K_M.gguf
```
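The same model can also be invoked non-interactively by passing a prompt on the command line. A minimal sketch; the guard and the prompt text are illustrative additions, not part of this repository:

```shell
# Skip gracefully on machines where ollama is not installed.
if command -v ollama >/dev/null 2>&1; then
  # One-shot prompt: ollama prints the completion and exits
  # instead of opening an interactive session.
  ollama run hf.co/nkamiy/gemma3-4b-it-gguf:gemma3-4b-it.Q5_K_M.gguf \
    "Explain GGUF quantization in one sentence."
fi
```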
How to use (Ollama - image-text-to-text)
- Download the GGUF files gemma3-4b-it.Q5_K_M.gguf and gemma3-4b-it-mmproj.gguf, plus the Modelfile, and put them in one folder.
- cd to that folder.
- Run the following command:

```shell
ollama create gemma3-4b-q5km -f Modelfile
```
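After `ollama create` succeeds, the model appears in the local model list and can be prompted with an image. A hedged sketch; the image path and prompt are placeholders, and the guard is added so the snippet degrades gracefully without Ollama installed:

```shell
# Skip gracefully if ollama is not installed.
if command -v ollama >/dev/null 2>&1; then
  # Confirm the model created above is registered locally.
  ollama list | grep gemma3-4b-q5km
  # Multimodal prompt: the Ollama CLI picks up image file paths
  # included in the prompt text for vision-capable models.
  ollama run gemma3-4b-q5km "Describe this image: ./example.jpg"
fi
```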