4-bit Quantization Failure for Gemma 3 4B
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization config used for fine-tuning
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
self.language_model = AutoModelForCausalLM.from_pretrained(
    config.text_config.model_path,
    trust_remote_code=True,
    device_map=device_map,
    config=self.text_config,
    attn_implementation="flash_attention_2",
    **kwargs,
)

Fine-tuning fails with the assertion "assert module.weight.shape[1] == 1" in the fix_4bit_weight_quant_state_from_module definition of bitsandbytes. I tried updating transformers, bitsandbytes, and accelerate to the latest versions but had no luck. Current package versions: transformers==4.54.1, peft==0.17.0, bitsandbytes==0.47.0, accelerate==1.9.0. P.S. Gemma 3 12B seems to be fine.
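For context, here is a rough sketch of the kind of LoRA fine-tuning setup in which the assertion is hit; the LoRA rank, alpha, and target modules below are placeholders rather than my exact configuration:

from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# language_model is the 4-bit quantized model loaded above
model = prepare_model_for_kbit_training(language_model)

lora_config = LoraConfig(
    r=16,                                 # placeholder rank
    lora_alpha=32,                        # placeholder scaling
    target_modules=["q_proj", "v_proj"],  # placeholder target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)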
Has anyone faced this issue before?
Hi @shabha7092,
Welcome to Google's Gemma family of open-source models, and thanks for reaching out to us. I ran 4-bit (int4) quantization for the google/gemma-3-4b-it model, including with the bnb parameters you mentioned above, and did not face any difficulty or issues with these quantizations. Please find the attached gist file for your reference.
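For reference, a minimal sketch of the kind of load I verified (the full code is in the attached gist; loading the complete checkpoint via Gemma3ForConditionalGeneration here is an assumption, and the bnb parameters are the ones you posted):

import torch
from transformers import BitsAndBytesConfig, Gemma3ForConditionalGeneration

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# Load the full gemma-3-4b-it checkpoint with the 4-bit config
model = Gemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-4b-it",
    quantization_config=bnb_config,
    device_map="auto",
)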
Thanks.
If you haven't noticed, in the above code snippet I am trying to load only the language-model component of gemma-3-4b-it using AutoModelForCausalLM.from_pretrained; I am not using Gemma3ForConditionalGeneration to load the whole model. Let me take a step back and explain what I am doing:

Step 1) Load gemma-3-4b-it using Gemma3ForConditionalGeneration.

Step 2) Extract and save the language model:

model = Gemma3ForConditionalGeneration.from_pretrained("google/gemma-3-4b-it", device_map=None)
language_model = model.language_model
language_model.save_pretrained('/language_model')

Step 3) When I load this saved language model back using AutoModelForCausalLM.from_pretrained with the bnb config, I hit the error mentioned above (a minimal repro sketch is below).

Note that I did not face any issues doing the same thing with google/gemma-3-12b-it and was able to fine-tune it successfully. P.S. Upgrading the packages to the latest versions also did not help.
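For completeness, a minimal sketch of step 3 as I run it, assuming the language model was saved to /language_model as in step 2 (variable names are illustrative):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# Reloading the standalone language model in 4-bit is where the
# bitsandbytes assertion (module.weight.shape[1] == 1) fires.
language_model = AutoModelForCausalLM.from_pretrained(
    "/language_model",
    quantization_config=bnb_config,
    device_map="auto",
)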
Any update on this?