---
language:
- en
- id
tags:
- qwen
- code
- merged
- optimized
pipeline_tag: text-generation
license: apache-2.0
---
# Qwen2.5 Coder 1.5B Instruct Merged (Optimized)

This is an optimized, merged version of the fine-tuned Qwen2.5 Coder model. It combines:

- Base model: Qwen/Qwen2.5-Coder-1.5B-Instruct
- Fine-tuned adapter: iamgiven/Qwen2.5-Coder-1.5B-Instruct-cpp-lora

The merged weights are stored in float16 precision and serialized efficiently.
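For reference, a merge of this kind is typically produced with PEFT's `merge_and_unload`. The sketch below shows the general procedure under that assumption; it is not necessarily the exact script used, and the output directory name is illustrative.

```python
# Sketch of the merge procedure (not necessarily the exact script used).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct", torch_dtype=torch.float16
)
merged = PeftModel.from_pretrained(
    base, "iamgiven/Qwen2.5-Coder-1.5B-Instruct-cpp-lora"
).merge_and_unload()  # fold the LoRA weights into the base model

# Save in float16 with safetensors serialization; the directory name is illustrative.
merged.save_pretrained("qwen2.5-coder-1.5b-cpp-merged", safe_serialization=True)
AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct").save_pretrained(
    "qwen2.5-coder-1.5b-cpp-merged"
)
```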
## Usage

Replace `{full_repo_name}` with this repository's ID on the Hugging Face Hub.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "{full_repo_name}",
    trust_remote_code=True,
    torch_dtype=torch.float16,  # use float16 for efficiency
)
tokenizer = AutoTokenizer.from_pretrained("{full_repo_name}", trust_remote_code=True)
```
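Once loaded, the model can be prompted through the tokenizer's chat template. The snippet below continues from the loading code above and is a minimal sketch; the C++ prompt and generation settings are illustrative, not prescriptive.

```python
# Minimal generation sketch; the prompt and sampling settings below are illustrative.
messages = [
    {"role": "system", "content": "You are a helpful C++ coding assistant."},
    {"role": "user", "content": "Write a C++ function that reverses a std::string in place."},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=False,  # greedy decoding for reproducible output
)

# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```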