yihao-llm-7b (LoRA merged, zh-oriented)

Purpose: Chinese-language dialogue, Q&A, and business-assistant use.
Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B.
Training: LoRA/QLoRA, max_seq_len=512 (the sample data is for demonstration only).
Limitations: may produce inaccurate content; combine with business rules and human review before use.
Compliance: wherever output is shown externally, label it explicitly as "AI-generated", and keep audit logs and a handling process in place.
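The title notes the weights are "LoRA merged": the adapter's low-rank update has been folded into the base weights, so inference needs no extra adapter machinery. A toy sketch of the merge arithmetic (dimensions and scaling here are illustrative, not this model's actual hyperparameters):

```python
import torch

# Toy LoRA merge: W' = W + (alpha / r) * B @ A
d, r, alpha = 8, 2, 32
W = torch.randn(d, d)          # frozen base weight
A = torch.randn(r, d) * 0.01   # LoRA down-projection
B = torch.randn(d, r) * 0.01   # LoRA up-projection

W_merged = W + (alpha / r) * B @ A  # what a "merged" checkpoint ships

x = torch.randn(d)
# The merged weight reproduces base-plus-adapter output exactly.
y_adapter = W @ x + (alpha / r) * B @ (A @ x)
y_merged = W_merged @ x
print(torch.allclose(y_adapter, y_merged, atol=1e-5))  # True
```

In practice this folding is what lets the repo be loaded as an ordinary causal LM, with no peft dependency at inference time.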

Quick start

from transformers import AutoTokenizer, AutoModelForCausalLM
tok = AutoTokenizer.from_pretrained("yihao-ai/yihao-llm-7b")
m = AutoModelForCausalLM.from_pretrained("yihao-ai/yihao-llm-7b", torch_dtype="auto", device_map="auto")
inputs = tok("你好,请介绍一下这个模型。", return_tensors="pt").to(m.device)
out = m.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
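Per the card's compliance note, externally shown output should carry an explicit "AI-generated" label with log retention. A minimal sketch (the helper name and log format are assumptions, not part of this repo):

```python
import json
import time

def label_and_log(reply: str, log_path: str = "gen_audit.jsonl") -> str:
    """Prefix a generated reply with an AI-generated notice and append an audit record."""
    record = {"ts": time.time(), "reply": reply}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return f"[AI-generated] {reply}"

print(label_and_log("你好!有什么可以帮您?"))
```

Adapt the notice text and log schema to your own display surface and retention policy.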
Weights: Safetensors, 8B params, F16.
