# hacking-instruct-exp-72B
This is an experimental version of Qwen2.5-72B-Instruct, fine-tuned on cybersecurity datasets.
## Model Details
- Base Model: Qwen/Qwen2.5-72B-Instruct
- Fine-tuning Method: Supervised Fine-Tuning (SFT)
- Training Data: Cybersecurity instruction datasets
- Checkpoint: checkpoint-2424
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ccss17/hacking-instruct-exp-72B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```