Model Details

The best Meta-Llama-3-8B-Instruct checkpoint unlearned with RR on the Textbook-HP forget set. For more details, please see our paper.
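
A minimal usage sketch (not from the paper): loading the checkpoint with the Hugging Face Transformers library. The repository ID and BF16 tensor type are taken from this page; the prompt and generation settings are only illustrative.

```python
# Minimal sketch: load the unlearned checkpoint and run a single chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WhyTheMoon/Llama-3-8B-Instruct_RR_Textbook-HP"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint is stored in BF16
    device_map="auto",
)

# Llama-3-Instruct expects its chat template to be applied before generation.
messages = [{"role": "user", "content": "Who wrote Pride and Prejudice?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```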

Performance

| Model | HP MCQ | tinyMMLU | GSM8k | TriviaQA |
|---|---|---|---|---|
| Llama-3-8B-Instruct | 77.80 | 59.21 | 75.28 | 51.09 |
| Llama-3-8B-Instruct_RR_Textbook-HP | 25.42 | 59.65 | 75.59 | 51.74 |
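
The scores above are reported in our paper. As a hedged sketch only, the three public benchmarks could be re-run with lm-evaluation-harness roughly as follows; the task names (tinyMMLU, gsm8k, triviaqa) are assumptions about the harness's task registry, and HP MCQ is the paper's own benchmark, so it is not covered here.

```python
# Hedged sketch: re-running the public benchmarks with lm-evaluation-harness.
# Task names are assumed; HP MCQ is the paper's custom benchmark and is not included.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=WhyTheMoon/Llama-3-8B-Instruct_RR_Textbook-HP,dtype=bfloat16",
    tasks=["tinyMMLU", "gsm8k", "triviaqa"],
    batch_size=8,
)
print(results["results"])  # per-task metrics as a nested dict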

Citation

If you find this useful in your research, please consider citing our paper:

@misc{zhu2025llmunlearningexpertcurated,
      title={LLM Unlearning Without an Expert Curated Dataset}, 
      author={Xiaoyuan Zhu and Muru Zhang and Ollie Liu and Robin Jia and Willie Neiswanger},
      year={2025},
      eprint={2508.06595},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.06595}, 
}