Model Details

The best Meta-Llama-3-8B-Instruct checkpoint unlearned with RMU on the Textbook-Cyber forget set. For more details, please see our paper.
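The checkpoint can be loaded like any Hugging Face causal LM. The sketch below is an assumption on my part, not part of the release: it uses the standard transformers `Auto*` API with this model's hub id, and the dtype/device settings are illustrative choices you may need to adjust for your hardware.

```python
MODEL_ID = "WhyTheMoon/Llama-3-8B-Instruct_RMU_Textbook-Cyber"

def load_unlearned_model(model_id: str = MODEL_ID):
    """Load the unlearned checkpoint with the standard transformers API."""
    # Heavy imports are kept inside the function so this module stays
    # importable even without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # assumption: bf16 is enough for an 8B model on one modern GPU
        device_map="auto",
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_unlearned_model()
    messages = [{"role": "user", "content": "Hello!"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since this is a Llama-3-Instruct derivative, using the tokenizer's chat template (as above) rather than raw prompts should match the formatting the base model was instruction-tuned on.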


Performance

| Model | WMDP-Cyber | tinyMMLU | GSM8k | TriviaQA |
|---|---|---|---|---|
| Llama-3-8B-Instruct | 46.80 | 59.21 | 75.28 | 51.09 |
| Llama-3-8B-Instruct_RMU_Textbook-Cyber | 26.47 | 51.43 | 72.70 | 50.64 |

Lower WMDP-Cyber accuracy indicates more successful unlearning; the remaining benchmarks measure retained general capability.
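These benchmarks can be reproduced with EleutherAI's lm-evaluation-harness. The sketch below is a guess at the setup, not the authors' evaluation script: the task ids (`wmdp_cyber`, `tinyMMLU`, `gsm8k`, `triviaqa`) are assumptions that should be checked against `lm_eval --tasks list`, and few-shot and generation settings may differ from those used in the paper.

```python
MODEL_ID = "WhyTheMoon/Llama-3-8B-Instruct_RMU_Textbook-Cyber"
TASKS = ["wmdp_cyber", "tinyMMLU", "gsm8k", "triviaqa"]  # assumed harness task ids

def evaluate(model_id: str = MODEL_ID, tasks=TASKS) -> dict:
    """Run the forget/retain benchmark suite via lm-evaluation-harness."""
    # Imported inside the function so the module loads without lm_eval installed.
    import lm_eval

    results = lm_eval.simple_evaluate(
        model="hf",
        model_args=f"pretrained={model_id}",
        tasks=tasks,
    )
    return results["results"]  # per-task metric dictionaries

if __name__ == "__main__":
    for task, metrics in evaluate().items():
        print(task, metrics)
```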

Citation

If you find this useful in your research, please consider citing our paper:

@misc{zhu2025llmunlearningexpertcurated,
      title={LLM Unlearning Without an Expert Curated Dataset}, 
      author={Xiaoyuan Zhu and Muru Zhang and Ollie Liu and Robin Jia and Willie Neiswanger},
      year={2025},
      eprint={2508.06595},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.06595}, 
}