Diff Interpretation Tuning
ttw committed
Commit e47f102 · verified · 1 Parent(s): e1dd5c5

Update README.md

Files changed (1): README.md (+17 -1)
README.md CHANGED
@@ -2,4 +2,20 @@
  license: mit
  ---
 
- In order to play around with the weight diffs and DIT adapters, please check out the [Google Colab demo notebook](https://colab.research.google.com/drive/12YD_9GRT-y_hFOBqXzyI4eN_lJGKiXwN?usp=sharing).
+ This repository contains the weight diffs and DIT adapters used in the paper [Learning to Interpret Weight Differences in Language Models (Goel et al. 2025)](https://arxiv.org/abs/2510.05092).
+ This paper introduces *Diff Interpretation Tuning* (DIT), a method that trains a LoRA adapter that can be applied to a model to get it to describe its own finetuning-induced modifications.
+
+ To play around with the weight diffs and DIT adapters from the paper, please check out our [Google Colab demo notebook](https://colab.research.google.com/drive/12YD_9GRT-y_hFOBqXzyI4eN_lJGKiXwN?usp=sharing).
+ The code used to train and evaluate the weight diffs and DIT adapters can be found at [github.com/Aviously/diff-interpretation-tuning](https://github.com/Aviously/diff-interpretation-tuning).
+
+ You can cite our work using the following BibTeX:
+ ```
+ @misc{goel2025learninginterpretweightdifferences,
+       title={Learning to Interpret Weight Differences in Language Models},
+       author={Avichal Goel and Yoon Kim and Nir Shavit and Tony T. Wang},
+       year={2025},
+       eprint={2510.05092},
+       archivePrefix={arXiv},
+       url={https://arxiv.org/abs/2510.05092},
+ }
+ ```
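For intuition about how the pieces described in the README fit together, here is a minimal, illustrative sketch (not the paper's actual code) of applying a DIT adapter with `transformers` and `peft`, assuming the adapters are standard PEFT LoRA checkpoints. The base model ID, adapter path, and prompt are hypothetical placeholders; the Colab notebook linked above contains the real loading code.

```
# Illustrative sketch only. Assumes the DIT adapter is a standard PEFT LoRA
# checkpoint; BASE_MODEL and DIT_ADAPTER are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "Qwen/Qwen2.5-1.5B-Instruct"  # hypothetical base model
DIT_ADAPTER = "path/to/dit-adapter"        # hypothetical adapter checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16)

# Step 1 (not shown): apply a finetuning weight diff to the base model,
# e.g. by loading the finetuned checkpoint whose diff you want to interpret.
# Step 2: stack the DIT adapter on top, then ask the model to describe
# its own finetuning-induced modifications.
model = PeftModel.from_pretrained(model, DIT_ADAPTER)

prompt = "How were you modified by finetuning?"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```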