Update README.md
README.md CHANGED
@@ -1,26 +1,55 @@
-A deepseek-coder 1.3B model finetuned on [ntp-mathlib-instruct-context](https://huggingface.co/datasets/l3lab/ntp-mathlib-instruct-context).
-
-It is specifically finetuned for Lean 4 tactic prediction given proof states and optional file contexts. See the tutorial for code and more information.

#### Performance

#### Citation

```
-@misc{
-  author = {Sean Welleck},
-  title = {Neural
year = {2024},
-  howpublished = {\url{https://github.com/cmu-l3/ntptutorial-II}},
}
```

+## [miniCTX: Neural Theorem Proving with (Long-)Contexts]()
+
+File-tuned context model from [miniCTX: Neural Theorem Proving with (Long-)Contexts]().
+
+- Base language model: `deepseek-ai/deepseek-coder-1.3b-base`
+- Data: [ntp-mathlib-instruct-context](https://huggingface.co/datasets/l3lab/ntp-mathlib-instruct-context)
+
+It is specifically finetuned for Lean 4 tactic prediction given proof states and optional file contexts.

#### Performance

+Please see our paper.
+
+#### Example input
+```
+/- You are proving a theorem in Lean 4.
+You are given the following information:
+- The file contents up to the current tactic, inside [CTX]...[/CTX]
+- The current proof state, inside [STATE]...[/STATE]
+
+Your task is to generate the next tactic in the proof.
+Put the next tactic inside [TAC]...[/TAC]
+-/
+[CTX]
+import Mathlib.Data.Nat.Prime
+
+theorem test_thm (m n : Nat) (h : m.Coprime n) : m.gcd n = 1 := by
+
+[/CTX]
+[STATE]
+m n : ℕ
+h : Nat.Coprime m n
+⊢ Nat.gcd m n = 1
+[/STATE]
+[TAC]
+```
+
+#### Example output
+```
+rw [Nat.Coprime] at h
+[/TAC]
+```
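
For reference, here is a minimal Lean 4 sketch of how the predicted tactic fits into the example theorem. Only `rw [Nat.Coprime] at h` comes from the card's example output; the closing `exact h` is an assumed extra step to finish the proof, not part of the model output.

```lean
import Mathlib.Data.Nat.Prime

theorem test_thm (m n : Nat) (h : m.Coprime n) : m.gcd n = 1 := by
  -- Predicted tactic: `Nat.Coprime m n` is by definition `Nat.gcd m n = 1`,
  -- so this rewrite turns `h` into `Nat.gcd m n = 1`.
  rw [Nat.Coprime] at h
  -- Assumed finishing step (not shown in the example output above).
  exact h
```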

#### Citation

+Please cite:
```
+@misc{hu2024minictx,
+  author = {Jiewen Hu and Thomas Zhu and Sean Welleck},
+  title = {miniCTX: Neural Theorem Proving with (Long-)Contexts},
year = {2024},
+  eprint = {},
+  archivePrefix = {arXiv},
}
```
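
The updated card documents the prompt format but no inference code. Below is a minimal, hypothetical sketch of querying the model with that format via Hugging Face `transformers`; the repository id is a placeholder, and `suggest_tactic`, the greedy decoding settings, and the `[/TAC]` truncation are illustrative choices rather than anything specified in the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "<this-model-repo-id>"  # placeholder: substitute this model's actual Hugging Face repo id

# Prompt template mirroring the "Example input" documented above.
PROMPT_TEMPLATE = """/- You are proving a theorem in Lean 4.
You are given the following information:
- The file contents up to the current tactic, inside [CTX]...[/CTX]
- The current proof state, inside [STATE]...[/STATE]

Your task is to generate the next tactic in the proof.
Put the next tactic inside [TAC]...[/TAC]
-/
[CTX]
{context}
[/CTX]
[STATE]
{state}
[/STATE]
[TAC]
"""


def suggest_tactic(context: str, state: str, max_new_tokens: int = 64) -> str:
    """Generate one next-tactic suggestion for the given file context and proof state."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

    prompt = PROMPT_TEMPLATE.format(context=context, state=state)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)

    # Keep only the newly generated text and cut it off at the closing [/TAC] tag.
    completion = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return completion.split("[/TAC]")[0].strip()


if __name__ == "__main__":
    context = (
        "import Mathlib.Data.Nat.Prime\n\n"
        "theorem test_thm (m n : Nat) (h : m.Coprime n) : m.gcd n = 1 := by\n"
    )
    state = "m n : ℕ\nh : Nat.Coprime m n\n⊢ Nat.gcd m n = 1"
    print(suggest_tactic(context, state))  # e.g. rw [Nat.Coprime] at h
```

Greedy decoding returns a single suggestion; in practice one might instead sample several candidate tactics and check each against the proof state in Lean.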