Beebey committed on
Commit a1acc74 · verified · 1 Parent(s): 43c2758

Update README.md

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -22,9 +22,9 @@ datasets:
   - nvidia/OpenMathInstruct-2
 ---
 
-# SmallCoder V2 (303M)
+# SmallCoder (303M)
 
-SmallCoder V2 is a **303 Million parameter** Large Language Model (LLM) trained from scratch, specializing in code generation and algorithmic reasoning.
+SmallCoder is a **303 Million parameter** Large Language Model (LLM) trained from scratch, specializing in code generation and algorithmic reasoning.
 
 This checkpoint is the result of a 6 Billion token Supervised Fine-Tuning (SFT) run, which **fixed a critical End-of-Sequence (EOS) token bug** present in previous versions.
 
@@ -34,17 +34,17 @@ This model demonstrates state-of-the-art (SOTA) coding performance for its size,
 
 ## 🚀 Key Performance (Benchmarks)
 
-The goal of SmallCoder V2 was to maximize coding performance in a compact (<500M) package. This model achieves SOTA scores that rival or exceed models in the 1B+ class.
+The goal of SmallCoder was to maximize coding performance in a compact (<500M) package. This model achieves SOTA scores that rival or exceed models in the 1B+ class.
 
 | Model | Size | HumanEval (pass@1) | MBPP (pass@1) |
 | :--- | :---: | :---: | :---: |
-| **SmallCoder V2 (S4.1)** | **303M** | **27.4%** | **31.0%** |
+| **SmallCoder (S4.1)** | **303M** | **27.4%** | **31.0%** |
 | TinyLlama-1.1B | 1.1B | ~26.4% | ~27.6% |
 | MPT-1B-Instruct | 1.0B | ~22.0% | ~25.0% |
 | Zephyr-1.3B SFT | 1.3B | 31.0% | 34.0% |
 | Mistral-7B Base | 7B | 30.5% | 47.5% |
 
-SmallCoder V2 (303M) nearly achieves **parity with Mistral 7B** on HumanEval while being **23x smaller**.
+SmallCoder (303M) nearly achieves **parity with Mistral 7B** on HumanEval while being **23x smaller**.
 
 ## 🧠 Model Architecture
 
50