Beebey committed (verified)
Commit 716baf6 · Parent(s): 2b66fa5

Create README.md

Files changed (1):
  1. README.md (ADDED, +170 -0)
 
---
license: apache-2.0
language:
- en
- code
library_name: transformers
tags:
- smallcoder
- code-llm
- sft
- 303m
- trc
datasets:
- HuggingFaceFW/fineweb-edu
- nvidia/Nemotron-Pretraining-SFT-v1
- bigcode/starcoderdata
- nvidia/Nemotron-Pretraining-Code-v1
- HuggingFaceFW/finewiki
- open-web-math/open-web-math
- nvidia/Nemotron-CC-Math-v1
- nvidia/OpenCodeInstruct
- nvidia/OpenMathInstruct-2
---
# SmallCoder V2 (303M)

SmallCoder V2 is a **303-million-parameter** language model (LLM) trained from scratch, specializing in code generation and algorithmic reasoning.

This checkpoint is the result of a 6-billion-token Supervised Fine-Tuning (SFT) run, which **fixed a critical End-of-Sequence (EOS) token bug** present in previous versions.

This model demonstrates state-of-the-art (SOTA) coding performance for its size, outperforming several models above 1B parameters and competing with models 23x its size.

**Trained with support from Google's TPU Research Cloud (TRC) program.**
## 🚀 Key Performance (Benchmarks)

The goal of SmallCoder V2 was to maximize coding performance in a compact (<500M-parameter) package. This model achieves scores that rival or exceed models in the 1B+ class.

| Model | Size | HumanEval (pass@1) | MBPP (pass@1) |
| :--- | :---: | :---: | :---: |
| **SmallCoder V2 (S4.1)** | **303M** | **27.4%** | **31.0%** |
| TinyLlama-1.1B | 1.1B | ~26.4% | ~27.6% |
| MPT-1B-Instruct | 1.0B | ~22.0% | ~25.0% |
| Zephyr-1.3B SFT | 1.3B | 31.0% | 34.0% |
| Mistral-7B Base | 7B | 30.5% | 47.5% |

SmallCoder V2 (303M) comes within about 3 points of **Mistral 7B** on HumanEval while being **23x smaller**.
## 🧠 Model Architecture

This model uses a Llama-type architecture (MHA) with 303M parameters.

* **Architecture**: LlamaForCausalLM (MHA)
* **Hidden Size**: 768
* **Layers**: 24
* **Attention Heads**: 8
* **KV Heads**: 8 (standard MHA)
* **Vocab Size**: 49152 (Tokenizer: `bigcode/starcoder`)
* **Max Context**: 1024 tokens
```python
from transformers import LlamaConfig

config = LlamaConfig(
    vocab_size=49152,
    hidden_size=768,
    num_hidden_layers=24,
    intermediate_size=3072,
    num_attention_heads=8,
    num_key_value_heads=8,
    max_position_embeddings=1024,
    # ... remaining fields left at the LlamaConfig defaults
)
```
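As a sanity check, the parameter count implied by this config can be recomputed by instantiating the architecture on PyTorch's meta device (shapes only, no weight memory). This is a quick verification sketch, not part of the training code.

```python
import torch
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    vocab_size=49152,
    hidden_size=768,
    num_hidden_layers=24,
    intermediate_size=3072,
    num_attention_heads=8,
    num_key_value_heads=8,
    max_position_embeddings=1024,
)

# Build the model on the meta device: parameter shapes are created, but no real tensors.
with torch.device("meta"):
    model = LlamaForCausalLM(config)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")  # roughly 0.3B with untied embeddings
```

Assuming untied embeddings (which the 303M total suggests), the input and output embedding matrices alone (2 × 49152 × 768) account for roughly 75M of the parameters.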
## 🛠️ Training Plan (4 Stages)

This model is the result of a multi-stage training curriculum totaling **29.8 billion tokens** (6.3B + 7.5B + 10B + 6B across the four stages below).
### Stage 1: Linguistic Base (Completed)

* **Tokens**: 6.3B
* **Dataset**: `FineWeb-Edu`
* **Objective**: Learn natural language.
* **Loss**: 10.87 → **2.58**
### Stage 2: Code Specialization (Completed)

* **Tokens**: 7.5B
* **Dataset**: `Nemotron Synthetic Code Q/A CoT` (60%) / `StarCoderData` (40%) (a mixing sketch follows this list)
* **Objective**: Learn code syntax and reasoning.
* **Loss**: 5.00 → **1.25**

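The exact data pipeline for this stage is not published. As an illustration only, a 60/40 interleaving of two sources could be approximated with `datasets.interleave_datasets`; the Hub IDs come from the card metadata, while the config, split, and streaming choices here are assumptions.

```python
from datasets import load_dataset, interleave_datasets

# Stream both corpora (Hub IDs from the card metadata; config/split choices are illustrative).
code_qa = load_dataset("nvidia/Nemotron-Pretraining-Code-v1", split="train", streaming=True)
starcoder = load_dataset("bigcode/starcoderdata", data_dir="python", split="train", streaming=True)

# Draw ~60% of examples from the synthetic code Q/A corpus and ~40% from StarCoderData.
# In practice the two schemas would first need to be normalized to a shared text column.
mixture = interleave_datasets(
    [code_qa, starcoder],
    probabilities=[0.6, 0.4],
    seed=42,
)
```
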
### Stage 3: Math & Knowledge (Completed)

* **Tokens**: 10B
* **Dataset**: `Nemotron CC-Math-4plus` (40%) / `FineWiki-EN` (35%) / `Nemotron CC-Math-4` (15%) / `OpenWebMath` (10%)
* **Objective**: Learn mathematical reasoning.
* **Loss**: 2.77 → **1.55**
* **Result**: A solid base model (Wikitext PPL: 35.4).
### Stage 4.1: SFT (EOS-Fixed) (Completed)

* **Tokens**: 6B
* **Starting Checkpoint**: `stage-3/`
* **Dataset**: `Nemotron-SFT-Code` (45%), `OpenCodeInstruct` (30%), `OpenMathInstruct-2` (15%), `Nemotron-SFT-General` (10%)
* **Objective**: Align on code instructions and fix the EOS generation bug (a formatting sketch follows this list).
* **Loss**: 1.73 → **~0.70** (low point)

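The SFT preprocessing code is not published. The sketch below shows one plausible way to build training samples in the `User:`/`Assistant:` format used in the usage example further down, with an explicit EOS token appended to every answer, which is the behavior the EOS fix targets. Function and field names are illustrative, not the author's code.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ilanbeebey/smallcoder-303m")

def build_sft_example(instruction: str, answer: str) -> dict:
    """Turn one instruction/answer pair into input_ids and labels (illustrative)."""
    prompt_ids = tokenizer(f"User: {instruction}\nAssistant:", add_special_tokens=False)["input_ids"]
    # Appending eos_token_id to the target is what teaches the model to stop generating.
    answer_ids = tokenizer(f" {answer}", add_special_tokens=False)["input_ids"] + [tokenizer.eos_token_id]

    input_ids = (prompt_ids + answer_ids)[:1024]              # 1024-token training context
    labels = ([-100] * len(prompt_ids) + answer_ids)[:1024]   # loss on the answer (and EOS) only
    return {"input_ids": input_ids, "labels": labels, "attention_mask": [1] * len(input_ids)}
```
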
-----

## 📊 Detailed Benchmarks (Stage 4.1)

The SFT (code) scores are strong, while the generalist scores (math, reasoning) are low, indicating that the SFT run has heavily specialized the model (a "code specialist").
| Task | Benchmark | n-shot | Metric | Score |
| :--- | :--- | :---: | :--- | :---: |
| **Code** | **HumanEval** | 0 | **pass@1** | **27.4%** |
| **Code** | **MBPP** | 3 | **pass@1** | **31.0%** |
| **Math** | **GSM8k** | 0 | exact_match | **4.55%** |
| **General** | **Wikitext** | 0 | word_perplexity | 167.6 |
| **Reasoning** | **ARC Easy** | 0 | acc_norm | 34.6% |
| **Reasoning** | **ARC Challenge** | 0 | acc_norm | 22.8% |
| **Commonsense** | **HellaSwag** | 0 | acc_norm | 28.3% |

*The `humaneval`/`mbpp` scores are based on manual analysis (`max_gen_toks=512`), as official `lm-eval` benchmarks fail to evaluate this model due to SFT formatting and truncation issues.*

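For transparency on what such manual analysis can look like, here is a hedged, self-contained sketch of checking pass@1 for a single hand-written task: generate with the SFT prompt format, extract the code, and run reference asserts. This is an illustration, not the script behind the table, and it executes model-generated code, so run it in a sandbox.

```python
import re
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ilanbeebey/smallcoder-303m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# A toy HumanEval-style task (illustrative, not taken from the real benchmark).
task = "Write a Python function add(a, b) that returns the sum of a and b."
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"

prompt = f"User: {task}\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
)
# Keep only the newly generated tokens (drop the prompt).
completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Use a fenced code block if the model emitted one, otherwise the raw completion.
match = re.search(r"```(?:python)?\n(.*?)```", completion, re.DOTALL)
code = match.group(1) if match else completion

# Single-sample pass@1: does the generated code satisfy the reference tests?
namespace = {}
try:
    exec(code + "\n" + tests, namespace)  # caution: runs untrusted generated code
    print("pass")
except Exception as exc:
    print(f"fail: {exc}")
```
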
## ⚠️ Known Limitations

1. **Code Specialist:** Heavily optimized for code (27.4% HumanEval) at the expense of other skills. Performance on math (GSM8k 4.55%) and general knowledge (Wikitext PPL 167.6) is low. **This is a code specialist model, not a generalist.**
2. **Limited Context:** This model was trained exclusively on a sequence length of **1024 tokens**; longer prompts must be truncated, as in the sketch below.

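A minimal sketch of prompt truncation under the 1024-token limit, assuming the `User:`/`Assistant:` format from the usage example below; the token budget split is an arbitrary illustration.

```python
from transformers import AutoTokenizer

model_id = "ilanbeebey/smallcoder-303m"
tokenizer = AutoTokenizer.from_pretrained(model_id)

instruction = "explain this code: ..."  # stand-in for an over-long instruction
max_new_tokens = 512

# Budget: 1024 positions total, minus the tokens reserved for generation
# and a small allowance for the "User:" / "Assistant:" wrapper.
budget = 1024 - max_new_tokens - 8
instruction_ids = tokenizer(instruction, add_special_tokens=False)["input_ids"][:budget]
instruction = tokenizer.decode(instruction_ids)

prompt = f"User: {instruction}\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
```
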
## ⚡ How to Use

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ilanbeebey/smallcoder-303m"
device = "cuda"  # or "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
).to(device)

# Note the 'User:' and 'Assistant:' formatting
prompt = "User: Write a Python function to compute the Fibonacci sequence.\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Generation
# The model was trained to use tokenizer.eos_token_id
# It should stop automatically.
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Acknowledgements

### Trained with the Google TRC

This model was trained with support from Google's **TPU Research Cloud (TRC)** program. We thank Google for providing access to the TPU v4 infrastructure that made this training run possible.