evalstate committed
Commit · 59eece6
1 Parent(s): 1685a6c

update dataset validation instructions, remove unused train styles

Files changed:
- trl/SKILL.md (+18 -18)
- trl/references/training_methods.md (+4 -3)
- trl/references/troubleshooting.md (+9 -2)
- trl/scripts/validate_dataset.py (+0 -175)
trl/SKILL.md
CHANGED

````diff
@@ -1,6 +1,6 @@
 ---
 name: trl
-description: This skill should be used when users want to train or fine-tune language models using TRL (Transformer Reinforcement Learning) on Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO
+description: This skill should be used when users want to train or fine-tune language models using TRL (Transformer Reinforcement Learning) on Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO and reward modeling training methods, plus GGUF conversion for local deployment. Includes guidance on the TRL Jobs package, UV scripts with PEP 723 format, dataset preparation and validation, hardware selection, cost estimation, Trackio monitoring, Hub authentication, and model persistence. Should be invoked for tasks involving cloud GPU training, GGUF conversion, or when users mention training on Hugging Face Jobs without local GPU setup.
 license: Complete terms in LICENSE.txt
 ---
 

@@ -14,9 +14,7 @@ Train language models using TRL (Transformer Reinforcement Learning) on fully ma
 - **SFT** (Supervised Fine-Tuning) - Standard instruction tuning
 - **DPO** (Direct Preference Optimization) - Alignment from preference data
 - **GRPO** (Group Relative Policy Optimization) - Online RL training
-- **KTO** (Kahneman-Tversky Optimization) - Preference tuning without paired data
 - **Reward Modeling** - Train reward models for RLHF
-- **PPO** (Proximal Policy Optimization) - Classic RLHF method
 
 **For detailed TRL method documentation:**
 ```python

@@ -32,7 +30,7 @@ hf_doc_fetch("https://huggingface.co/docs/trl/dpo_trainer") # DPO
 
 Use this skill when users want to:
 - Fine-tune language models on cloud GPUs without local infrastructure
-- Train with TRL methods (SFT, DPO, GRPO,
+- Train with TRL methods (SFT, DPO, GRPO, etc.)
 - Run training jobs on Hugging Face Jobs infrastructure
 - Convert trained models to GGUF for local deployment (Ollama, LM Studio, llama.cpp)
 - Ensure trained models are permanently saved to the Hub

@@ -52,7 +50,7 @@ When assisting with training jobs:
 
 ## Local Script Dependencies
 
-To run scripts locally (like `
+To run scripts locally (like `estimate_cost.py`), install dependencies:
 ```bash
 pip install -r requirements.txt
 ```

@@ -63,7 +61,7 @@ Before starting any training job, verify:
 
 ### ✅ **Account & Authentication**
 - Hugging Face Account with [Pro](https://hf.co/pro), [Team](https://hf.co/enterprise), or [Enterprise](https://hf.co/enterprise) plan (Jobs require paid plan)
-- Authenticated login: Check with `
+- Authenticated login: Check with `hf_whoami()`
 - **HF_TOKEN for Hub Push** ⚠️ CRITICAL - Training environment is ephemeral, must push to Hub or ALL training results are lost
 - Token must have write permissions and is automatically available as `$HF_TOKEN` in job secrets
 

@@ -389,6 +387,8 @@ hf_jobs("uv", {
 })
 ```
 
+The script is fast, and will usually complete synchronously.
+
 ### Reading Results
 
 The output shows compatibility for each training method:

@@ -490,15 +490,13 @@ See `references/training_patterns.md` for detailed examples including:
 ### Dataset Misformatted
 
 **Fix:**
-1. Validate first
-
-
-
-  - GRPO: `prompt` only
-3. Apply formatting if needed:
-```python
-dataset = dataset.map(lambda x: {"text": f"User: {x['input']}\nBot: {x['output']}"})
+1. Validate first with dataset inspector:
+```bash
+uv run https://huggingface.co/datasets/mcp-tools/skills/raw/main/dataset_inspector.py \
+  --dataset name --split train
 ```
+2. Check output for compatibility markers (✓ READY, ✗ NEEDS MAPPING, ✗ INCOMPATIBLE)
+3. Apply mapping code from inspector output if needed
 
 ### Job Timeout
 

@@ -534,7 +532,7 @@ Add to PEP 723 header:
 - Job times out → Increase timeout, reduce epochs/dataset, use smaller model/LoRA
 - Model not saved to Hub → Check push_to_hub=True, hub_model_id, secrets=HF_TOKEN
 - Out of Memory (OOM) → Reduce batch size, increase gradient accumulation, enable LoRA, use larger GPU
-- Dataset format error →
+- Dataset format error → Validate with dataset inspector (see Dataset Validation section)
 - Import/module errors → Add PEP 723 header with dependencies, verify format
 - Authentication errors → Check `mcp__huggingface__hf_whoami()`, token permissions, secrets parameter
 

@@ -556,10 +554,12 @@ Add to PEP 723 header:
 - `scripts/train_sft_example.py` - Production SFT template
 - `scripts/train_dpo_example.py` - Production DPO template
 - `scripts/train_grpo_example.py` - Production GRPO template
-- `scripts/validate_dataset.py` - Validate dataset format before training
 - `scripts/estimate_cost.py` - Estimate time and cost (offer when appropriate)
 - `scripts/convert_to_gguf.py` - Complete GGUF conversion script
 
+### External Scripts
+- [Dataset Inspector](https://huggingface.co/datasets/mcp-tools/skills/raw/main/dataset_inspector.py) - Validate dataset format before training (use via `uv run` or `hf_jobs`)
+
 ### External Links
 - [TRL Documentation](https://huggingface.co/docs/trl)
 - [TRL Jobs Training Guide](https://huggingface.co/docs/trl/en/jobs_training)

@@ -578,6 +578,6 @@ Add to PEP 723 header:
 5. **Include Trackio** - Use example scripts as templates for real-time monitoring
 6. **Offer cost estimation** - When parameters are known, use `scripts/estimate_cost.py`
 7. **Three approaches available:** TRL Jobs package (easiest), UV scripts (custom, modern), TRL maintained scripts (official examples)
-8. **Use
-9. **Validate dataset format** before training with
+8. **Use hf_doc_fetch/hf_doc_search** for latest TRL documentation
+9. **Validate dataset format** before training with dataset inspector (see Dataset Validation section)
 10. **Choose appropriate hardware** for model size; use LoRA for models >7B
````
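The revised "Dataset Misformatted" steps end with "apply mapping code from inspector output if needed". As a rough illustration only (the real mapping comes from the inspector's report), the sketch below converts a dataset with hypothetical `instruction`/`response` columns into the chat-style `messages` field that SFT training accepts; `your/dataset` is a placeholder repo id.

```python
from datasets import load_dataset

# Placeholder dataset id; substitute the dataset being validated.
dataset = load_dataset("your/dataset", split="train")

def to_messages(example):
    # Hypothetical columns `instruction`/`response`; rename to match the
    # fields the inspector reports for the actual dataset.
    return {
        "messages": [
            {"role": "user", "content": example["instruction"]},
            {"role": "assistant", "content": example["response"]},
        ]
    }

# Replace the raw columns with a single `messages` column.
dataset = dataset.map(to_messages, remove_columns=dataset.column_names)
print(dataset[0]["messages"][0]["role"])  # -> "user"
```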
trl/references/training_methods.md
CHANGED

````diff
@@ -166,8 +166,9 @@ hf_doc_fetch("https://huggingface.co/docs/trl/dataset_formats")
 ```
 
 Or validate your dataset:
-```
-
+```bash
+uv run https://huggingface.co/datasets/mcp-tools/skills/raw/main/dataset_inspector.py \
+  --dataset your/dataset --split train
 ```
 
 ## See Also

@@ -175,4 +176,4 @@ Or validate your dataset:
 - `references/training_patterns.md` - Common training patterns and examples
 - `scripts/train_sft_example.py` - Complete SFT template
 - `scripts/train_dpo_example.py` - Complete DPO template
--
+- [Dataset Inspector](https://huggingface.co/datasets/mcp-tools/skills/raw/main/dataset_inspector.py) - Dataset format validation tool
````
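For a quick offline sanity check that mirrors the DPO portion of the removed `validate_dataset.py` (useful when the hosted inspector cannot be fetched), something like the sketch below is enough; `your/preference-dataset` is a placeholder, and this does not replace the inspector's full compatibility report.

```python
from datasets import load_dataset

# Placeholder repo id for a preference dataset.
ds = load_dataset("your/preference-dataset", split="train")

# DPO training expects these three columns (see the deleted validate_dataset.py).
required = {"prompt", "chosen", "rejected"}
missing = required - set(ds.column_names)

if missing:
    print(f"Missing DPO fields: {sorted(missing)}")
else:
    print("Found prompt/chosen/rejected - dataset looks DPO-shaped")
```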
trl/references/troubleshooting.md
CHANGED

````diff
@@ -103,8 +103,15 @@ trainer = SFTTrainer(
 
 2. **Validate dataset before training:**
 ```bash
-
-
+uv run https://huggingface.co/datasets/mcp-tools/skills/raw/main/dataset_inspector.py \
+  --dataset <dataset-name> --split train
+```
+Or via hf_jobs:
+```python
+hf_jobs("uv", {
+    "script": "https://huggingface.co/datasets/mcp-tools/skills/raw/main/dataset_inspector.py",
+    "script_args": ["--dataset", "dataset-name", "--split", "train"]
+})
 ```
 
 3. **Verify field names:**
````
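Step 3 in the updated troubleshooting text ("Verify field names") takes only a couple of lines; a minimal sketch, with `your/dataset` as a placeholder id:

```python
from datasets import load_dataset

ds = load_dataset("your/dataset", split="train")

# Column names reveal which TRL method the dataset fits, e.g.
# ['messages'] for SFT or ['prompt', 'chosen', 'rejected'] for DPO.
print(ds.column_names)

# Print the first record to confirm the structure of each field.
print(ds[0])
```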
trl/scripts/validate_dataset.py
DELETED

@@ -1,175 +0,0 @@

The entire script was removed:

```python
#!/usr/bin/env python3
# /// script
# dependencies = [
#     "datasets>=2.14.0",
# ]
# ///
"""
Validate dataset format for TRL training.

Usage:
    python validate_dataset.py <dataset_name> <method>

Examples:
    python validate_dataset.py trl-lib/Capybara sft
    python validate_dataset.py Anthropic/hh-rlhf dpo
"""

import sys
from datasets import load_dataset

def validate_sft_dataset(dataset):
    """Validate SFT dataset format."""
    print("🔍 Validating SFT dataset...")

    # Check for common fields
    columns = dataset.column_names
    print(f"📋 Columns: {columns}")

    has_messages = "messages" in columns
    has_text = "text" in columns

    if not (has_messages or has_text):
        print("❌ Dataset must have 'messages' or 'text' field")
        return False

    # Check first example
    example = dataset[0]

    if has_messages:
        messages = example["messages"]
        if not isinstance(messages, list):
            print("❌ 'messages' field must be a list")
            return False

        if len(messages) == 0:
            print("❌ 'messages' field is empty")
            return False

        # Check message format
        msg = messages[0]
        if not isinstance(msg, dict):
            print("❌ Messages must be dictionaries")
            return False

        if "role" not in msg or "content" not in msg:
            print("❌ Messages must have 'role' and 'content' keys")
            return False

        print("✅ Messages format valid")
        print(f"   First message: {msg['role']}: {msg['content'][:50]}...")

    if has_text:
        text = example["text"]
        if not isinstance(text, str):
            print("❌ 'text' field must be a string")
            return False

        if len(text) == 0:
            print("❌ 'text' field is empty")
            return False

        print("✅ Text format valid")
        print(f"   First text: {text[:100]}...")

    return True

def validate_dpo_dataset(dataset):
    """Validate DPO dataset format."""
    print("🔍 Validating DPO dataset...")

    columns = dataset.column_names
    print(f"📋 Columns: {columns}")

    required = ["prompt", "chosen", "rejected"]
    missing = [col for col in required if col not in columns]

    if missing:
        print(f"❌ Missing required fields: {missing}")
        return False

    # Check first example
    example = dataset[0]

    for field in required:
        value = example[field]
        if isinstance(value, str):
            if len(value) == 0:
                print(f"❌ '{field}' field is empty")
                return False
            print(f"✅ '{field}' format valid (string)")
        elif isinstance(value, list):
            if len(value) == 0:
                print(f"❌ '{field}' field is empty")
                return False
            print(f"✅ '{field}' format valid (list of messages)")
        else:
            print(f"❌ '{field}' must be string or list")
            return False

    return True

def validate_kto_dataset(dataset):
    """Validate KTO dataset format."""
    print("🔍 Validating KTO dataset...")

    columns = dataset.column_names
    print(f"📋 Columns: {columns}")

    required = ["prompt", "completion", "label"]
    missing = [col for col in required if col not in columns]

    if missing:
        print(f"❌ Missing required fields: {missing}")
        return False

    # Check first example
    example = dataset[0]

    if not isinstance(example["label"], bool):
        print("❌ 'label' field must be boolean")
        return False

    print("✅ KTO format valid")
    return True

def main():
    if len(sys.argv) != 3:
        print("Usage: python validate_dataset.py <dataset_name> <method>")
        print("Methods: sft, dpo, kto")
        sys.exit(1)

    dataset_name = sys.argv[1]
    method = sys.argv[2].lower()

    print(f"📦 Loading dataset: {dataset_name}")
    try:
        dataset = load_dataset(dataset_name, split="train")
        print(f"✅ Dataset loaded: {len(dataset)} examples")
    except Exception as e:
        print(f"❌ Failed to load dataset: {e}")
        sys.exit(1)

    validators = {
        "sft": validate_sft_dataset,
        "dpo": validate_dpo_dataset,
        "kto": validate_kto_dataset,
    }

    if method not in validators:
        print(f"❌ Unknown method: {method}")
        print(f"Supported methods: {list(validators.keys())}")
        sys.exit(1)

    validator = validators[method]
    valid = validator(dataset)

    if valid:
        print(f"\n✅ Dataset is valid for {method.upper()} training")
        sys.exit(0)
    else:
        print(f"\n❌ Dataset is NOT valid for {method.upper()} training")
        sys.exit(1)

if __name__ == "__main__":
    main()
```