---
license: apache-2.0
tags:
- tensorflow
- optimizer
- deep-learning
- machine-learning
- adaptive-learning-rate
library_name: tensorflow
---

# NEAT Optimizer

**NEAT (Noise-Enhanced Adaptive Training)** is a novel optimization algorithm for deep learning that combines adaptive learning rates with controlled noise injection to improve convergence and generalization.

## Overview

The NEAT optimizer enhances traditional adaptive optimization methods by injecting controlled noise into the gradient updates; a rough sketch of the idea follows the list below. This approach helps:
- Escape local minima more effectively
- Improve generalization performance
- Achieve faster and more stable convergence
- Reduce overfitting on training data

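The package defines the exact update rule; as an illustration only, the sketch below assumes an Adam-style adaptive step plus an additive Gaussian noise term whose scale decays by `noise_decay` each step. The function `neat_step` and its moment buffers are hypothetical names for this sketch, not part of the package API.

```python
import numpy as np

def neat_step(w, grad, m, v, t, lr=0.001, noise_scale=0.01,
              beta_1=0.9, beta_2=0.999, epsilon=1e-7, noise_decay=0.99):
    """One illustrative NEAT-style update (conceptual sketch, not the actual implementation)."""
    # Adam-style first and second moment estimates
    m = beta_1 * m + (1 - beta_1) * grad
    v = beta_2 * v + (1 - beta_2) * grad ** 2
    # Bias correction (t is the 1-based step count)
    m_hat = m / (1 - beta_1 ** t)
    v_hat = v / (1 - beta_2 ** t)
    # Noise scale assumed to decay geometrically over steps
    sigma_t = noise_scale * noise_decay ** t
    noise = np.random.normal(0.0, sigma_t, size=w.shape)
    # Adaptive step plus exploration noise
    w = w - lr * m_hat / (np.sqrt(v_hat) + epsilon) + noise
    return w, m, v
```
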
## Installation

### From PyPI (recommended)
```bash
pip install neat-optimizer
```

### From Source
```bash
git clone https://github.com/yourusername/neat-optimizer.git
cd neat-optimizer
pip install -e .
```

## Quick Start

```python
import tensorflow as tf
from neat_optimizer import NEATOptimizer

# Example data: MNIST digits, scaled to [0, 1]
(x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
x_train, x_val = x_train / 255.0, x_val / 255.0

# Create your model
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Use NEAT optimizer
optimizer = NEATOptimizer(
    learning_rate=0.001,
    noise_scale=0.01,
    beta_1=0.9,
    beta_2=0.999
)

# Compile and train
model.compile(
    optimizer=optimizer,
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val))
```

## Key Features

- **Adaptive Learning Rates**: Automatically adjusts learning rates per parameter
- **Noise Injection**: Controlled stochastic perturbations for better exploration
- **TensorFlow Integration**: Drop-in replacement for standard TensorFlow optimizers (see the custom-loop sketch below)
- **Hyperparameter Flexibility**: Customizable noise schedules and adaptation rates

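Because NEAT is a drop-in replacement for the built-in Keras optimizers, it should also work in a custom training loop. A minimal sketch, assuming `NEATOptimizer` follows the standard `tf.keras.optimizers.Optimizer` interface (`model`, `x`, and `y` come from your own pipeline):

```python
import tensorflow as tf
from neat_optimizer import NEATOptimizer

optimizer = NEATOptimizer(learning_rate=0.001, noise_scale=0.01)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

@tf.function
def train_step(model, x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    # Same call signature as any built-in Keras optimizer
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```
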
## Parameters

- `learning_rate` (float, default=0.001): Initial learning rate
- `noise_scale` (float, default=0.01): Scale of noise injection
- `beta_1` (float, default=0.9): Exponential decay rate for first moment estimates
- `beta_2` (float, default=0.999): Exponential decay rate for second moment estimates
- `epsilon` (float, default=1e-7): Small constant for numerical stability
- `noise_decay` (float, default=0.99): Decay rate for noise scale over time

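Assuming `noise_decay` is applied geometrically per step (check the implementation for the exact schedule), the effective noise scale after `t` steps is `noise_scale * noise_decay ** t`, so the defaults anneal the noise away fairly quickly:

```python
# Assumed schedule: sigma_t = noise_scale * noise_decay ** t
noise_scale, noise_decay = 0.01, 0.99
for t in (0, 100, 1000):
    print(t, noise_scale * noise_decay ** t)
# 0    0.01
# 100  ~3.7e-03
# 1000 ~4.3e-07
```
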
## Requirements

- Python >= 3.7
- TensorFlow >= 2.4.0
- NumPy >= 1.19.0

## Citation

If you use the NEAT optimizer in your research, please cite:

```bibtex
@software{neat_optimizer,
  title={NEAT: Noise-Enhanced Adaptive Training Optimizer},
  author={Your Name},
  year={2025},
  url={https://github.com/yourusername/neat-optimizer}
}
```

## References

- Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- Neelakantan, A., et al. (2015). Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807.

## License

This project is licensed under the Apache License 2.0; see the LICENSE file for details.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Support

For issues, questions, or feature requests, please open an issue on GitHub.