---
license: apache-2.0
tags:
- tensorflow
- optimizer
- deep-learning
- machine-learning
- adaptive-learning-rate
library_name: tensorflow
---

# NEAT Optimizer

**NEAT (Noise-Enhanced Adaptive Training)** is a novel optimization algorithm for deep learning that combines adaptive learning rates with controlled noise injection to improve convergence and generalization.

## Overview

The NEAT optimizer builds on traditional adaptive methods such as Adam by injecting controlled, gradually decaying noise into the gradient updates; a rough sketch of the idea follows the list below. This approach helps:
- Escape local minima more effectively
- Improve generalization performance
- Achieve faster and more stable convergence
- Reduce overfitting on training data
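
The exact update rule is defined by the implementation; as a rough mental model, the sketch below combines Adam-style moment estimates (Kingma & Ba, 2014) with decaying Gaussian gradient noise in the spirit of Neelakantan et al. (2015). The noise placement and decay schedule here are assumptions for illustration, not the library's documented internals.

```python
import tensorflow as tf

def neat_step(param, grad, m, v, t,
              learning_rate=0.001, beta_1=0.9, beta_2=0.999,
              epsilon=1e-7, noise_scale=0.01, noise_decay=0.99):
    """One illustrative NEAT-style step; `t` is the 1-indexed step count."""
    # Assumed noise model: zero-mean Gaussian perturbation of the gradient,
    # shrinking geometrically with the step count.
    noisy_grad = grad + noise_scale * (noise_decay ** t) * tf.random.normal(tf.shape(grad))
    m = beta_1 * m + (1.0 - beta_1) * noisy_grad              # first-moment EMA
    v = beta_2 * v + (1.0 - beta_2) * tf.square(noisy_grad)   # second-moment EMA
    m_hat = m / (1.0 - beta_1 ** t)                           # Adam bias correction
    v_hat = v / (1.0 - beta_2 ** t)
    new_param = param - learning_rate * m_hat / (tf.sqrt(v_hat) + epsilon)
    return new_param, m, v
```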

## Installation

### From PyPI (recommended)
```bash
pip install neat-optimizer
```

### From Source
```bash
git clone https://github.com/yourusername/neat-optimizer.git
cd neat-optimizer
pip install -e .
```

## Quick Start

```python
import tensorflow as tf
from neat_optimizer import NEATOptimizer

# Example data: MNIST digits, flattened to 784 features and scaled to [0, 1]
(x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
x_val = x_val.reshape(-1, 784).astype('float32') / 255.0

# Create your model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Use NEAT optimizer
optimizer = NEATOptimizer(
    learning_rate=0.001,
    noise_scale=0.01,
    beta_1=0.9,
    beta_2=0.999
)

# Compile and train
model.compile(
    optimizer=optimizer,
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val))
```

## Key Features

- **Adaptive Learning Rates**: Automatically adjusts learning rates per parameter
- **Noise Injection**: Controlled stochastic perturbations for better exploration
- **TensorFlow Integration**: Drop-in replacement for standard TensorFlow optimizers (see the training-loop sketch after this list)
- **Hyperparameter Flexibility**: Customizable noise schedules and adaptation rates
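
Because it is intended as a drop-in replacement, the optimizer should also slot into a custom training loop, assuming `NEATOptimizer` subclasses `tf.keras.optimizers.Optimizer` as standard Keras optimizers do:

```python
import tensorflow as tf
from neat_optimizer import NEATOptimizer

optimizer = NEATOptimizer(learning_rate=0.001, noise_scale=0.01)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

@tf.function
def train_step(model, x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```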

## Parameters

- `learning_rate` (float, default=0.001): Initial learning rate
- `noise_scale` (float, default=0.01): Scale of noise injection
- `beta_1` (float, default=0.9): Exponential decay rate for first moment estimates
- `beta_2` (float, default=0.999): Exponential decay rate for second moment estimates
- `epsilon` (float, default=1e-7): Small constant for numerical stability
- `noise_decay` (float, default=0.99): Decay rate for noise scale over time
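
For example, a run that benefits from more exploration early in training might start with a larger `noise_scale` and let it fade faster via `noise_decay` (the values below are illustrative, not tuned recommendations):

```python
from neat_optimizer import NEATOptimizer

# Heavier early exploration: larger initial noise that decays faster.
# Illustrative values only; the unit of decay (per step vs. per epoch)
# depends on the implementation.
optimizer = NEATOptimizer(
    learning_rate=1e-3,
    noise_scale=0.05,
    noise_decay=0.95,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
)
```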

## Requirements

- Python >= 3.7
- TensorFlow >= 2.4.0
- NumPy >= 1.19.0

## Citation

If you use the NEAT optimizer in your research, please cite:

```bibtex
@software{neat_optimizer,
  title={NEAT: Noise-Enhanced Adaptive Training Optimizer},
  author={Your Name},
  year={2025},
  url={https://github.com/yourusername/neat-optimizer}
}
```

## References

- Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- Neelakantan, A., et al. (2015). Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807.

## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Support

For issues, questions, or feature requests, please open an issue on GitHub.