Commit 9d4c050 · Update README
Parent: f1cd245

README.md CHANGED
@@ -69,18 +69,11 @@ optional arguments:
 
 ## Modification
 
-You can add the operators here:
-```julia
-const binops = [plus, mult]
-const unaops = [sin, cos, exp];
-```
-E.g., you can add the function for powers with:
-```julia
-pow(x::Float32, y::Float32)::Float32 = sign(x)*abs(x)^y
-const binops = [plus, mult, pow]
-```
+You can add more operators in `operators.jl`, or use default
+Julia ones. Make sure all operators are defined for scalar `Float32`.
+Then just call the operator in your call to `eureqa`.
 
-You can change the dataset here:
+You can change the dataset in `eureqa.py` here:
 ```julia
 const X = convert(Array{Float32, 2}, randn(100, 5)*2)
 # Here is the function we want to learn (x2^2 + cos(x3) - 5)
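As a quick illustration of the operator workflow described in this hunk, here is a minimal sketch that reuses the `pow` definition from the removed lines; the `const binops`/`const unaops` names are taken from the old README text, and whether these lists now live in `operators.jl` is not shown in the diff:

```julia
# Operators must be defined for scalar Float32 (per the new README text above).
pow(x::Float32, y::Float32)::Float32 = sign(x)*abs(x)^y

# Operator lists as they appeared in the removed README lines;
# `plus` and `mult` are the project's existing binary operators.
const binops = [plus, mult, pow]
const unaops = [sin, cos, exp]
```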
@@ -89,24 +82,6 @@ const y = convert(Array{Float32, 1}, ((cx,)->cx^2).(X[:, 2]) + cos.(X[:, 3]) .-
 by either loading in a dataset, or modifying the definition of `y`.
 (The `.` is used for vectorization of a scalar function.)
 
-### Hyperparameters
-
-Annealing allows each evolutionary cycle to turn down the exploration
-rate over time: at the end (temperature 0), it will only select solutions
-better than existing solutions.
-
-The following parameter, parsimony, sets how much to punish complex solutions:
-```julia
-const parsimony = 0.01
-```
-
-Finally, the following
-determines how much to scale the temperature by (T between 0 and 1):
-```julia
-const alpha = 10.0
-```
-Larger alpha means more exploration.
-
 One can also adjust the relative probabilities of each operation here:
 ```julia
 weights = [8, 1, 1, 1, 0.1, 0.5, 2]
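To make the dataset snippet above self-contained, here is what the full definition plausibly looks like. The `const X` line and the target description come from the diff, but the trailing `- 5` in `y` is filled in from the stated target (the hunk header cuts that line off), so treat this as a sketch rather than the file's exact contents:

```julia
# 100 rows, 5 features, standard normal scaled by 2.
const X = convert(Array{Float32, 2}, randn(100, 5)*2)

# Target to learn: x2^2 + cos(x3) - 5.
# The `.` broadcasts each scalar operation over the columns of X.
const y = convert(Array{Float32, 1},
                  ((cx,) -> cx^2).(X[:, 2]) + cos.(X[:, 3]) .- 5)
```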
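Since the hyperparameter explanation is removed in this commit, it may help to spell out how `parsimony`, `alpha`, and the temperature interact. The function below is only a generic simulated-annealing acceptance rule consistent with the removed description; the name `accepts` and its signature are invented for illustration and are not taken from this code:

```julia
# Generic annealing acceptance: `parsimony` punishes complex solutions,
# `alpha` scales the temperature T in [0, 1], and at T = 0 only strictly
# better solutions are accepted. Illustrative only.
function accepts(old_loss::Float32, new_loss::Float32,
                 old_size::Int, new_size::Int, T::Float32;
                 parsimony::Float32=0.01f0, alpha::Float32=10.0f0)::Bool
    old_score = old_loss + parsimony * old_size
    new_score = new_loss + parsimony * new_size
    new_score < old_score && return true        # always keep improvements
    T <= 0 && return false                      # temperature 0: greedy only
    p = exp(-(new_score - old_score) / (alpha * T))  # larger alpha => more exploration
    return rand() < p
end
```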
@@ -125,11 +100,10 @@ for:
 # TODO
 
 - [ ] Hyperparameter tune
+- [ ] Add interface for either defining an operation to learn, or loading in an arbitrary dataset.
+    - Could just write out the dataset in Julia, or load it.
 - [ ] Add mutation for constant<->variable
-- [ ] Create a Python interface
 - [ ] Create a benchmark for accuracy
-- [ ] Create struct to pass through all hyperparameters, instead of treating as constants
-    - Make sure doesn't affect performance
 - [ ] Use an NN to generate the weights for the probability distribution over operations, conditional on the error and the existing equation, and train it on some randomly-generated equations
 - [ ] Performance:
     - [ ] Use an enum for functions instead of storing them?
@@ -138,6 +112,7 @@ for:
     - Seems like it's necessary right now, but it is still by far the slowest option.
     - [ ] Calculating the loss function - there are duplicate calculations happening.
     - [ ] Declaration of the weights array every iteration
+- [x] Create a Python interface
 - [x] Explicit constant optimization on hall-of-fame
     - Create method to find and return all constants, from left to right
     - Create method to find and set all constants, in same order
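The two sub-items just above describe the constant-optimization helpers only in words; a rough sketch of what "return all constants, left to right" and "set all constants, in same order" could look like follows. The `Node` struct, its field names, and the function names here are hypothetical stand-ins, not the types actually used in this repository:

```julia
# Hypothetical expression-tree node (the real tree type in this repo may differ).
mutable struct Node
    degree::Int                 # 0 = leaf, 1 = unary op, 2 = binary op
    constant::Bool              # leaf holds a constant rather than a feature
    val::Float32                # constant value, if `constant` is true
    l::Union{Node, Nothing}     # left child (degree >= 1)
    r::Union{Node, Nothing}     # right child (degree == 2)
end

# Find and return all constants, from left to right.
function get_constants(tree::Node)::Vector{Float32}
    if tree.degree == 0
        return tree.constant ? [tree.val] : Float32[]
    elseif tree.degree == 1
        return get_constants(tree.l)
    else
        return vcat(get_constants(tree.l), get_constants(tree.r))
    end
end

# Find and set all constants, in the same left-to-right order.
# Returns the index of the next unused value.
function set_constants!(tree::Node, vals::Vector{Float32}, i::Int=1)::Int
    if tree.degree == 0
        if tree.constant
            tree.val = vals[i]
            i += 1
        end
        return i
    end
    i = set_constants!(tree.l, vals, i)
    if tree.degree == 2
        i = set_constants!(tree.r, vals, i)
    end
    return i
end
```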
@@ -148,3 +123,5 @@ for:
 - [x] Optionally (with hyperparameter) migrate the hall of fame, rather than current bests
 - [x] Test performance of reduced precision integers
     - No effect
+- [x] Create struct to pass through all hyperparameters, instead of treating as constants
+    - Make sure doesn't affect performance