d130c7cce1
Layers don't need to store the forward pass output
16b6072d80
Improved backward_pass of ActivationLayer
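
The repository's ActivationLayer isn't shown in this log, so the following is only a minimal sketch of what such a backward pass typically looks like; the class and field names are hypothetical. The upstream gradient is multiplied element-wise by the activation's derivative evaluated at the cached forward-pass input (chain rule).

```java
import java.util.function.DoubleUnaryOperator;

class ActivationLayerSketch {
    private final DoubleUnaryOperator f;      // activation function
    private final DoubleUnaryOperator fPrime; // its derivative
    private double[] lastInput;               // cached by the forward pass

    ActivationLayerSketch(DoubleUnaryOperator f, DoubleUnaryOperator fPrime) {
        this.f = f;
        this.fPrime = fPrime;
    }

    double[] forward(double[] input) {
        lastInput = input.clone();
        double[] out = new double[input.length];
        for (int i = 0; i < input.length; i++) out[i] = f.applyAsDouble(input[i]);
        return out;
    }

    // Chain rule: dL/dx_i = dL/dy_i * f'(x_i).
    double[] backward(double[] gradOutput) {
        double[] gradInput = new double[gradOutput.length];
        for (int i = 0; i < gradOutput.length; i++) {
            gradInput[i] = gradOutput[i] * fPrime.applyAsDouble(lastInput[i]);
        }
        return gradInput;
    }
}
```

Note that only the input needs caching here, which is consistent with the later commit that stops layers from storing the forward-pass output.
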
3d024259ee
Removed some old code
9c5b74fb26
Extracted drawing of example_sine into separate fn
e24b05f4bc
Don't let Network consume training or test data
4c3f56bf2b
Added env var disclaimer
8762852c8d
Added some more example values in .env.example
d328fc0a7b
Extracted subreddit into a new env variable
7386bae4ed
Added prerequisites in README.md
c5b97ccab3
Removed specific dependency versions
c7154817ee
Added support for choosing whether the step size should decrease for each subsequent epoch
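
How this toggle is exposed isn't visible from the log, so here is a hypothetical sketch: a schedule that keeps the learning rate constant when decay is disabled and otherwise shrinks it per epoch. The 1/(1 + k·epoch) schedule is one common choice, not necessarily the one used in the project.

```java
class StepSizeSchedule {
    private final double initialRate;
    private final boolean decayPerEpoch; // the commit's new on/off choice
    private final double decay;          // assumed decay constant k

    StepSizeSchedule(double initialRate, boolean decayPerEpoch, double decay) {
        this.initialRate = initialRate;
        this.decayPerEpoch = decayPerEpoch;
        this.decay = decay;
    }

    // Constant rate, or one that shrinks as epochs progress.
    double rateForEpoch(int epoch) {
        return decayPerEpoch ? initialRate / (1.0 + decay * epoch) : initialRate;
    }
}
```
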
faa547564c
Added support for choosing weight and bias initializers
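
A plausible shape for such pluggable initializers, with hypothetical names: an initializer is just a supplier of values, so weights and biases can be configured independently per layer.

```java
import java.util.Random;
import java.util.function.DoubleSupplier;

class InitializerSketch {
    static double[][] initMatrix(int rows, int cols, DoubleSupplier init) {
        double[][] m = new double[rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                m[r][c] = init.getAsDouble();
        return m;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        // Gaussian weights, constant-1 biases, chosen independently.
        double[][] weights = initMatrix(3, 4, rng::nextGaussian);
        double[][] biases  = initMatrix(3, 1, () -> 1.0);
    }
}
```
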
e7de925373
Added support for creating matrices with random values
e19dec4af9
Decreased learningRate for each subsequent epoch to reduce the chance of jumping out of a local minimum
74e4d05fa1
Changed project structure
b53328b41c
Added sine example
a3be9daf02
Added LeakyReLU (parameter is currently hardcoded)
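
A minimal sketch of LeakyReLU and its derivative, with the negative-side slope hardcoded to 0.01 as an assumed value, mirroring the commit's note that the parameter is not yet configurable.

```java
class LeakyReLuSketch {
    static final double ALPHA = 0.01; // hardcoded slope for negative inputs

    static double apply(double x) {
        return x > 0 ? x : ALPHA * x;
    }

    // Derivative used in the backward pass.
    static double derivative(double x) {
        return x > 0 ? 1.0 : ALPHA;
    }
}
```

Unlike plain ReLU, the small negative slope keeps a nonzero gradient for negative inputs, which is why the Gaussian weight initialization in the next commit helps it.
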
f1ca0a9e54
Changed initialization of weights to Gaussian-distributed values and biases to 1 (this helps (Leaky)ReLU and doesn't seem to hurt tanh yet)
281b42b0fb
Added XChart library to plot graphs
adfd701817
Added support for creating matrices with Gaussian-distributed values
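
A sketch of such a factory built on java.util.Random#nextGaussian (standard normal, then shifted and scaled); the method name and signature are assumptions, not the project's actual API.

```java
import java.util.Random;

class GaussianMatrixSketch {
    static double[][] gaussian(int rows, int cols, double mean, double stdDev, Random rng) {
        double[][] m = new double[rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                m[r][c] = mean + stdDev * rng.nextGaussian();
        return m;
    }
}
```
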
ffcf9fa975
Fixed typo
e02e79308f
Extracted the gradient descent example into a separate class
db0481e9cf
Added gradient descent for vector-valued functions
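
The commit's exact formulation isn't shown, so this sketch assumes a scalar function of a vector argument and approximates the gradient with central differences; an analytic gradient would work the same way.

```java
import java.util.function.ToDoubleFunction;

class VectorGradientDescentSketch {
    static double[] minimize(ToDoubleFunction<double[]> f, double[] x0,
                             double stepSize, int iterations) {
        double h = 1e-6;
        double[] x = x0.clone();
        for (int it = 0; it < iterations; it++) {
            double[] grad = new double[x.length];
            for (int i = 0; i < x.length; i++) {
                // Central difference: df/dx_i ≈ (f(x + h e_i) - f(x - h e_i)) / 2h
                double orig = x[i];
                x[i] = orig + h; double fPlus  = f.applyAsDouble(x);
                x[i] = orig - h; double fMinus = f.applyAsDouble(x);
                x[i] = orig;
                grad[i] = (fPlus - fMinus) / (2 * h);
            }
            // Step against the gradient.
            for (int i = 0; i < x.length; i++) x[i] -= stepSize * grad[i];
        }
        return x;
    }

    public static void main(String[] args) {
        // Minimize f(x, y) = x^2 + y^2; iterates converge toward (0, 0).
        double[] min = minimize(v -> v[0] * v[0] + v[1] * v[1],
                                new double[] {3, -2}, 0.1, 200);
        System.out.printf("%.4f %.4f%n", min[0], min[1]);
    }
}
```
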
1c66f1b72f
The inputSize of each layer does not need to be specified anymore
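
One way this inference typically works, sketched with hypothetical names: each layer declares only its output size, and the network resolves each inputSize from the previous layer's output (or from the network's input dimension for the first layer) when the network is assembled.

```java
import java.util.List;

class InferredSizeNetworkSketch {
    static class LayerSpec {
        final int outputSize;
        int inputSize = -1; // resolved at build time, no longer user-specified
        LayerSpec(int outputSize) { this.outputSize = outputSize; }
    }

    static void resolveSizes(int networkInputSize, List<LayerSpec> layers) {
        int previous = networkInputSize;
        for (LayerSpec layer : layers) {
            layer.inputSize = previous;
            previous = layer.outputSize;
        }
    }
}
```
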
95501bf4b1
Moved support for adding neurons to FCLayer because BlankLayer is pretty much the same as FCLayer and currently useless
4766ea0ad9
Added additional XOR example with later added neurons
8c82838c54
Added support for adding new neurons
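
A sketch of one way to grow a fully connected layer, under the assumption that weights are stored as a [neurons][inputs] matrix: adding a neuron appends one weight row and one bias entry while leaving existing parameters untouched.

```java
import java.util.Random;

class GrowableLayerSketch {
    double[][] weights; // [neurons][inputs]
    double[] biases;

    GrowableLayerSketch(int neurons, int inputs) {
        weights = new double[neurons][inputs];
        biases = new double[neurons];
    }

    void addNeuron(Random rng) {
        int neurons = weights.length;
        int inputs = weights[0].length;
        double[][] newWeights = new double[neurons + 1][inputs];
        double[] newBiases = new double[neurons + 1];
        // Copy the existing parameters unchanged.
        for (int n = 0; n < neurons; n++) {
            System.arraycopy(weights[n], 0, newWeights[n], 0, inputs);
            newBiases[n] = biases[n];
        }
        // Initialize the new neuron; Gaussian weights and a bias of 1
        // follow the initialization convention from the earlier commits.
        for (int i = 0; i < inputs; i++) newWeights[neurons][i] = rng.nextGaussian();
        newBiases[neurons] = 1.0;
        weights = newWeights;
        biases = newBiases;
    }
}
```

A downstream fully connected layer would also need a matching new input column; the sketch shows only the layer being grown.
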
7738781bb5
Added some utility functions (not yet needed)