NoProp: Training Neural Networks without Back-propagation or Forward-propagation

Abstract: The canonical deep learning approach requires computing a gradient term at each layer by back-propagating the error signal from the output towards each learnable parameter. Given the stacked structure of neural networks, where each layer builds on the representation of the layer below, this approach leads to hierarchical representations: more abstract features live on the top layers of the model, while features on lower layers are expected to be...
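To make the mechanism the abstract describes concrete, here is a minimal NumPy sketch (illustrative, not from the paper) of the per-layer gradient chain that standard back-propagation computes for a two-layer MLP. This is the procedure NoProp is positioned against: the error signal originates at the output and is pushed down through each layer to reach every learnable parameter. All names and dimensions are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 8))      # batch of inputs
y = rng.normal(size=(32, 1))      # regression targets

W1 = rng.normal(size=(8, 16)) * 0.1   # bottom-layer weights
W2 = rng.normal(size=(16, 1)) * 0.1   # top-layer weights

# Forward pass: each layer builds on the representation of the layer below.
h_pre = x @ W1
h = np.maximum(h_pre, 0.0)        # ReLU hidden representation
y_hat = h @ W2
loss = 0.5 * np.mean((y_hat - y) ** 2)

# Backward pass: the error signal starts at the output...
d_yhat = (y_hat - y) / len(x)     # dLoss/dy_hat (output dim is 1 here)
# ...and is propagated towards each learnable parameter in turn.
dW2 = h.T @ d_yhat                # gradient for the top layer
d_h = d_yhat @ W2.T               # error signal sent to the layer below
d_hpre = d_h * (h_pre > 0.0)      # back through the ReLU
dW1 = x.T @ d_hpre                # gradient for the bottom layer

# One gradient step per parameter, as in ordinary training.
lr = 0.1
W1 -= lr * dW1
W2 -= lr * dW2

Note how the gradient for W1 depends on quantities flowing back through W2; this sequential dependence across layers is exactly what a back-propagation-free method would have to avoid.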

Read more at arxiv.org
