Hacker News

If we compare the WANN architecture evolved for MNIST to a simple single-layer NN (e.g. a linear classifier, where inputs are connected directly to the outputs with different weights), it seems to me that the evolved architecture uses activations and skip connections to simulate weights. Given enough variety in the arrangement of activation functions and the number of hops, each pixel position ends up being weighted in a unique but consistent way. I wonder if the authors tried to establish a correspondence between the weights in a linear classifier and the path modulations from each input pixel to the output. Put another way: is there a weight arrangement for a linear classifier that is equivalent to the WANN with a single shared weight?
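To make the question concrete, here is a minimal sketch (not from the paper) of what that correspondence would look like in the special case where every activation is the identity. Then the network is exactly linear in its inputs, each input's effective weight is the sum over all input-to-output paths of w raised to the path length, and probing with one-hot inputs recovers the equivalent linear classifier. The tiny three-input topology below is invented for illustration; with the nonlinear activations WANNs actually use, the equivalence would at best hold locally.

```python
import numpy as np

w = 0.5  # the single shared weight

def wann_like(x, w):
    """Hypothetical WANN-style graph: three inputs, one hidden node
    (identity activation), one output, and a skip connection for x[2]."""
    h = w * (x[0] + x[1])      # hidden node: two hops from inputs 0 and 1
    return w * h + w * x[2]    # x[2] skips straight to the output: one hop

# Recover the effective per-input weights by probing with one-hot inputs.
basis = np.eye(3)
effective = np.array([wann_like(e, w) for e in basis])

# Analytic path sums: inputs 0 and 1 traverse 2 hops (w**2), input 2 one hop (w).
expected = np.array([w**2, w**2, w])
assert np.allclose(effective, expected)

# The probed weights act as an equivalent linear classifier.
x = np.array([1.0, 2.0, 3.0])
assert np.isclose(wann_like(x, w), effective @ x)
print(effective)  # [0.25 0.25 0.5 ]
```

The one-hot probe only yields a globally valid weight vector because the toy network is linear; for a real WANN the same probe would give a local linearization that changes with the input.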

