"Contrary to expectation, the synaptic strengths in the pallium remained about the same regardless of whether the fish learned anything. Instead, in the fish that learned, the synapses were pruned from some areas of the pallium — producing an effect “like cutting a bonsai tree,” Fraser said — and replanted in others."
This is very counterintuitive. So there are existing neural connections (formed somehow previously...) and new memories form by pruning these connections? Crazy.
I don't think it is counterintuitive (maybe if you hold a strong prior opinion, it is). There's a growing body of evidence that brains, to some degree, learn by removing connections and, in some cases, adding connections elsewhere.
I think part of learning is that we have some general idea about something, but it turns out to be inaccurate, so we need to specialize it (and get rid of the old connection). I'm speculating though.
I'm a bit confused by the "replanted in others" part though. I'd assume that, overall, there is a net loss of synapses when learning (and hence learning becomes more difficult as we age; there just isn't enough entropy left to prune away).
I'm not sure it is quite comparable, as the lottery ticket hypothesis postulates that for every neural net with n connections/weights, there is one with n-1 connections that retains its accuracy. So it doesn't learn by pruning; it just becomes more efficient by being smaller. That's at least how I understood it.
Your reading would imply that a neural net with zero connections could compete with GPT-3. I quote:
"The Lottery Ticket Hypothesis: A randomly-initialized, dense neural network contains a subnetwork that is initialized such that — when trained in isolation — it can match the test accuracy of the original network after training for at most the same number of iterations." - Frankle & Carbin (2019, p. 2)
Yeah, that's why it is merely a hypothesis. I think the Lottery Ticket Hypothesis goes further and states that this subnetwork also has a subnetwork which matches test accuracy, up to some degree. Otherwise it wouldn't be interesting. Of course there must be a limit...
The interesting bit is that you don't need to tweak the subnet, only remove the other noisy connections. If you remove capacity, at some point test accuracy has to go down. The hypothesis is about the idea that maybe we don't "train" the weights, we "find" them, not about unlimited nesting (as far as I understand it; I'm fairly confident, but happy to be proven wrong).
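To make the "find, don't train" idea concrete, here is a minimal NumPy sketch of the lottery-ticket procedure from the paper: train a dense net, keep only the largest-magnitude weights, then rewind the survivors to their original random initialization. The `magnitude_prune` helper and the toy "training" step are my own illustrative stand-ins, not code from the paper.

```python
import numpy as np

def magnitude_prune(weights, keep_fraction):
    """Binary mask keeping the largest-magnitude fraction of weights."""
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_fraction))
    threshold = np.sort(flat)[::-1][k - 1]  # k-th largest magnitude
    return (np.abs(weights) >= threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w_init = rng.standard_normal((4, 4))   # save the random initialization
w_trained = w_init * 1.5               # stand-in for actually training the dense net
mask = magnitude_prune(w_trained, keep_fraction=0.25)
w_ticket = w_init * mask               # rewind surviving weights to their init values
```

The key move is the last line: the subnetwork's weights are reset to `w_init`, not kept at their trained values. The claim is that retraining this sparse "ticket" from those original values can match the dense network's accuracy.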
That's counterintuitive (at least for you) because in modern society it's customary to think that people are born tabula rasa and then "it's all a social construct". Perhaps we are indeed born with innate preferences, biases, and default sexual orientations...
That's consistent with the way children react to languages and faces. When we are born, we are open to all languages. As we grow up, we remain receptive only to the language structures to which we were exposed previously. Of course it's always possible to learn a new language, but it is easier with previous exposure.
I recall the account of a blind adult who got surgery that allowed for vision; their description of learning to process visual information gave some idea of what infants experience: first it is all brightness or darkness, then very vague shapes and primary colors, eventually honing in on discrete objects and hues.
It makes more sense that, without any context, our brain can't just "add" a color to its repertoire, but must interpret some data within the context of existing 'thought'. Some things may be literally unimaginable without a framework of existing memories and experiences, or only possible through some recursive process "Oh so 'a' is 'like' xyz, that makes sense".
I wonder if this means that learning useless information is extra counterproductive, in that it may make it difficult to learn other things because you've pruned some neural connections?