The post resonates deeply with me.
I am a health professional in diagnostics, and over the years I have observed two extremes in approaches to solving diagnostic challenges: one extreme relies on "knowing", the other on "thinking/reasoning". The former is usually very fast but not easily explainable, much like pattern recognition. The latter is slow, but can produce a solution from "first principles", possibly one not described before.
Of course it's a spectrum, and the thinking part requires and includes a deep enough "knowing" part. One usually uses both approaches in daily work, but I have seen some people who relied much more on knowing than on thinking/reasoning, sometimes to the extreme (as in refusing to diagnose a condition on their own because they "have not seen this before").
This! One simple argument is that language is NOT a magical reasoning substance in itself, but a communication medium. It is a medium for passing meaning. First there is a meaningful thought (worth sharing), then an agent puts a SIGNIFIER on that thought, then communicates it to the recipient. The medium can be a sentence; it can also be an eyewink, a tail wiggle, or a whistle. The "language" can even be created on the spot, if two subjects grasp the meaning of a signifier by intuition (e.g. I look at an object, you follow my gaze).
So the fallacy of the whole LLM field is the belief that language has some intrinsic meaning, or that if you mix the artifacts of language in some very smart way, meaning will emerge. But that doesn't work if meaning occurs before the word. The text in books does not reason; its authors did. A machine shuffling text fragments does not have a meaningful thought. The engineer who devised the shuffling machine had meaningful thoughts, and the users of the machine have them too, but the machine itself does not.
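To make the "shuffling machine" point concrete, here is a toy sketch (my own illustration, not anything from the article): a Markov-chain text generator that recombines fragments of its training text purely by word-adjacency statistics. All names here (`build_chain`, `generate`) are hypothetical; the point is that the output can look language-like while no meaning exists anywhere in the machine.

```python
import random
from collections import defaultdict

def build_chain(text, n=2):
    """Map each n-word prefix to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - n):
        chain[tuple(words[i:i + n])].append(words[i + n])
    return chain

def generate(chain, length=10):
    """Emit text by repeatedly sampling a word that followed the
    current prefix in the training text -- pure recombination."""
    prefix = random.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog on the mat"
chain = build_chain(corpus)
print(generate(chain))
```

The generator never represents what a "cat" or a "mat" is; any apparent sense in the output is supplied by the reader. An LLM is vastly more sophisticated statistically, but the argument above is that the relationship between symbol manipulation and meaning is of the same kind.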
To put it another way: if there were an artificial system capable of producing meaningful thoughts, it would not be the presence of language that proved it, but communication. Communication requires an agent (as in "agency") and an intent. An LLM has neither.
As to the argument that we ourselves are mere stochastic parrots: of course we can produce word salads or fake mimicry of coherent text, but that is not proof that an LLM works the way our minds do. It is merely a witness to the fact that language is a flexible medium for the meanings behind it; it can just as well be used for cheating, pretending, etc.
> One simple argument is that language is NOT a magical reasoning substance in itself, but a communication medium.
I'm wildly impressed by how many people think language is thinking. My best guess is that they're conflating inner speech with thinking. But if you can't figure out that the words you vocalize aren't an isomorphic representation of the things you're trying to convey, well... it's hard for me to believe you've spent enough time thinking about what it means to think. Miscommunication is quite common, so there's sufficient feedback to learn this without being explicitly taught. Then again, there are people in the world that I fear...
I think there is a possible misconception that evolution allows (or results in) only necessary features, and thus that all phenomena must be a consequence of evolutionary advantage. Yet if genetic changes are random (at least some of them), some features could exist simply because there was no evolutionary pressure to lose them.
It's fascinating that we have simple primitives or notions of analysis, deduction, causation, yet no artificial system where those features of intelligence emerge on their own.
I'm not talking about AGI; I'm doing a proof by contradiction. Yes, current ML models are primitive, but the reasoning attributes you brought up have been reproduced in ML, albeit in a primitive way.
It is still a proof by contradiction; what you say is categorically not true.
The electric field is generated by neurons as they fire action potentials, but it is the aggregate electrical activity, rather than the summative signals from individual neurons, that is used to represent memories.
And if I understood the article correctly, the important idea is that the field is (relatively) stable even as the underlying neurons change; they don't even have to be the same neurons for the same memory field.
The illusion of a stable single entity emerging from the constant flux of billions of individual living units linked through trillions of connections is, in my opinion, the answer to the question of consciousness (and also to the question "can machines ever be conscious", which is, without a doubt: yes).
It might also pose the question: could humanity as a whole, as a global connection of billions of individuals, be thought of as a conscious entity? Well, maybe not: where's the stable field?
Neurons don't know they form part of a consciousness; on the other hand, we can't communicate with our neurons. It could be that an emergent consciousness exists for which we are the neurons, and neither does it know we exist, nor do we know it exists.
I am a practicing pathologist, and I have seen many attempts and publications to use ML in pathology, all of which are lacking in these aspects:
1. ML is trained on simplified sets (preselected ROIs, limited choice of diagnoses);
2. ML is biased by the experts who labeled the training sets;
3. there is no obvious process for learning from failures after initial training;
4. who is responsible in case of an ML error with substantial consequences for the patient?
The first point is especially, for lack of a better word, wishful. In daily practice we are used to accounting for the unexpected: non-representative biopsies, parasites in tissue where a tumor was suspected, foreign-body reactions from previous operations, laboratory accidents (such as swapped paraffin blocks from two patients), and so on (the list is much longer). We deal with it. That ML can discern between the 5 most common diagnoses is fine, but that is a rather narrow problem to solve.
Thanks for the very insightful response; you're right. All those problems are barriers to using ML in practice. So it seems the technology itself actually is the problem.