
I think it depends on how you define intelligence, but _I_ mostly agree with Francois Collet's stance that intelligence is the ability to find novel solutions and to adapt to new challenges. He feels that memorisation is an important facet, but that it is not enough for true intelligence, and that these systems excel at Type 2 thinking but have huge gaps at Type 1.

The alternative I'm considering is that it might just be a dataset problem: feeding these LLMs on words alone means they lack a huge facet of embodied existence that is needed to get context.

I am a nobody though, so who knows....



I agree, LLMs are interesting to me only to the extent that they are doing more than memorisation.

They do seem to do generalisation, to at least some degree.

If it were literal memorisation, we already have internet search for that.


And who says that LLMs won't be able to adapt to new conditions?

Right now they are limited by their context window, but that's probably a temporary limitation.


*Chollet



