
differentiating between a puppy and a husky against a snowy background without being trained on millions of images?


I wouldn't say humans are so different. You could argue we've been trained on about one quadrillion bytes of visual data by the time we're 4 years old: https://x.com/ylecun/status/1750614681209983231
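The linked estimate is easy to sanity-check with back-of-envelope arithmetic. The figures below (roughly 20 MB/s of optic-nerve bandwidth and 16 waking hours per day) are assumptions for the sketch, not measurements:

```python
# Back-of-envelope check of the "about one quadrillion bytes by age 4" figure.
# Both constants below are assumptions for illustration.

BYTES_PER_SECOND = 20e6       # assumed optic-nerve bandwidth, ~20 MB/s
WAKING_HOURS_PER_DAY = 16     # assumed waking time per day
YEARS = 4

seconds_awake = WAKING_HOURS_PER_DAY * 3600 * 365 * YEARS
total_bytes = BYTES_PER_SECOND * seconds_awake

print(f"{total_bytes:.2e} bytes")  # ~1.7e+15, i.e. on the order of a quadrillion
```

Under those assumptions the total lands around 10^15 bytes, consistent with the tweet's claim.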


As a counterpoint: a child gets only pseudo-random training from parents and the environment. Not sure what price tag to put on that, but in comparison, how many billions have LLMs cost, and to reach what level of competency exactly?


GPT-4 is also really bad right now at comprehending "new" software libraries (even when I ask it to scrape the web).


Why does it matter how it was trained?


Because that tells us how it approaches novel problems. If a system needs tons of data to solve a novel problem, it is bad at solving novel problems, while humans can get up to speed in a new domain with much less training and thus solve problems the LLM can't.

Thus an AGI needs to be able to learn something new from a similar amount of data as a human, or else it isn't an AGI, since it won't be anywhere close to as good as a human at novel tasks.



