Hacker News

Did you read the paper? The authors admit it is only narrowly learning and cannot transfer its knowledge to unknown areas. From the paper:

"we do not expect our language model to generate proteins that belong to a completely different distribution or domain"

So, no, I do not think it displays a fundamental understanding.

>What would be evidence to you?

We've already discussed this ad nauseam. Like all science, there is no definitive answer. However, when the data shows that something like proximity to training data is predictive of performance, it seems more like evidence of learned heuristics than of underlying principles.

Now, I'm open to the idea that humans just have a deeper level of heuristics rather than principled understanding. If that's the case, it's a difference of degree rather than kind. But I don't think that's a fruitful discussion because it may not be testable/provable, so I would classify it as philosophy more than anything else, and certainly not something that warrants the confidence you're speaking with.


