
There are two kinds of knowledge at play here.

1. The trained knowledge included in the parameters of the model

2. The context of the conversation

The 'learning' you are experiencing here comes from the conversation context retaining the new facts. Historically, context windows were very short, so as the conversation continued the model would quickly forget those new facts.

More recently, context windows have grown to rather massive lengths, so facts stated earlier in a conversation stay visible to the model for much longer.
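A minimal sketch of the mechanism, not any specific vendor's API: the full message history is resent on every turn, and anything trimmed to fit the context window is effectively forgotten. The call_model stub, the window size, and the crude 4-characters-per-token estimate are all assumptions for illustration.

    MAX_CONTEXT_TOKENS = 8192  # assumed window size, for illustration only

    def count_tokens(text: str) -> int:
        # Crude stand-in for a real tokenizer: roughly 4 characters per token.
        return max(1, len(text) // 4)

    def call_model(messages: list[dict]) -> str:
        # Stand-in for a real model call; a real implementation would send
        # `messages` to an LLM endpoint and return its reply.
        return f"(reply based on {len(messages)} visible messages)"

    def trim_to_window(messages: list[dict]) -> list[dict]:
        # Drop the oldest turns until the history fits the context window.
        # Facts stated only in dropped turns are no longer visible to the model.
        trimmed = list(messages)
        while sum(count_tokens(m["content"]) for m in trimmed) > MAX_CONTEXT_TOKENS:
            trimmed.pop(0)
        return trimmed

    def chat_turn(history: list[dict], user_text: str) -> str:
        # The model's parameters never change; only the resent history does.
        history.append({"role": "user", "content": user_text})
        visible = trim_to_window(history)
        reply = call_model(visible)
        history.append({"role": "assistant", "content": reply})
        return reply

With a short window, new facts scroll out of `visible` after a few turns; with today's much larger windows, they stick around for the whole conversation, which is what feels like learning.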


