RMS might say yes. Here's a passage from the linked page describing other systems as having knowledge and understanding:
> There are systems which use machine learning to recognize specific important patterns in data. Their output can reflect real knowledge (even if not with perfect accuracy)—for instance, whether an image of tissue from an organism shows a certain medical condition, whether an insect is a bee-eating Asian hornet, whether a toddler may be at risk of becoming autistic, or how well a certain art work matches some artist's style and habits. Scientists validate the system by comparing its judgment against experimental tests. That justifies referring to these systems as “artificial intelligence.”
Thanks -- that's not at all clear in this post (nor is it clear from the link text that its target would include a more complete description of his position).
I've updated my comment in response to this. Basically: his key test seems to be "Is someone validating the output, trying to steer it towards ground truth?" And since the answer for ChatGPT and Claude is clearly "yes", both count as AI with semantic understanding by his own definition.