There's nothing weird about so-called hallucination ("confabulation" would be a better term); it's the expected behavior of these models. If your use case can't tolerate it, it's not a good use case for these models.
And yes, if you thought this means these models are being commonly misapplied, you'd be correct. This will continue until the bubble bursts.