What is "this" exactly? Is it a well-known author or website? Or otherwise a reference that one should be familiar with? It looks like a random blog to me... with an opinion declared as fact that's quite easy to refute.
Because it is terribly low-effort. People are here for interesting and insightful discussions with other humans. If they were interested in unverified LLM output… they would ask an LLM?
Who cares if it is low effort? I got lots of upvotes for my link to Claude about this, and pncnmnp seems happy. The downvoted comment from ChatGPT was maybe a bit spammy?
It's a weird thing to wonder after so many people expressed their dislike of the upthread low-effort comment with a downvote (and another then voiced a more explicit opinion). The point is that a reader may want to know that the text they're reading is something a human took the time to write themselves. That fact is what makes it valuable.
> pncnmnp seems happy
They just haven't commented. There is no reason to attribute this specific motive to that fact.
I don't think it's rude. It saves me from having to come up with my own prompt and wade through the back-and-forth to get useful insight from the LLMs, and it also saves me from spending my own tokens.
Also, I quite love it when people clearly demarcate which part of their content came from an LLM and specify which model.
The little citation carries a huge amount of useful information.
The folks who don't like AI should appreciate it too, since it lets them easily filter out that content.