Hacker News

What's clear is that the hype has reached such a critical mass that people are comfortable enough to publicly and shamelessly extrapolate extraordinary claims based purely on gut feeling. Both here on HN and from complete laymen elsewhere.

AI-optimist or not, that's just shocking to me.



A while back I saw a comment here that was basically like, we don't need to worry about copyrights because they won't matter once LLMs are able to create an OS from scratch for the cost of tokens. Which is just a batshit insane, poorly thought-through stance to hold.

But then I think about the real, actual planning decisions that were made based on claims that self-driving cars and Hyperloop would be available "soon", decisions that made people materially worse off due to deferred or canceled public transportation infrastructure.


> people are comfortable enough to publicly and shamelessly extrapolate extraordinary claims based purely on gut feeling

What's the problem with that? Why shouldn't people feel comfortable sharing their vision of the future, even if it's just a "gut feeling" vision? We're not going to run out of ink.


I guess I expect higher standards than the kind of confident extrapolation you find in pseudo-science. And "vision of the future" is your euphemistic rewrite. If that's clearly stated, I obviously have no problem with people's fanciful speculation. But these are claims in the format: "X will be replaced in a couple of years, how should we adapt as a society?" etc.


Although, if there is some small possibility that X will be replaced in a couple of years, shouldn't people be able to consider it?


Small possibility some natural disaster ends all life on earth in two years. Large possibility it will within decades.

But hey more fun to pretend the chatbot will turn into Terminator


Well... whether the machines take over or just give us a common enemy, at this stage, the rise of Skynet seems to me like the highest likelihood path for global coordination.


Well, since it's hard to prove a negative, that then applies to basically everything. Look, people can "consider" whatever they want to. But I'll certainly complain about the normalization of making sloppy extraordinary claims on gut feeling. It's like some /r/wallstreetbets user trying to predict the future by the shapes and patterns of stock graphs. It's intellectually lazy and suggests a lack of critical thinking.


I don’t doubt this but it might help to include some examples if you have any close at hand.


From my perspective it's in basically every other comment section of AI-related articles. Here's a particularly spicy one from today: https://news.ycombinator.com/item?id=44646797



