Hacker News | mcfry's comments

Something which I haven't been able to fully parse that perhaps someone has better insight into: aren't transformers inherently only capable of inductive reasoning? In order to actually progress to AGI, which is being promised at least as an eventuality, don't models have to be capable of deduction? Wouldn't that mean fundamentally changing the pipeline in some way? And no, tools are not deduction. They are useful patches for the lack of deduction.

Models need to move beyond the domain of parsing existing information into existing ideas.


That sounds like a category mistake to me. A proof assistant or logic-programming system performs deduction, and just strapping one of those to an LLM hasn't gotten us to "AGI".


A proof assistant is a verifier, and a tool, and therefore a patch, so I really fail to see how that could be understood as the LLM itself performing deduction.


I don't see any reason to think that transformers are not capable of deductive reasoning. Stochasticity doesn't rule out that ability. It just means the model might be wrong in its deduction, just like humans are sometimes wrong.


But it can't actually deduce, can it? If 136891438 * 1294538 isn't in the training data, it won't be able to give you a valid answer using the model itself. There's no process. It has to offload that task to a tool, which will then calculate and return.

Further, any offloading needs to be manually defined at some point. You could maybe give it a way to define its own tools, but even then they would still be defined by what has come before.
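The offloading pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any particular vendor's API: the names (`calculator`, `dispatch`, the tool-call dict shape) are invented. The point is that the model's contribution is only the structured request; the arithmetic correctness comes entirely from the deterministic tool.

```python
def calculator(expression: str) -> int:
    # A deterministic tool: real arithmetic, not pattern completion.
    a, op, b = expression.split()
    assert op == "*", "this toy tool only multiplies"
    return int(a) * int(b)

# The registry of manually defined tools the comment refers to.
TOOLS = {"calculator": calculator}

def dispatch(tool_call: dict) -> int:
    # The model's side of the exchange is just structured text, e.g.
    # {"tool": "calculator", "input": "136891438 * 1294538"};
    # the answer is computed entirely outside the model.
    return TOOLS[tool_call["tool"]](tool_call["input"])

result = dispatch({"tool": "calculator", "input": "136891438 * 1294538"})
print(result)
```

Even if the model were allowed to register new entries in `TOOLS`, those definitions would themselves be generated from its training distribution, which is the commenter's point.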


They can induct; they just can't generate new ideas. It's not going to discover a new quark without a human in the loop somewhere.


maybe that's a good thing after all.


This is just... rebranding for instructions and files? lol. Love how the instructions for creating a skill are buried. Marketing go brr.


“Skills are a simple concept with a correspondingly simple format.”

From the Anthropic Engineering blog.

I think Skills will be useful in helping regular AI users and non-technical people fall into better patterns.

Many power users of AI were already doing the things it encourages.
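For readers who haven't seen the format: a skill is, roughly, a folder containing a SKILL.md file whose YAML frontmatter tells the model when to load the rest. A minimal sketch follows; the field names match Anthropic's published examples, but the skill itself is invented for illustration:

```markdown
---
name: commit-messages
description: Helps write conventional commit messages. Use when the user asks for help committing changes.
---

# Commit messages

When asked to write a commit message:
1. Summarize the change in an imperative subject line under 50 characters.
2. Add a body explaining the why, not the what.
```

Which is to say: instructions, in files, with a discovery convention on top.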


What’s next, capabilities? Talents? Hypothalamus.md?


You still have the 'pointy' problem, even with many layers, no? The bottom-most block has to be a triangle.


Was thinking the same thing. It's impossible to take this article's criticisms of AI seriously when it's so obviously over-edited with AI itself.


Maybe the best criticism of it is that it's become synonymous with bad content.


> "Because this is how real learning often arrives: sideways, unscheduled, alive."

And this is how AI slop often arrives: so recognizable it hurts.


It's depressing how common this accusation has become here. Before LLM idiots ruined everything, you know what? People wrote things you wouldn't like, in a way you wouldn't like. Especially on their blogs. But HN is so smart that it can immediately see: tenured Yale professor has no life and is trying to win the message board game with AI slop!


Nobody in this thread accused an LLM of writing the OP. Instead, they are saying that it is dumb and easy in the way a lot of LLM writing is, and that LLMs wouldn't have any problem writing it. This author is being disliked in the traditional way, but with an LLM-assisted proof that actually shows that LLMs can write this crap, and write it well.

The real proposal should be that slate dot com type "Is Food Really Good For You?" or "Hands Are A Completely Unnecessary Part Of The Arm" article authors should be replaced by an LLM.

I like the proliferation of LLM slop, because it involuntarily reveals the emptiness of an enormous proportion of actual human writing. You can't help but see it, even if you don't want to. You end up forced to talk about the author's resume in defense.


>It's impossible to take this article's criticisms of AI seriously when it's so obviously over-edited with AI itself.

Someone in this thread accused an LLM of writing the OP.


Are you attempting to claim that my identification of AI slop is incorrect?

If so, you're almost certainly wrong.


It's so weird how "AI slop" is generally recognized as a problem, as low quality and doing more harm than good, and at the same time AI is generally considered a huge great thing.

How can AI itself be so great if its output is literally AI slop, which is basically garbage?


It's good for people who want to take shortcuts to do their work with minimal effort, good for employers who are waiting for it to mature enough to be able to eliminate most employees, and good for the people selling it. Outside of those bubbles, it's seen as garbage. None of these use cases increases quality or makes the world better.


In my experience, LLMs can be pretty great at coding and math.

When it comes to writing ordinary natural language, the constraints are less rigorous, and LLM output tends to focus on rhetoric, where the goal is to fool people into accepting a conclusion rather than actually supporting the conclusion from a logical perspective.


Has the medium become the message?


When was it not?


As Marshall McLuhan might say “The Medium is the Massage”

