Garlef's comments | Hacker News

“Everything has already been said, but not yet by everyone.” — Karl Valentin

---

Personally, I'm still very interested in the topic.

But since the tech is moving very fast, the discussion is distributed very unevenly: there are lots of interesting things to say, but many takes that were relevant six months ago are still being digested by most.


> “Everything has already been said, but not yet by everyone.” — Karl Valentin

Never heard this and I like it very much. This is just an off-topic comment to say thanks!


This is a great saying, thank you for sharing it. Out of curiosity, do you have any links to interesting AI articles you've read recently? Maybe I'll change my mind.

https://www.youtube.com/watch?v=QWzLPn164w0

I don't like the hype language used by the channel host one bit, so this is not something I'd expect to sway someone tired of the hype. But his perspective is sometimes interesting (if you filter out the BS): he seems to get that the real challenge is not LLM quality but organisational integration (tooling, harnesses, data access, etc.), and in that department there's sometimes good input.


Thank you for the rec and review, I’ll take a look!

Isn't this a common architecture in CQRS systems?

Commands go to specific microservices with local state persisted in a small DB; queries go to a global aggregation system.
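That split can be sketched in a few lines. This is a minimal, hypothetical illustration (all names like `OrderService` and `GlobalReadModel` are made up for the example): the write side keeps service-local state, and the read side maintains a denormalized projection built from events.

```python
# Minimal CQRS sketch (hypothetical names): commands mutate a
# service-local store; queries hit a separate read model that
# aggregates events published by the services.

class GlobalReadModel:
    """Query side: a denormalized projection across services."""
    def __init__(self):
        self._view = {}

    def apply(self, event):
        # Update the projection from a published event.
        if event["type"] == "OrderCreated":
            self._view[event["order_id"]] = event["item"]

    def query_all_orders(self):
        return dict(self._view)

class OrderService:
    """Command side: owns its own small write-side store."""
    def __init__(self, read_model):
        self._orders = {}          # service-local "DB"
        self._read_model = read_model

    def handle_create_order(self, order_id, item):
        # Command: validated and persisted locally...
        self._orders[order_id] = {"item": item, "status": "created"}
        # ...then an event is published so the read side can update.
        self._read_model.apply({"type": "OrderCreated",
                                "order_id": order_id, "item": item})

read_model = GlobalReadModel()
svc = OrderService(read_model)
svc.handle_create_order("o1", "widget")
print(read_model.query_all_orders())  # {'o1': 'widget'}
```

In a real system the event would go over a bus rather than a direct call, and the read model would be eventually consistent.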


> AI has also changed the dynamics around this. Splitting things into smaller components now has a dev advantage because the AI program better with smaller scope

This is not AI-specific and nothing new; it's also precisely why microservices are a good solution to some problems: they reduce a team's cognitive load (if architected properly; caveats apply, see team topologies, etc.).


But: Will it also let him win at Settlers of Catan?

No problem: Just build a subterranean boat and launch a few nukes close to the core to restart rotation.

"look ma, I've made the AI fail!"

Maybe they should implement a graph based trust system:

You need your favourite academic gatekeeper (= thesis advisor) to vouch for you in order to be allowed to upload.

Then AI slop gets flagged and the shame spreads through the graph. And flaggings need to have evidence attached that can again be flagged.
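A toy sketch of the idea, purely hypothetical (the `DECAY` factor and the single-endorser assumption are mine, not arXiv's): each uploader has an endorser, and a flag applies a penalty that decays as it propagates up the vouch chain.

```python
# Hypothetical vouch-graph sketch: flagging a user spreads a
# decaying "shame" penalty back up their chain of endorsers.

DECAY = 0.5  # assumed: each hop up the graph halves the penalty

class TrustGraph:
    def __init__(self):
        self.endorser = {}   # user -> who vouched for them
        self.shame = {}      # user -> accumulated penalty

    def vouch(self, endorser, newcomer):
        self.endorser[newcomer] = endorser

    def flag(self, user, penalty=1.0):
        # Walk up the endorsement chain, applying a decayed penalty
        # at each step until it becomes negligible.
        while user is not None and penalty > 1e-3:
            self.shame[user] = self.shame.get(user, 0.0) + penalty
            user = self.endorser.get(user)
            penalty *= DECAY

g = TrustGraph()
g.vouch("advisor", "student")
g.flag("student")
print(g.shame)  # {'student': 1.0, 'advisor': 0.5}
```

Flagging the flaggers (with evidence attached) would just be another edge type and penalty walk on the same graph.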


The endorsement system already works along that line: https://info.arxiv.org/help/endorsement.html

It's probably not perfect but in practice, it seems to have been enough to get rid of the worst crackpotty spam.


They already had a basic form of this for a while [1]

> arXiv requires that users be endorsed before submitting their first paper to arXiv or a new category.

[1] https://info.arxiv.org/help/endorsement.html


I've often thought that similar trust systems would work well in social media, web search, etc., but I've never seen it implemented in a meaningful way. I wonder what I'm missing.

Lobsters has this I think. But it also means I've never posted there.


Science reduced to people with a phd?

not a bad first order filter.

can you think of a better one?


The whole point of the scientific method was that we could ignore the source of the information, and were instead expected to focus on the value of the information based on supporting evidence (data).

If we go back to "Only people that have been inducted into the community can publish science" we're effectively saying that only the high priests can accrue knowledge.

I say this knowing full well that we have a massive problem in science on sorting the wheat from the chaff, have had so for a VERY long time, and AI is flooding the zone (thank you political commentator I despise) with absolute dross.


> anthropomorphism

I think it's a topic worthy of discussion. But I would probably not leave it to Searle...


serious question:

> no change in survival rates

> less series A

would this not imply that companies got more efficient at using their seed funding?

(But then again: The real dip in series A funding starts in 2018; so we might still see a dip in 10y survivability starting 2028)


I think restricting this discussion to LLMs, as is often done, misses the point: LLMs + harnesses can actually learn.

That's why I think the term "system" as used in the paper is much better.


> LLMs + harnesses can actually learn.

No. No, they don't

