Hacker News | new | past | comments | ask | show | jobs | submit | soerxpso's comments

He doesn't include the best solution in the 'what actually works' section: give your LLM the same level of permissions that you would give a human you'd just hired into the same role. The examples given, tricking the customer support LLM into texting all users or into transferring money, are not things you would ever give a human customer support agent the tools to do. At some businesses that employ humans, you have to demonstrate good judgement for months before they even let you touch the keys to the case that has the PS5 games in it.

I haven't encountered a support person so locked down that they couldn't do anything impactful. Even simple things like booking or canceling appointments have financial consequences.

This is just counting PyPI packages. Why would I go to the effort of publishing a library or CLI tool that took me ten minutes to create? Especially in an environment where open-source contributions from strangers are useless. If anything, I'd expect useful AI to reduce the number of new PyPI packages.

I don't see how this is more degenerate than betting on roulette at a casino. Prediction markets usually offer more efficient odds than casinos because the house profits from trading volume rather than from the spread, so they're essentially a way to bet on a game of pure chance with a much smaller average loss than was previously available. If people want to bet on coinflips, it seems objectively better that they can do so somewhere that fleeces them for 1% of their bet rather than 5%+ of it.

For sporting events, for example, the alternative to prediction markets 5-10 years ago was a website where you bet against the house directly; the house would usually take around a 15-20% spread, and it would ban you and keep your account funds if it decided you were winning too much. Now you can bet on the same events on prediction market sites with around a 1-5% spread, and the house doesn't care how much you win (so there's actually an argument that you're now playing a game of skill, unlike the old format, where you definitely weren't, since you'd be banned for being too skilled).
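The gap between those venues can be made concrete. A back-of-the-envelope sketch, using the rough house-take figures from the comment above (a 15-20% old-style sportsbook spread, the ~5.26% double-zero roulette edge, a ~1% prediction-market fee; none of these are measured data): the expected loss on a pure-chance bet is just stake times house take.

```python
def expected_loss(stake: float, house_take: float) -> float:
    # For a game of pure chance, the average amount you lose per bet
    # is simply the stake multiplied by the house's cut.
    return stake * house_take

# Rough, illustrative house takes -- not quotes from any specific venue.
venues = {
    "old-style sportsbook (17.5% spread)": 0.175,
    "double-zero roulette (2/38 edge)": 2 / 38,
    "prediction market (1% fee)": 0.01,
}

for venue, take in venues.items():
    print(f"{venue}: ${expected_loss(100, take):.2f} lost per $100 wagered")
```

On these assumptions, the same $100 coinflip costs you about $17.50 at the old sportsbook, about $5.26 at the roulette table, and about $1 on a prediction market.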


The embedded programs can be connected to the other weights during training, in whatever way the training process finds useful. It doesn't just have to be arithmetic calculation. You can put any hard-coded algorithm in there, make the weights for that algorithm static, and let the training process figure out how to connect the other trillion weights to it.
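A toy illustration of the idea (my own sketch, not any real model's training code): a hard-coded "embedded program" (here, exact addition) sits inside the model with no trainable parameters, gradients flow through it, and optimization only ever updates the weight that learns how to use its output.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_adder(a: float, b: float) -> float:
    # Static algorithm baked into the network; never touched by training.
    return a + b

w_out = 0.0  # the one trainable weight: how to wire up the adder's output
lr = 0.05

for _ in range(400):
    x0, x1 = rng.normal(), rng.normal()
    y_true = x0 + x1                       # task: reproduce the sum
    y_pred = w_out * frozen_adder(x0, x1)  # model routes through the adder
    err = y_pred - y_true
    # Gradient flows *through* the frozen module but never changes it.
    w_out -= lr * err * frozen_adder(x0, x1)

print(round(w_out, 3))  # training learns to rely on the adder: w_out -> ~1.0
```

The same pattern scales up: in a real framework you'd freeze the embedded component's parameters (e.g. by excluding them from the optimizer) and let the surrounding trainable weights learn to route through it.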


I don't think anyone is using LLMs for those conversations; a lot of those replies are bots. There's a market for Reddit accounts with a solid, human-looking reply/post history, to be used for astroturf marketing, so some organizations run bots to grow such accounts. There are probably also just people who overuse "Honestly? [statement]" sentences. I'd spoken to such people in person before LLMs existed.


> borrowing ordinary mannerisms of speech that aren't necessarily egregious

That's how a trope starts. When a minority of writers are using a particular pattern, it's personalized style. When a majority of writers in a genre adopt the same personalized style, it's a trope.

We find AI tropes especially annoying because three frontier LLMs have lately been producing a sizable chunk of the text we read (maybe even a majority, for some people). It would be just as annoying if a clique of three humans were producing most of the text we read; we'd start to find their personal styles overdone. Even before LLMs, that happened in some "slop" fiction genres, where a particularly prolific author would churn out dozens of novels per year in one style (often via ghostwriters, but still with a single voice and a repetitive plot pattern).


"hacker" news is owned and operated by a large and wealthy venture capital firm


We all know. Perhaps they should give up the name and call it vcnews, since so many people get triggered when hacker topics are discussed here.


No it's not. It's based on how much he "made" in the first half of 2020, mostly from gains in Amazon's stock, in a period specifically selected to inflate the number. If you actually wanted to display how much Bezos has made since the user opened the page, there are many public APIs for live stock data, and you could show the actual live gain or loss. But that wouldn't really support the point you're trying to make, since on some days he loses more money than most people ever see.
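A live counter along those lines is straightforward to sketch. Everything here is hypothetical: the quote URL is a placeholder (not a real API), and SHARES_HELD is an illustrative stand-in, not Bezos's actual Amazon stake.

```python
import json
import urllib.request

SHARES_HELD = 900_000_000  # placeholder share count, for illustration only

def fetch_price(symbol: str) -> float:
    # Placeholder endpoint -- swap in whichever real quote API you prefer.
    with urllib.request.urlopen(f"https://example.invalid/quote/{symbol}") as r:
        return json.load(r)["price"]

def live_gain(price_at_page_open: float, price_now: float,
              shares: int = SHARES_HELD) -> float:
    # Paper gain since the user opened the page. Can be (very) negative
    # on down days, which is exactly the point the fixed-rate page hides.
    return shares * (price_now - price_at_page_open)
```

With these made-up numbers, a $0.50 move in either direction on 900M shares swings the counter by $450M, up or down.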


Also, anyone who owns Amazon stock in their investment accounts, or who owns the S&P 500, is making and losing money whenever Jeff Bezos does, for exactly the same reason.


For most tasks it's not necessary. For hairy tasks, though, it's often worth switching and paying 10x the cost to complete the task with 10x less intervention.


He kept 80%; the other 20% is split among 8 different VCs, so he's still firmly in control. There's also value in using other people's money instead of your own: it may make him less emotionally risk-averse in how he runs the company.

