Hacker News | past | comments | ask | show | jobs | submit | vannevar's comments

>"Ads are the only way we've found that actually implements a form of microtransactions... paying a tenth of a penny for a sliver of attention."

Ads were the path of least resistance, and once entrenched, they effectively prevented any alternative from emerging. Now that we've seen how advertising scales, and how it's ruined our mediascape, we're finally looking at alternatives. Not dissimilar to how we reacted to pollution, once we saw it at scale.


All of the controversy over the heat shield is obscuring the much bigger safety issue: Artemis has had only a single unmanned test flight. By contrast, the Saturn launch system had seven successful unmanned tests before being trusted with a crew, including two unmanned flights of the complete Saturn V stack. And even then, three astronauts were lost during ground testing of the crew capsule due to a critical design flaw. Artemis's closest modern counterpart, the SpaceX Starship, has had 11 test flights, several of which resulted in loss of the vehicle. There is no reason to believe that Artemis has a significantly higher reliability rate than Starship or Saturn V. Even without the heat shield controversy, this is the most dangerous mission NASA has launched since the first flight of the Space Shuttle.

> Artemis's closest modern counterpart, the SpaceX Starship, has had 11 test flights, several of which resulted in loss of the vehicle.

I don't think you can compare the two. Starship's risks are so high that failure is almost the expected outcome; it's a trial-and-error process. Comparing Starship and Artemis is apples and oranges with respect to how the programs approach risk tolerance.


Until Artemis actually flies a comparable number of missions, any advantage in reliability is pure speculation. Which is not a good way to approach crewed spaceflight. I don't think the two programs are as different as you think, prospectively: both take great care to ensure that their vehicles don't fail. Starships may be cheaper than the SLS, but they're still very expensive. SpaceX doesn't go into a flight expecting to lose a vehicle. The difference in culture is more in the reaction to failure. As a private company, SpaceX moves very quickly in the wake of failure, whereas NASA has in recent decades become much more cautious once a failure has occurred. And while you say SpaceX is more tolerant of risk, I would note that they've never flown a crew on a launch vehicle that had only one previous unmanned launch. Falcon 9 had 85 unmanned launches before there was a crew aboard. And they expect to launch 100 unmanned Starships before they fly one with a crew.

Now which program seems the more risk tolerant?


SpaceX was clear about their policy of flight testing earlier in the development phase. They expected to lose rockets; I do not believe those losses should count against the launcher.

They do not expect to lose a given vehicle. They are tolerant of losing some vehicles over time, because they understand that every flight may be affected by unknown unknowns. There is certainly no evidence that they expect to lose crewed vehicles, or that they are tolerant of crew loss.

I think the high loss rate for Starship can largely be traced back to the choice of using steel for the vehicle, which drastically reduces margins across the system. You could certainly say that they had a higher expectation of failure because they made that choice. In that sense, I understand your point. But to the best of their ability, they try to fly every vehicle successfully.


Same here. Coupled with configuring the agent's email account at the provider to only be able to send and receive to my email address.

In a one-shot scenario, I agree. But LLMs make iteration much faster. So the comparison is not really between an AI and an experienced dev coding by hand, it's between the dev iterating with an LLM and the dev iterating by hand. And the former can produce high-quality code much faster than the latter.

The question is, what happens when you have a middling dev iterating with an LLM? In that case, the drop in quality is probably non-linear: it can get pretty bad, pretty fast.


The article's central premise is based on a false assumption, which is that people taking UBI will be idle. There is no significant evidence to support that claim. The scant evidence we have so far on UBI is largely limited to relatively small numbers of people in poverty given small amounts of money insufficient to provide any opportunity for savings, and even that evidence is at best mixed. On the other hand, there are many people who receive an inheritance large enough that they never need to work again, yet the vast majority of those people are not idle but actively create new businesses and take on other projects or hobbies.

And the reason that our infrastructure is crumbling is not some social problem, nor some intrinsic "undervaluing of the future," but something simpler and more pragmatic: our taxation has not kept up with our necessary spending, particularly taxation of the wealthy as wealth has concentrated at the top. Everyone's talking about abundance as if it is something that is yet to come, but we've had rapidly increasing abundance for 50 years, as technology has made the individual worker more productive. And the vast majority of that increase in productivity has been turned into increased wealth for the top 10%. UBI would be the first reversal of that trend, requiring a massive tax on the productivity of AI and robotic infrastructure that in all likelihood will be 90% owned by the wealthiest top 10%. Naturally, they are concerned about that prospect, and so we see articles like this one.


> The article's central premise is based on a false assumption, which is that people taking UBI will be idle. There is no significant evidence to support that claim.

Absolutely true. Even meta-analyses of all UBI experiments to date - encompassing tens of thousands of adults - show an increase in labour participation, not a decrease.

And if formal, capitalistic, profit-based jobs are no longer available, what barrier is there to creating social jobs that need doing? Just because the Parasite Class cannot extract obscene amounts of wealth from those jobs doesn't mean they don't need doing. It just means there is no profit angle in doing them.

If I had no worries about my needs, I would love to work on open-source projects. Failing that, it would be ecosystem restoration or bioremediation. All jobs that can be free of government and capitalism, but which desperately need bodies to yeet at the issues at hand.


There is much work to be done in society that needs doing.

Some of it can be new jobs, and some of it can be done by giving existing jobs fewer hours, freeing people up to do more meaningful things.


Being less efficient is also a problem: if the majority becomes less efficient (lower productivity), the overall wealth and economic growth of that society will decline significantly.

We do have evidence that when money is not a problem, we become less efficient. For example, monopolies or state-run companies.

Just the first result from google: https://www.mdpi.com/2227-7390/11/3/657

Another problem with UBI is that, if we want UBI to cover the basic costs of living, the expenses are actually quite big: UBI would essentially need to cover things like rent, food, and health services. Otherwise we will still have plenty of homeless people with UBI.


Exactly - when UBI is tried people tend to be busy.

The super rich are often idle and project that onto everyone else; we should not take the idle rich's word as gospel.


> The article's central premise is based on a false assumption, which is that people taking UBI will be idle. There is no significant evidence to support that claim.

You really think there would not be a massive increase in the number of couch potatoes, watching Netflix and doomscrolling TikTok all day long? Where do they make such optimists? It's almost as if this very website has a strong selection bias, congregating people with higher-than-average drive who would never, who just can't imagine not having it. And even if they won't be technically idle, you can bet your ass that the total supply of labor would drop like a rock, and many jobs that are generally beneficial to society but not glamorous wouldn't get done. You also completely ignore the massive problem of the shift in society's collective psychology with regard to work, which the article did mention. Quote:

The problems are significant, however. First, all existing pilots are small in scale, temporary in duration, and limited to populations already experiencing poverty or precarity. None of them test the psychology of a society in which nobody is economically compelled to contribute. Temporary income relief and permanent unconditional income are fundamentally different phenomena — the first is a cushion, the second is a permanent reorientation of the relationship between individuals and economic necessity. The pilots tell us nothing useful about the second.

Currently we collectively derive personal worth from work, and society applies significant pressure on individuals, "incentivizing" them to work even if they can't have a dream job, increasing the aggregate amount of work done. We just don't know and can't really imagine what it's like to live in a world where you are entitled to money for existing, no strings attached, pretty much from cradle to grave. Imagine being a kid who grows up in such a world with no real responsibilities, playing vidya all day long, who knows that once he formally reaches adulthood, he can just continue doing nothing. The model of family life is falling apart as we speak, so why bother chasing it? Just lower your expectations and desires, and you are set for life.


>We just don't know and can't really imagine what it's like to live in a world where you are entitled to money for existing, no strings attached, pretty much from cradle to grave.

Sure we can. As I noted, wealthy people live in this world already. And we don't see all of them turning into couch potatoes once they have passive income equal to UBI. Sure, there's a human tendency to enjoy leisure. But there's also a human tendency to enjoy work. And a human tendency to project negative attributes onto others we don't know. ;-)


And if they actually constructed the deal that way, it would be fine. But by essentially creating a sham sale where they return the cash back to the customer in return for equity, Nvidia can book revenue and claim non-existent cash flow. The key is that the sale would not have happened without the corresponding equity deal. Nvidia had no discretion to use that cash any other way, so the "cash flow" in that case is illusory.


I don't see the issue. Goods valued at that amount changed hands. Why shouldn't bartering be booked as cash flow? The regulator is going to require you to value it for them regardless.


Wall Street places a value on sales, on the assumption that the sale means a customer had the money and the desire to buy the company's goods. In this case, OpenAI had the desire but not the money: Nvidia basically gave them the money to buy the product. So that "sale" should be devalued in the market. What if Nvidia paid more for the stock than the chips were worth? Now they're essentially paying people to buy their product and hiding the bribe in an equity deal by overvaluing the customer. The market sees the big growing sales number and buys Nvidia stock on the assumption that the growth is organic. It also sees Nvidia putting a big valuation on OpenAI, driving up that company's value as well. At some point, OpenAI ends up with more chips than it needs and Nvidia ends up holding a bunch of overvalued OpenAI stock instead of cash. And both stocks eventually crash as a result.
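To make the round-trip concrete, here's a toy sketch with made-up numbers (none of these are real figures from the deal; they just illustrate the mechanics):

```python
# Toy illustration of a round-trip deal. All numbers are hypothetical,
# not Nvidia's or OpenAI's actual figures.
chip_sale = 100          # cash "revenue" booked from selling chips
equity_purchase = 100    # cash paid back to the customer for equity

# As reported: the sale lands in operating cash flow, while the
# equity purchase sits separately in investing activities.
operating_cash_flow = chip_sale
investing_cash_flow = -equity_purchase

# The net cash position is unchanged: the cash was round-tripped.
net_cash = operating_cash_flow + investing_cash_flow
print(net_cash)  # 0

# A straight chips-for-stock barter would book no operating cash at all,
# which is why the two structures read very differently to the market.
barter_operating_cash_flow = 0
```

The headline revenue and operating cash flow look the same as an organic sale, even though no net cash ever arrived.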

Does that clarify the situation?


Not really. Or rather I think we both agree and disagree. Dysfunction is always possible (that's why we have regulation) and if you want to make a case that what happened between OpenAI and Nvidia ought to be against the rules that could certainly make for an interesting discussion.

However it's not at all uncommon for large sales agreements to come with additional strings attached. On its face I don't see how this example is any different.

If my company wanted to barter with another company to exchange equity for infrastructure how would you expect that to be reported? Did this situation differ from that expectation?

> What if Nvidia paid more for the stock than the chips were worth?

I'm not sure. It's an interesting question. Were the unit prices (ie chip and stock quantities) made public?


>If my company wanted to barter with another company to exchange equity for infrastructure how would you expect that to be reported? Did this situation differ from that expectation?

As I mentioned, I would have no problem if that's what happened. But it isn't. Nvidia recorded the cash as ordinary income. They did NOT record the stock as income. Cash has a clear value; stock does not. You keep reducing the transaction to its effective outcome, which is not where the problem lies, as I outlined above.


I haven't reduced it; rather, I've asked you why you think it shouldn't be reduced.

So your objection is the way in which they did the accounting? This is not an area I'm particularly familiar with. Does the way they went about it fall outside of the norm for the US? Or is your objection a more general one regarding the US regulations on the matter?

I understand you object, but I don't quite follow why. When it comes to manipulating the OpenAI valuation, couldn't Nvidia have intentionally overpaid for the stock in cash? Wouldn't that have provided the exact same quantity of capital, the exact same investment, the exact same valuation?

Maybe it would be different if their GPUs weren't in such demand but they are. Even in such a case, they could have structured the transaction as a series of smaller independent ones. Same ultimate outcome.


>So your objection is the way in which they did the accounting?

Yes, the accounting is the problem. As I said from the outset, if they actually just traded chips for stock, it would not be an issue.


In the US, since the 1970s virtually all technologically-driven productivity gains have been captured by the top 10% (who own 90% of all public equity). (See, e.g., https://www.epi.org/productivity-pay-gap/ .)

So no, little or none of the AI productivity gains will go to workers, barring significant changes in public policy like universal basic income and the massive tax increases necessary to implement it.


What productivity gains should go to the workers? I'm happy to collect my current salary + the usual raises to date, in exchange for transitioning my job to increasing amounts of AI use if required. If I lived like people in the 1970s I could have retired after 10-20 years in the industry, easily. Sooner if I prioritized money. My standard of living has grown immensely to the current level which I'm happy to maintain. I formerly shared a 3 bedroom leaky house with 5 dudes by the train tracks.

Do the productivity gains from stack overflow "belong" to the workers? Do the productivity gains from better google search or wikipedia? If I want to charge by the product instead of hour - I can leave and start consulting or contracting.

How should those productivity gains go to employees? I own equity in this company, and most others I have formerly worked for. I have free access to public equity markets, tax subsidized 401k and IRA plans which have historically enriched 100s of millions.

I've been through layoffs where I didn't have to take a day of reduced salary or unemployment because of severances, nor did I experience a single day of health-care coverage lapse, thanks to the same, my spouse's insurance, or the public marketplace.

My situation is not unique among software engineers in the USA. The public market, cheap/subsidized retirement plans, and equity bonuses have effectively seized a good portion of the means of production for the workers, allowing consistent enrichment from the expansion of industry. And corporate America, for all its evils, has produced the entirety of the social safety net that I've used since I graduated, and that while changing jobs basically whenever I wanted (plus one time I didn't).

Guys, I don't see it yet. Maybe I'm dumb, but it's not here yet and there is a case to be optimistic that everyone is ignoring. Prepare for the worst, but once prepared, banish it from your mind.


I'd highly recommend working top down, getting it to outline a sane architecture before it starts coding. Then if one of the modules starts getting fouled up, start with a clean sheet context (for that module) incorporating any cautions or lessons learned from the bad experience. LLMs are not yet good at working and reworking the same code, for the reasons you outline. But they are pretty good at a "Groundhog Day" approach of going through the implementation process over and over until they get it right.


+1 if you are vibe coding projects from scratch. if the architecture you specify doesn't make sense, the llm will start struggling; the only way out of its misery is mocking tests. the good thing is that a complete rewrite with proper architecture and lessons learned is now totally affordable.


I think the best thing about LLMs is how incredibly easy they make it to build one to throw away.

I've definitely built the same thing a few times, getting incrementally better designs each time.


>The companies are giving average lay people access to a personal PhD to help with whatever they are working on, for $20/mo, and those companies are committing an evil cardinal sin?

The social media companies gave their services for free, and now it turns out they've committed quite a few sins. None of the AI companies are doing this out of the goodness of their hearts, nor will they be satisfied with subscription revenue. If they see opportunities to make more money by manipulating the population, rest assured they will take those opportunities.


The enormous difference between vibe-coding and 3D printing is that vibe-coding is improving exponentially at a rapid rate, while 3D printing is improving linearly at a slow rate. Very little that we say about vibe-coding today is likely to still be valid six months from now, whereas a 3D printer sold 5 years from now will probably be very similar to one sold today.

