Hacker News: 7777777phil's comments

Last year I dug into Pozsar's Bretton Woods III stuff and wrote two posts on it (0), and this is exactly what he was getting at. Pozsar's whole thesis is that after the West froze Russian reserves in 2022, the global monetary system started shifting from inside money (Treasuries, dollar reserves) to outside money (commodities, gold). Gold going from $1,900 to $5,000+ since then kind of makes his point.

A waiver for oil already floating at sea is the regime admitting you can sanction the financial system but you can't sanction the tanker. The physical commodity doesn't care about the nominal layer's rules. Every carve-out like this reinforces it imo.

(0) https://philippdubach.com/posts/pozsars-bretton-woods-iii-th...


You certainly can sanction the "tanker". But you need to take police / military action, which is decidedly tougher and riskier.

Worth noting this came out of the Anthropic Fellows programme, not Google. Google released the weights but not the institutional support around them.

Open-weight is not open-source in any way that matters.


I particularly like this framework: how hard is it to describe the task vs. how hard is it to check the output.

What's more, this depends heavily on one's linguistic intelligence. Some people can simply convert their thoughts into written language much more effectively than others. They have a natural advantage when it comes to writing prompts that actually work, whereas others might struggle with the results their prompts produce. It may therefore be crucial to assess the usefulness of generative models relative to oneself, not to some average user.

Most of what's getting "automated" was never really work; it was headcount that existed because nobody had a good reason to cut it yet. AI gave the reason.

The thermal decoupling is nice: CPU and GPU don't have to throttle for each other under sustained load anymore.

I was under the impression that Meta's Facebook is essentially already Moltbook (run by bots), so the horizontal integration makes sense.

It is an open question just how much of "social media" has been similar to moltbook for many years. Or maybe Zuckerberg being an android himself just finally found his home.

On a meta note, I like that Tao is publishing his failed attempts alongside the successful one. Two prior recordings didn't work out, one because the machine crashed mid-run, the other because he forgot to screen-share properly, and he just tells you that upfront. Most people would quietly delete those and only show the clean take.

For me the interesting part of this video isn't really the AI though. It's how Tao breaks the work apart. First attempt was "just do the whole thing." Ran 45 minutes, crashed the machine, burned through the token budget, produced nothing. Second time he decomposed it into steps and got it done in 25 minutes. By this third attempt he'd written out a whole recipe beforehand:

>I decided to write up here a step-by-step recipe for what we're going to do

Imo that recipe is where the actual value is. He's figured out which subtasks he can hand off and which ones need his eyes on them. At one point he's manually fixing a proof step while Claude skeletonizes the next lemma in the background. That's not "AI did my homework," that's just two workers on separate parts of the same job.
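For anyone who hasn't watched the video: "skeletonizing a lemma" in Lean roughly means writing out the statement and the proof structure while leaving the subgoals as `sorry` placeholders, so a human or an agent can fill each one in independently. A minimal made-up sketch (the lemma and its statement are my own illustration, Lean 4 / Mathlib syntax):

    -- Skeleton: statement and proof outline are fixed,
    -- each `sorry` is a subgoal that can be filled in separately.
    lemma sum_sq_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 := by
      have h1 : 0 ≤ a ^ 2 := sorry
      have h2 : 0 ≤ b ^ 2 := sorry
      exact add_nonneg h1 h2

The point of the structure is exactly the parallelism described above: the skeleton compiles (with warnings), so both workers can make progress at once without stepping on each other.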

>it didn't mind that I was editing something else. It just went ahead and implemented these edits independently, which is great

The thing that surprised me: the agent handled the high-level formalization fine but choked on the mechanical low-level steps.

>it actually struggles a lot with the lowest level steps of the proof actually, which is surprising because I would have thought that would have been the easiest part

He also said something that tbh I keep coming back to when I look at how firms adopt these tools:

>you do need to keep doing that. Otherwise, if you rely too much on these tools and something goes wrong, you may have no idea what to do

I see this constantly. The people getting the most out of coding agents (in my world it's usually quant or strategy work) are the ones who stay close enough to catch when things go sideways. The ones who fully check out just get quietly worse at their jobs in ways that don't show up until something breaks. There's no magic automation dial you set to the right number and forget about. You have to keep adjusting it task by task, and honestly that judgment call is the hard part, not the tooling.


Happening in the private sector too. Something like 10% of US companies are actually using AI productively, and 42% killed their GenAI pilots in 2024.

After hiQ v. LinkedIn gutted the CFAA for public web data, Google needed a new theory. If courts accept that CAPTCHAs are "technological protection measures" under copyright law, every website with a bot check just gained federal enforcement power against scraping. Built by a company that literally indexed the web for a living.

Don't necessarily agree with this article, but I think the framing is interesting. Most analysis still treats this as a degradation in progress, something the next election reverses.

If you instead model it as a transition to oligarchy with democratic aesthetics, the prescriptions flip. You stop trying to fix the process and start building parallel institutions. I would assume that some European allies are reaching the same conclusion right now, just from the outside looking in.

