A lot of code is "useless" only in the sense that no one wants to buy it and it will never find its way into an end user product. On the other hand, that same code might have enormous value for education, research, planning, exploration, simulation, testing, and so on. Being able to generate reams of "useless" code is a highly desirable future.
Obviously "useful" doesn't just involve making money. Code that will be used for education and all of these things is clearly not useless.
But let's be honest with ourselves: the sort of useless code the GP meant will never be used for any of that. The code will never leave their personal storage. In that sense it's about as valuable to society at large as the combined exabytes of GenAI smut that people have been filling their drives with by running their 4090s 24/7.
While Rust is excellent, you have to acknowledge that it has issues with compilation time. It also has a steep learning curve (especially around lifetimes). It's much too early to say Rust is the "final" language, especially since AI is driving a huge shift in thinking right now.
I used to think that I would never write C code again, but when I decided recently to build something that would run on ESP32 chips, I realized there wasn't any good reason for me to use Rust yet. ESP-IDF is built on C and I can write C code just fine. C compiles quickly, it's a very simple language on the surface, and as long as you minimize the use of dynamic memory allocation and other pitfalls, it's reliable.
If you're programming for ESP, then Embassy is the way to go in most cases. You don't need to learn much about lifetimes in most of the application code. The "steep learning curve" people refer to is really "things blow up at compile time vs. at runtime." It's easy to write JS or C that compiles and passes all the tests, and then blows up wonderfully once you start actually using it. Rust just forces you to learn, up front, the things you need to know anyway, IMO.
My biggest problem with rust right now is enormous target/ dirs.
> My biggest problem with rust right now is enormous target/ dirs.
We're working on that and it should get better soonish. We're working on shared caches, as well as pruning of old cached builds of dependencies that are unlikely to be reused in a future build.
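In the meantime, a common workaround is to point every project at a single shared target directory so dependency artifacts get reused across builds (the path below is just an example; note that concurrent builds will then contend for the same build lock):

```toml
# ~/.cargo/config.toml -- share one target dir across all projects
[build]
target-dir = "/home/user/.cache/cargo-target"
```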
Several months ago, just for fun, I asked Claude (the web site, not Claude Code) to build a web page with a little animated cannon that shoots at the mouse cursor with a ballistic trajectory. It built the page in seconds, but the aim was incorrect; it always shot too low. I told it the aim was off. It still got it wrong. I prompted it several times to try to correct it, but it never got it right. In fact, the web page started to break and Claude was introducing nasty bugs.
More recently, I tried the same experiment, again with Claude. I used the exact same prompt. This time, the aim was exactly correct. Instead of spending my time trying to correct it, I was able to ask it to add features. I've spent more time writing this comment on HN than I spent optimizing this toy. https://claude.ai/public/artifacts/d7f1c13c-2423-4f03-9fc4-8...
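For what it's worth, the "always shoots too low" failure is the classic bug here: aiming straight at the cursor ignores gravity, which pulls the projectile below the line of sight. A sketch of the standard ballistic correction (coordinates, muzzle speed, and gravity constant are illustrative, not taken from the artifact):

```python
import math

def launch_angle(x, y, v, g=9.81):
    """Angle (radians) to hit a target at (x, y) with muzzle speed v.

    Aiming straight at the target (math.atan2(y, x)) shoots low. The
    standard ballistic solution picks the lower of the two valid angles:

        tan(theta) = (v^2 - sqrt(v^4 - g*(g*x^2 + 2*y*v^2))) / (g*x)
    """
    disc = v**4 - g * (g * x**2 + 2 * y * v**2)
    if disc < 0:
        raise ValueError("target out of range for this muzzle speed")
    return math.atan((v**2 - math.sqrt(disc)) / (g * x))

# With no height difference, the angle must reproduce the textbook
# range formula: x = v^2 * sin(2*theta) / g.
theta = launch_angle(x=10.0, y=0.0, v=20.0)
assert abs(20.0**2 * math.sin(2 * theta) / 9.81 - 10.0) < 1e-6
```

The naive aim `math.atan2(y, x)` is always below `launch_angle`, which matches the symptom of every shot landing short.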
My point is that AI-assisted coding has improved dramatically in the past few months. I don't know whether it can reason deeply about things, but it can certainly imitate a human who reasons deeply. I've never seen any technology improve at this rate.
That sounds like the recommended approach. However, there's one more thing I often do: whenever Claude Code and I complete a task that didn't go well at first, I ask CC what it learned, and then I tell it to write down what it learned for the future. It's hard to believe how much better CC has become since I started doing that. I ask it to write dozens of unit tests and it just does. Nearly perfectly. It's insane.
Please help me understand better, because it feels like part of the problem has already been solved. Specifically, I've been told that the independent journalists that I watch on YouTube Premium receive a portion of my subscription fee. Is that not a form of micropayments? The system seems to work well enough for videos. Isn't there some way to adapt that kind of system to other media?
The solution is called centralization by a middleman that takes a massive cut, e.g. YouTube Premium. Only Google makes real money off that; the content creators rely on sponsors for their own revenue. So does it really work? I would despise a future where we solve microtransactions by giving up control to yet another unnecessary body, one not even at the level of Visa or Mastercard, however much I dislike crypto.
No, that is absolutely 100% not micropayments, as the consumer is not paying per view/article/video whatever. They're paying a fixed fee and are not metered.
Good to know. Now I think I know why micropayments for news media never took off: because people who want to read news media probably don't want to waste mental cycles on keeping track of a micropayments account. They want a set-and-forget solution with a predictable cost. If micropayments can't fit those expectations, then the market probably wants something other than the thing we're calling micropayments.
It goes like the following:
Google/YouTube have a userbase to track accounts for; they go to a bank (a licensed money transmitter, with OFAC/KYC/AML programs implemented). Google gets paid by people looking to advertise, and that money goes into Google's master account. Google's finance system translates views/impressions into money movements to creator accounts hosted at other banks (same deal, OFAC/KYC/AML program in place). The main thing is, every party that actually moves money around operates in such a way that the entire transaction chain is followable. It's not point-to-point, it's hub-and-spoke. The hubs keep track of everything to keep the Osama bin Ladens, Russian oligarchs, or Cuban nationals out of the U.S. financial system.
"Micropayments" have always been something different. We technologists just figured there would be a way we could whip up some accounting software, or a spec, and allow people a way to store and transact without relying on a custodial holder, with all the extra regulation burden. Point is though, government and law enforcement don't want that, because with that, it becomes a great deal more difficult to follow the money, or to get away with things like mandating everyone report money movements over some amount to the tax authority; something easy to do when it's tacked on to the condition of maintaining your license to do business. Every money transmitter being well behaved and integrated with the state maximizes the risk for anyone attempting to utilize the financial system for illegal activity.
Ergo... What you think of as already solved isn't "micropayments". It's traditional finance in the U.S. What we refer to when we say Micropayments, is a way to store value, maintain accounts, and run point to point transactions "blessed" or recognized by the world et al without an intermediary.
Nope! That's the fun part! All the misery from the downsides, none of the upside! But imagine how much worse it could be! /s
There's a reason I'm doing everything possible to avoid going back into finance. I never developed the knack for sitting back quietly and building stupid things that don't work for the purpose everyone claims they're for.
I imagine other languages have similar libraries. I would say static typing in scripting languages has arrived and is here to stay. It's a huge benefit for large code bases.
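Python is probably the clearest example: type hints plus an external checker such as mypy or pyright catch whole classes of bugs before the code ever runs. A minimal sketch:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    age: int

def greeting(user: User) -> str:
    return f"Hello, {user.name}!"

# A checker like mypy flags greeting("bob") at analysis time, long
# before the code runs; the interpreter alone would not complain
# until the bad attribute access actually executed.
print(greeting(User(name="Ada", age=36)))  # Hello, Ada!
```

The annotations cost almost nothing to write, but on a large codebase they turn renames and refactors from archaeology into a mechanical checklist.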
That's for messages. The discussion was about email _addresses_. The former logically makes sense as an object, but the latter can easily be implemented as a raw string, hence the discussion.
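A sketch of that trade-off (all names here are hypothetical, not from any library under discussion): a thin value object gives you one choke point for validation and normalization, which a raw string lacks.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EmailAddress:
    """Thin value object around a raw string (illustrative only).

    Compared to passing bare strings around, this gives a single place
    to hang (very rough) validation and normalization.
    """
    raw: str

    def __post_init__(self):
        local, sep, domain = self.raw.partition("@")
        if not (local and sep and "." in domain):
            raise ValueError(f"not an email address: {self.raw!r}")

    def normalized(self) -> str:
        # Domains are case-insensitive; local parts technically are not,
        # though most systems treat them that way too.
        local, _, domain = self.raw.partition("@")
        return f"{local}@{domain.lower()}"
```

Whether that wrapper pays for itself over a plain `str` is exactly the judgment call the discussion was about.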
Ha ha, well that's a relief. I thought the article was going to say that enabling TCP_NODELAY is causing problems in distributed systems. I am one of those people who just turn on TCP_NODELAY and never look back because it solves problems instantly and the downsides seem minimal. Fortunately, the article is on my side. Just enable TCP_NODELAY if you think it's a good idea. It apparently doesn't break anything in general.
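For anyone who hasn't flipped the switch: it's a one-line socket option (shown here in Python; the same constants exist in C):

```python
import socket

# TCP_NODELAY disables Nagle's algorithm, which otherwise delays small
# writes in the hope of coalescing them into fewer segments. Disabling
# it trades a little bandwidth efficiency for lower per-write latency.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back to confirm it stuck.
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
sock.close()
```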
Erasure coding is an interesting topic for me. I've run some calculations on the theoretical longevity of digital storage. If you assume that today's technology is close to what we'll be using for a long time, then cross-device erasure coding wins, statistically. However, if you factor in the current exponential rate of technological development, simply making lots of copies and hoping for price reductions over the next few years turns out to be a winning strategy, as long as you don't have vendor lock-in. In other words, I think you're making great choices.
I question that math. Erasure coding needs less than half as much space as replication, and imposes pretty small costs itself. Maybe we can say the difference is irrelevant if storage prices will drop 4x over the next five years? But looking at pricing trends right now... that's not likely. Hard drives and SSDs are about the same price they were 5 years ago. The 5 years before that SSDs were seeing good advancements, but hard drive prices only advanced 2x.
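The back-of-the-envelope numbers (shard counts here are illustrative, not anyone's actual deployment):

```python
def overhead(data_shards: int, parity_shards: int) -> float:
    """Storage multiplier for an erasure code with k data + m parity
    shards, which tolerates the loss of any m shards."""
    return (data_shards + parity_shards) / data_shards

# 3-way replication stores every byte three times, surviving 2 losses.
replication = 3.0

# A common Reed-Solomon-style layout: 10 data + 4 parity shards
# survives any 4 losses at only 1.4x the raw data size.
rs_10_4 = overhead(10, 4)
assert rs_10_4 == 1.4

# Replication costs more than 2x the space for less loss tolerance.
assert replication / rs_10_4 > 2
```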
Sure, but LLMs are trying to build the algorithms of the human mind backwards, converging on similar functionality based on just some of the inputs and outputs. That isn't an efficient or a lossless process.
The fact that they can pull it off to this extent was a very surprising finding.
Letters with rounded terminals are especially popular for public signage in a few Asian countries, e.g. Japan and Korea.
That is why Microsoft Windows has included such a rounded font for the Korean script: Gulim. On Windows, if you want to render text in Latin letters with rounded ends, you can use Gulim for the normal text, coupled with Arial Rounded MT Bold for the bold text.
On macOS, there was a rounded font for Japanese, Hiragino Maru Gothic (where the Latin letters are also rounded). I no longer use Apple computers, so I do not know whether the Hiragino fonts remain the fonts provided for Japanese.