> I've found it tough to talk about being a solo bootstrapper though. People don't seem all that interested in it [...]
I think you've hit the nail on the head (solo bootstrapper here): People are not interested because A) it's not about them, it's about you, B) it sounds somewhat scary, C) it sounds completely detached from their reality of corporate jobs, and finally D) it's scary because your life might be "better" than theirs.
I don't tell people about my work anymore, and almost nobody ever asks, except for other entrepreneurs/bootstrappers.
As a European: nice, but why is it so BIG? How is that monster called "mid-size"? Why would one want to haul so many tons of extra metal around just to transport one's behind?
As a European (Dutch): why are our roads so small that normal-sized cars look like "monsters"? I have often thought that Europe will have a problem with roads in the future, as they are just too small, and expanding them or making them safer is unlikely to ever happen and is often impossible. Not everyone can get by with a small little hatchback; some of us need a big pickup (I own a building company). And for the people who don't need one commercially: have you ever considered that people have hobbies, and some hobbies need a fair amount of space in a car? Or that families with multiple kids doing sports need the space for all the gear?
I am worried that in the future more and more European cities will just address the problem with a disguised "we are making the cities car-free, and thus greener and safer". What that means for the average citizen out there is that any building-related work will become more expensive, as people will charge more to get over the hassle of getting into those cities.
I'm glad the roads are small. Smaller roads cause slower driving (well researched). As for the cities, it is unsustainable to use cars as the primary mode of transportation within cities. We do want to make cities greener and largely car-free, because cars for individuals simply do not make any sense in a city. We still need roads for deliveries and occasional transportation of heavy or large goods, but transporting yourself within a city should rarely be done in a car. See Tokyo for an example of a large metropolis which functions well and which would completely break down if everybody tried to use a car to get somewhere.
This argument does not work when society is built around roads and using them. I live in a village in the Netherlands, and not using a car is impossible with a few kids. The city is different.
Being able to and wanting to (or it being efficient) are different things. I "can" travel around a city using public transport with 3 kids and all their sporting equipment; do I want to? No. Would any sane person want to? No.
See, this is the narrow-minded view of so many Europeans. "Well, just go to a closer sports club" is not an answer to the problem that thousands of people experience with small cars and small roads.
Many more thousands have no issues with small cars or going to a closer sports club.
If the roads in cities are wide enough for literal trucks, then they're wide enough for your car. Widening roads and making cars bigger makes pretty much everyone less safe.
Don't get me wrong, you're free to live in the boonies and drive 400km to your sports club, but don't call me narrow minded because I can load up 5 people in my VW passat and drive 500km for a 10 day vacation, or because I prefer not to get bulldozed by a car with a higher hood than me while walking to my local sports club.
Some people need more space, but the road problem is something that can't be retrofitted without demolishing buildings.
As a Dutch person, surely you've seen that Amsterdam decided that the city's car problem in the 70s was unfixable and decided to switch to cycling. The building and delivery problem is real, but I don't think even a 10 euro/day charge for work vehicles would register given how expensive building work is already.
Land in cities is very expensive. Why should vehicles get to use more of it for free?
The R2 isn't even really a mid-size SUV. It is closer to a RAV4, which is considered a "compact SUV" or "crossover" [1]. Mid-size SUVs like the Honda Pilot tend to be even larger.
As a European, I’ve been gently looking forward to Rivian’s R3 for years now. I like the design and it looks much more like a machine that will suit Europe.
Because it's the middle size between the two reasonable car sizes that are being made today: gigantic and fucking enormous.
If you aren't buying at least the gigantic car, then you don't care about your kids safety and that's bad. How are you going to protect them from my gigantic SUV?
What? Walking?? No of course that's illegal! You want to navigate the street without a massive steel bubble? Are you nuts!
>As a European: nice, but why is it so BIG? How is that monster called "mid-size"?
Because it's a nominal size more than a descriptive one. Midsize is the second biggest size with only "full size" stuff being bigger.
It made more sense 30-40 years ago, when people who remembered the domestic automakers mostly making only a full-size, a midsize, and a compact car were still alive and of prime car-buying age.
There are pros and cons to running the browser on your own machine.
For example, with remote browsers you get to give your swarm of agents unlimited, always-on browsers that they can use concurrently without being bottlenecked by your device's resources.
I think we tend to default to thinking in terms of one-agent, one-browser scenarios because we anthropomorphize them a lot, but really there is no ceiling to how parallel these workflows can become once you unlock autonomous behavior.
I appreciate that, but for the audience here on HN, I'm fairly certain we understand the trade-offs, or potentially have more compute resources available to us than you might expect the general user to have.
Offer up the locally hosted option and it’ll be more widely adopted by those who actually want to use it as opposed to tinker.
I know this may not fit into your “product vision”, however.
Congratulations, you did very well! And I mean it, as someone who has experience building hardware (also in China), you came out pretty much unscathed and did extremely well for a first timer. Good job!
As best as I can tell, there were less than 10 minutes between the last successful request I made and when the downtime was added to their status page - and I'm not particularly crazy with my usage or anything, so the gap could have been even less than that.
Honestly, that seems okay to me. Certainly better than what AWS usually does.
It appeared there like 5 minutes ago; it was down for at least 20 before that.
That's 20 minutes of millions of people visiting the status page, seeing green, and then spending that time resetting their context, looking at their system and network configs, etc.
It's not a huge deal, but for $200/month it'd be nice if, after the first two thousand 500s went out (which I imagine takes less than 10 seconds), the status page automatically went orange.
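To be clear, I have no idea how their status page actually works internally; the threshold and window below are invented numbers just to sketch the idea of flipping status automatically once a burst of 500s crosses a line:

```python
import time
from collections import deque


def make_status_monitor(threshold=2000, window_seconds=10):
    """Flip status to orange once `threshold` 500s land within the window.

    Both parameters are hypothetical, purely for illustration.
    """
    errors = deque()  # timestamps of recent 500 responses
    state = {"status": "green"}

    def record_error(now=None):
        now = time.monotonic() if now is None else now
        errors.append(now)
        # Drop errors that have fallen out of the sliding window.
        while errors and now - errors[0] > window_seconds:
            errors.popleft()
        if len(errors) >= threshold:
            state["status"] = "orange"
        return state["status"]

    return record_error


# Tiny demo with a low threshold so the flip is visible.
record = make_status_monitor(threshold=3, window_seconds=10)
record(now=0.0)
record(now=1.0)
status = record(now=2.0)  # third 500 within the window
print(status)
```

The point is just that this requires no human in the loop, which is what the 20-minute gap suggests they currently have.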
> I don't care if I sound old and salty when I say this: I miss phpBB
I'll one-up this: I miss USENET.
I never understood how anyone could like phpBB; compared to USENET news readers, it was a chaotic mess. But USENET, that was great for discussing things.
I remember using some forums, and there'd be pages and pages of idiots just replying "Wow, this is great, thanks OP!", or "Thanks from me too!". How the fuck do you think you're contributing rather than polluting?
And nowadays they can even create Github accounts and do this...
I first got on the Internet in 1991. The older students told me to lurk on Usenet and not post anything for a month or two to avoid getting flamed. I did, and then I loved it. Once all the @aol.com people started showing up, it went downhill. By 2000 it was so full of spam and garbage that I stopped going. I connected to a Usenet server last year for the first time in over 20 years, and it was just full of junk.
Funny thing is, if we were to revive it now, it might end up as a pretty nice place, given that all the dumb crowds now have their 4chans, reddits, phpBBs, facebooks, instagrams, etc.
Relying on EXIF is a good thing. But if you limit yourself to ONLY using EXIF, you can't group images, make one image in a group the primary image, assign common metadata to the entire group, etc.
All turned out to be essential in my photo archives, especially as I started scanning old pictures. You get the front and back side of a photo, or you scan a large-format drawing in 16 scans and store them alongside the merged one, etc.
Aperture used to handle it pretty well, but Apple dropped it. I learned my lesson, and now I'll be doing things differently.
I solved the "one photo in multiple albums" EXIF problem by using Keywords. Each album is a Keyword.
But yes, there are some other limitations that would be much harder to solve. But it's a tradeoff I've decided to make - if I can't figure out an EXIF-based solution then I'm not going to invest time using it because it will likely be lost in 5-10 years.
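In practice those keywords would live in the file's IPTC/XMP Keywords field (written with something like exiftool); the sketch below just models the idea in memory, with made-up filenames and keywords, to show why keywords solve the "one photo in multiple albums" problem:

```python
# Hypothetical stand-in for per-file keyword metadata; in a real archive
# these lists would be read from each image's IPTC/XMP Keywords tag.
photos = {
    "beach.jpg": ["Vacation 2021", "Family"],
    "cake.jpg":  ["Birthday", "Family"],
    "dunes.jpg": ["Vacation 2021"],
}


def albums(photos):
    """Invert the photo -> keywords map into album -> photos."""
    result = {}
    for name, keywords in photos.items():
        for kw in keywords:
            result.setdefault(kw, []).append(name)
    return result


by_album = albums(photos)
# One photo can sit in several albums without being copied anywhere:
print(sorted(by_album["Family"]))
```

Because the membership lives inside each file rather than in a sidecar database, it survives moving the archive between tools.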
> Aperture used to handle it pretty well, but Apple dropped it.
If you still miss it, note that Nitro (macOS, iPad, iPhone) is Aperture's spiritual successor, created by its former Sr. Director of Engineering. https://www.gentlemencoders.com/nitro-for-macos/
The title of this submission is misleading, that's not what they're saying. They said it doesn't show productivity gains for inexperienced developers still gaining knowledge.
The study measures whether participants learn the library, but what it should study is whether they learn effective coding-agent patterns to use the library well. Learning the library is not going to be what we need in the future.
> "We collect self-reported familiarity with AI coding tools, but we do not actually measure differences in prompting techniques."
Many people drive cars without being able to explain how cars work. Or use devices like that. Or interact with people whose thinking they can't explain. Society works like that: it is functional, and it does not run on full understanding. We need to develop the functional part, not the full-understanding part. We can write C without knowing the machine code.
You can often recognize a wrong note without being able to play the piece, spot a logical fallacy without being able to construct the valid argument yourself, catch a translation error with much less fluency than producing the translation would require. We need discriminative competence, not generative.
For years I maintained a library for formatting dates and numbers (prices, ints, IDs, phone numbers). It was a pile of regexes, but I maintained hundreds of test cases for each type of parsing. As new edge cases appeared, I added them to my tests and iterated to keep the score high. I don't fully understand my own library; it emerged by scar accumulation. I mean, yes, I can explain any line, but why these regexes in this order is a data-dependent explanation I don't have anymore. All my edits run in a loop with the tests, and my PRs are sent only when the score is good.
Correctness was never grounded in understanding the implementation. Correctness was grounded in the test suite.
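The workflow described above can be sketched in miniature; the patterns and test cases here are invented, not the commenter's actual library, but the shape is the same: correctness lives in the test table, and every edit is accepted only if every accumulated case still passes.

```python
import re

# Illustrative patterns only; order matters, and why it matters is exactly
# the "data-dependent explanation" that lives in the test table below.
PRICE_PATTERNS = [
    re.compile(r"^\$(?P<amount>\d{1,3}(?:,\d{3})*(?:\.\d{2})?)$"),
    re.compile(r"^(?P<amount>\d+(?:\.\d{2})?)\s*USD$"),
]


def parse_price(text):
    """Return the numeric value of a price string, or None if unparseable."""
    for pattern in PRICE_PATTERNS:
        m = pattern.match(text.strip())
        if m:
            return float(m.group("amount").replace(",", ""))
    return None


# Scar accumulation: every edge case that ever broke production gets a row.
CASES = [
    ("$1,234.56", 1234.56),
    ("19.99 USD", 19.99),
    ("not a price", None),
]

for text, expected in CASES:
    assert parse_price(text) == expected
print("all cases pass")
```

Nothing about the regexes needs to be individually understood for the library to stay correct; the suite is the spec.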
You can, most certainly, drive a car without understanding how it works. A pilot of an aircraft on the other hand needs a fairly detailed understanding of the subsystems in order to effectively fly it.
I think being a programmer is closer to being an aircraft pilot than a car driver.
Sure, if you are a pilot then that makes sense. But what if you are a company that uses planes to deliver goods? Like when the focus shifts from the thing itself to its output
> Many people drive cars without being able to explain how cars work.
But fundamentally, all cars behave the same way all the time. Imagine running a courier company where the vehicles sometimes take a random left turn.
> Or interact with people whose thinking they can't explain
Sure, but they trust those service providers because they are reliable. And the reason they are reliable is that the service providers can explain their own thinking to themselves. Otherwise their business would be chaos and nobody would trust them.
How you approached your library was practical given the use case. But can you imagine writing a compiler like this? Or writing an industrial automation system? Not only would it be unreliable but it would be extremely slow. It's much faster to deal with something that has a consistent model that attempts to distill the essence of the problem, rather than patching on hack by hack in response to failed test after failed test.
But isn't it the correction of those errors that is valuable to society and gets us a job?
People can tell when they've found a bug, or describe what they want from software, yet it requires skill to fix the bugs and to build the software. Though LLMs can speed up the process, expert human judgment is still required.
If you know that O(n) "contains" checks are acceptable and that you need O(1) retrieval of items, for a given order of magnitude, it feels like you have all the pieces of the puzzle needed to keep the LLM on the straight and narrow, even if you didn't know off the top of your head that you should choose ArrayList.
Or if you know that string manipulation might be memory intensive so you write automated tests around it for your order of magnitude, it probably doesn't really matter if you didn't know to choose StringBuilder.
That feels different to e.g. not knowing the difference between an array list and linked list (or the concept of time/space complexity) in the first place.
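The ArrayList/StringBuilder examples above are Java, but the underlying properties are checkable in any language. Here is a rough Python analog (sizes are illustrative) showing the two decisions the comment describes: knowing when a linear membership scan stops being acceptable, and accumulating string parts before a single join rather than concatenating in a loop:

```python
import timeit

n = 50_000
items = list(range(n))   # analog of ArrayList: O(1) index, O(n) contains
lookup = set(items)      # hash-based: O(1) contains

# Membership: the list does a linear scan, the set hashes once.
t_list = timeit.timeit(lambda: (n - 1) in items, number=20)
t_set = timeit.timeit(lambda: (n - 1) in lookup, number=20)
print(t_set < t_list)  # the hash structure wins once n is large

# String building: collect parts, then join once (the StringBuilder idea);
# repeated `s += part` in a loop re-copies the accumulated prefix each time.
parts = [str(i) for i in range(1000)]
joined = "".join(parts)
```

Checks like these are exactly the kind of guard rails that let you verify an LLM's choice without having memorized which concrete class to reach for.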
My gut feeling is that, without wrestling with data structures at least once (e.g. during a course), that knowledge about complexity will be cargo cult.
When it comes to fundamentals, I think it's still worth the investment.
To paraphrase, "months of prompting can save weeks of learning".
I think the kind of judgement required here is to design ways to test the code without inspecting it manually line by line; inspecting line by line would be walking a motorcycle, and you would only be vibe-testing. That is why we have seen the FastRender browser and the JustHTML parser: the testing part was solved upfront, so AI could go nuts implementing.
I partially agree, but I don’t think “design ways to test the code without inspecting it manually line by line” is a good strategy.
Tests only cover cases you already know to look for. In my experience, many important edge cases are discovered by reading the implementation and noticing hidden assumptions or unintended interactions.
When something goes wrong, understanding why almost always requires looking at the code, and that understanding is what informs better tests.
Instead, just learning concepts with AI and then using HI (Human Intelligence) and AI to solve the problem at hand, by going through the code line by line and writing tests, is a better approach productivity-, correctness-, efficiency-, and skill-wise.
I can only think of LLMs as fast typists with some domain knowledge.
Like typists of government/legal documents who know how to format documents but cannot practice law. Likewise, LLMs are code typists who can write good/decent/bad code but cannot practice software engineering - we need, and will need, a human for that.
I agree. It's very misleading. Here's what the authors actually say:
> AI assistance produces significant productivity gains across professional domains, particularly for novice workers. Yet how this assistance affects the development of skills required to effectively supervise AI remains unclear. Novice workers who rely heavily on AI to complete unfamiliar tasks may compromise their own skill acquisition in the process. We conduct randomized experiments to study how developers gained mastery of a new asynchronous programming library with and without the assistance of AI. We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average. Participants who fully delegated coding tasks showed some productivity improvements, but at the cost of learning the library. We identify six distinct AI interaction patterns, three of which involve cognitive engagement and preserve learning outcomes even when participants receive AI assistance. Our findings suggest that AI-enhanced productivity is not a shortcut to competence and AI assistance should be carefully adopted into workflows to preserve skill formation -- particularly in safety-critical domains.
> AI assistance produces significant productivity gains across professional domains, particularly for novice workers.
> We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average.
Are the two sentences talking about non-overlapping domains? Is there an important distinction between productivity and efficiency gains? Does one focus on novice users and one on experienced ones? Admittedly did not read the paper yet, might be clearer than the abstract.
Not seeing the contradiction. The two sentences suggest a distinction between novice task completion and supervisory (ie, mastery) work. "The role of workers often shifts from performing the task to supervising the task" is the second sentence in the report.
The research question is: "Although the use of AI tools may improve productivity for these engineers, would they also inhibit skill formation? More specifically, does an AI-assisted task completion workflow prevent engineers from gaining in-depth knowledge about the tools used to complete these tasks?" This hopefully makes the distinction more clear.
So you can say "this product helps novice workers complete tasks more efficiently, regardless of domain" while also saying "unfortunately, they remain stupid." The introductory lit review/context setting cites prior studies to establish "OK, coders complete tasks efficiently with this product." But then they say, "our study finds that they can't answer questions." They have to say "earlier studies found productivity gains" in order to ask "do these gains extend to other skills? Maybe not!"
The first sentence is a reference to prior research work that has found those productivity gains, not a summary of the experiment conducted in this paper.
In that case it should not be stated as a fact; it should be something like the following.
While prior research found significant productivity gains, we find that AI use does not deliver significant efficiency gains on average, while also impairing conceptual understanding, code reading, and debugging abilities.
That doesn't really line up with my experience. I wanted to debug a CMake file recently, having done no such thing before, and AI helped me walk through the potential issues, explaining what I got wrong.
I learned a lot more in a short amount of time than I would've stumbling around on my own.
AFAIK it's been known for a long time that the most effective way of learning a new skill is to get private tutoring from an expert.
This depends heavily on your current skill level and amount of motivation. AI is not a private tutor, as AI will not actually verify that you have learned anything unless you prompt it. Which means that you must not only know what exactly to search for (arguably already an advanced skill in CS) but also know how tutoring works.
I agree the title should be changed, but as I commented on the dupe of this submission, learning is not something that happens as a beginner, student, or "junior" programmer and then stops. The job is learning, and after 25 years of doing it I learn more per day than ever.
I successfully use Claude Code in a large complex codebase. It's Clojure, perhaps that helps (Clojure is very concise, expressive and hence token-dense).
Perhaps it's harder to "do Clojure wrong" than it is to do JavaScript or Python or whatever other extremely flexible multi-paradigm high-level language.
Having spent 3 years of my career working with Clojure, I think it actually gives you even more rope to shoot yourself with than Python/JS.
E.g. macros exist in Clojure but not Python/JS, and I've definitely been plenty stumped by seeing them in the codebase. They tend to be used in very "clever" patterns.
On the other hand, I'm a bit surprised Claude can tackle a complex Clojure codebase. It's been a while since I attempted using an LLM for Clojure, but at the time it failed completely (I think because there is relatively little training data compared to other mainstream languages). I'll have to check that out myself
> EVs have much higher emissions of microplastics and PFAS (or variations thereof) due to increased tire degradation
I find those claims highly suspect: I own an EV and haven't had to change the tires more often than I did on a gasoline-powered car. My EV bought in 2021 still runs on original tires and they're fine (although I do change from winter to summer tires, so that's 2 sets technically).
I suspect black PR, and there is always a grain of truth in black PR: emissions are indeed likely to be higher. Probably not "much higher" and probably not in a way that really matters.
Just because a tire lasts as long doesn't mean it isn't wearing in different ways. EV-specific tires are a lot different from their ICE counterparts.
This isn't "black PR". It's comparing apples and oranges. But throw non-EV tires on one and you'll definitely chew those tires up much more quickly [0][1][2][3].
The Ioniq 5's class isn't lighter than its ICE competitors. It may be lighter than a larger SUV, but the tire changes drastically as the GVWR increases.
An Ioniq 5 can weigh over 1000lbs more than a Honda CR-V, for example (depending on trim & battery).