I won't try to speak for anyone other than myself, but my multiplier is definitely over 1.5x, probably higher than 5x.
I choose to sit on my hands in my freed-up time so upper management does not catch on to and exploit this fact. Eventually they will, though, via overzealous coworkers.
It’s easy to produce a high volume of code, sure, but it is not equally easy to test, verify, and integrate it. And with a high volume of code, there is a high volume of shit to review & test & integrate. For companies that give a shit about not vibe coding their way into a disaster (because they have lucrative enterprise contracts that depend on reliability & security), that’s the real blocker. (Plus, these types of projects are big, not trivial, and things are harder to integrate & properly test because of that.)
Not to mention, if a team wants to keep a semblance of understanding of what they own & ship… it can be exhausting to have a huge volume of new code coming into the system.
It’s definitely a productivity unlock. For sure. But there are a lot of knock-on effects we’re still figuring out that counteract how much extra “value” we’re shipping.
In my case, the volume of code is roughly the same. I'm not using the efficiency towards pumping out more code, just using it to be AFK more.
I spend enough time iterating and refining that I'm comfortable taking ownership of the resulting code. Perhaps hypocritically, I do mald when people upload code for review that they clearly haven't made the effort to read through critically.
People with a lower multiplier are either in the minority of developers solving genuinely hard/novel problems or, more likely, they've just not figured out how to tap into AI's potential.
Granted, to your point, a decent chunk of the HN crowd belongs to the former and can't relate to us paycheck stealers.
I always hear people say this, but it’s not clear to me what exactly is so difficult about using AI that otherwise-competent developers “can’t figure it out”
My probably incorrect, uninformed hunch is that users convinced of how AI should act actually end up nerfing its capabilities with their prompts. Essentially dumbing it down to their level, losing out on the wisdom it's gained through training.
Back in the GPT-3 and 3.5 days, I found that the presence of any system message, even a one-word one, changed the output drastically for the worse. I haven't verified this behaviour with recent models.
Since then I don't impose any system prompts on users of my tg bot. This is so unusual compared to what others do that very few actually appreciate it. I'm happy I don't need to make a living from this project, so I can keep it ideologically clean: users control the system prompt, temperature, and top_p, with a selection of the top barebones LLMs.
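For what it's worth, leaving the system prompt entirely in the user's hands is trivial with any OpenAI-style chat API. A minimal sketch (the function, model name, and defaults are mine for illustration, not from the bot described above):

```python
def build_request(user_text, history=(), system_prompt=None,
                  temperature=1.0, top_p=1.0):
    """Build a chat-completion payload. No system message is injected
    unless the user explicitly supplies one."""
    messages = []
    if system_prompt:  # purely user-controlled, never imposed by the bot
        messages.append({"role": "system", "content": system_prompt})
    messages.extend(history)
    messages.append({"role": "user", "content": user_text})
    return {
        "model": "gpt-4o",           # placeholder model name
        "messages": messages,
        "temperature": temperature,  # user-settable sampling controls
        "top_p": top_p,
    }
```

The point is simply that the barebones behaviour is the default, and any steering is opt-in per user.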
I often wonder this as well. Things are moving so quickly that unless you want to keep chasing the next best prompt/etc, you are better off running as close to vanilla as you can, IMHO.
Similar for MCP/Skills/Prompts: I’m not saying they can’t/don’t help, but I think you can shoot yourself in the foot very easily and spend all your time maintaining those things and/or trying to force the agent to use your Skill/MCP. That, or having your context eaten up with bad MCP/Skills.
I read a comment the other day about someone talking about Claude Code getting dumber; then they went on to explain that switching would be hard due to their MCP/Skills/Skill-router setup. My dude, maybe _that’s_ the problem?
Huh. I'd never thought of this. If that is actually meaningfully beneficial, I wonder if they'd design self driving cars with the seats facing backwards, given there's no longer a necessity to look at the road.
(edit: I guess it's more of no-brainer on a train/bus where you don't have a seat belt)
Not the author, but I think there was some research showing it's indeed better, if you have head support, to be facing backwards. It prevents a whole range of injuries, from neck injuries to becoming a projectile yourself.
But it's really theoretical, and does not account for the passenger in front of you headed head-first into your throat.
PS: I laughed hard that xlbuttplug2 is answering to deadbabe. The internet lives!
Consider the "booth seats" in trains and buses, arranged so people can chat while facing each other. If you've got a Waymo with your friends, why wouldn't you want the seats facing each other so you can be social, this safety factor aside?
Sitting backwards is beneficial as far as accidents are concerned.
But sitting backwards is very very uncomfortable if there is any kind of uneven acceleration, bumps, swaying, rolling, curvy tracks or whatever. Humans need to look forward at the horizon to get their visual stimuli aligned with their motion/balance sense in the inner ear. If that alignment isn't there, you will get seasick. Backwards makes this even worse.
Babies don't suffer from this, because closing your eyes helps, and infants don't have as strong a reaction to motion anyway, since they're usually carried by their parents until walking age. So rear-facing seats only work for babies.
That's a serious overgeneralization. It's true for some people, but trains mostly don't bump and swerve enough for that to be a significant problem. Finnish trains have lots of seats facing backwards and while they're not anywhere as fast as something like a TGV, they're still often going 200+ km/h. People seem to be just fine. I just spent 1 hour 40 minutes yesterday sitting backwards, mostly reading a book, with no ill effects.
Infant car seats face backwards, and the recommendation is to keep them rear-facing for as long as possible (until the kid is too big to fit comfortably in a backwards-facing position).
It's incredibly beneficial. However, many people dislike it and want to face the direction they are moving in, so the best case is probably a train-style 4-seater, with 2 seats facing forward and 2 backwards.
I would have no problem with that. I wouldn't call the AI an artist, though; it wouldn't have the sentient knowledge to be an artist. It would be art made by a machine. In fact, we already have several examples of that, and lots of them are really fun and appreciated out there. This new one just happens to be much more complex and eerie to digest at first.
> They can and most likely will release something that vaporises the thin moat you have built around their product.
As they should if they're doing most of the heavy lifting.
And it's not just LLM adjacent startups at risk. LLMs have enabled any random person with a claude code subscription to pole vault over your drying up moat over the course of a weekend.
LLMs by their very nature subsume software products (and services). LLM vendors are actually quite restrained - the models are close to being able to destroy the entire software industry (and I believe they will, eventually). However, at the moment, it's much more convenient to let the status quo continue, and just milk the entire industry via paid APIs and subscriptions, rather than compete with it across the board. Not to mention, there are laws that would kick in at this point.
I think the function of a company is to address limitations of a single human by distributing a task across different people and stabilized with some bureaucracy. However, if we can train models past human scales at corporation scale, there might be large efficiency gains when the entire corporation can function literally as a single organism instead of coordinating separate entities. I think the impact of this phase of AI will be really big.
> the models are close to being able to destroy the entire software industry
Are you saying this based on some insider knowledge of models being dramatically more capable internally, yet deliberately nerfed in their commercialized versions? Because I use the publicly available paid SOTA models every day and I certainly do not get the sense that their impact on the software industry is being restrained by deliberate choice but rather as a consequence of the limitations of the technology...
I don't mean the companies are hoarding more powerful models (competition prevents that) - just that the existing models already make it too easy for individuals and companies to build and maintain ad-hoc, problem-specific versions of many commercial software services they now pay for. This is why people keep asking why the AI companies haven't done this themselves to a good chunk of the software world. One hypothesis is that they're gathering data from everyone using LLMs to power their business, in order to do just that. My alternative hypothesis is that they could already start burning through the industry, competing with whole classes of existing products and services, but they purposefully don't, because charging rent from existing players is more profitable than outcompeting them.
It’s something normal people understand - everyone who uses a desktop/laptop computer will have rearranged an icon. If they read this it will likely trigger some thoughts about what it could do for them.
It is still not clear to me. The periodicity of their orbit around the tree is the same. I think this is an instance of us meaning different things by “go around”
Say instead of just walking, the man was laying down a net/barricade around the tree. As soon as the man completes the circumference, the squirrel must admit that it has been gone around.
Now let us suppose the squirrel is at the same distance as the man.
Has the man gone around the squirrel, and the squirrel around the man?
If it only counts when your radius is less than the other's, where is the limit?
To get it I think I have to re-frame it like this:
If you hold out an object toward the centre, you clearly go around it when completing an orbit.
If you keep extending that arm to the origin and then beyond, so that it is longer than your orbital radius, you still go around the object, until the arm reaches twice the radius.
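One way to make "go around" precise without arguing about arms and nets is the winding number: the total signed angle your path sweeps around the other party, measured in full turns. This is my own reformulation of the net/barricade argument above, not something from the thread:

```python
import math

def winding_turns(path, point):
    """Total signed angle (in full turns) that a path sweeps around a
    fixed point; ~1 for a closed loop that encircles it, ~0 otherwise."""
    px, py = point
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        d = math.atan2(y1 - py, x1 - px) - math.atan2(y0 - py, x0 - px)
        # wrap each angular step into (-pi, pi] so jumps across the
        # branch cut of atan2 don't add a spurious full turn
        d = (d + math.pi) % (2 * math.pi) - math.pi
        total += d
    return total / (2 * math.pi)

# The man walks a closed radius-2 circle around the tree at the origin.
man = [(2 * math.cos(2 * math.pi * k / 100),
        2 * math.sin(2 * math.pi * k / 100)) for k in range(101)]

inside = winding_turns(man, (1.0, 0.0))   # squirrel inside his orbit
outside = winding_turns(man, (3.0, 0.0))  # squirrel outside his orbit
```

By this definition, the man goes around the squirrel exactly when the squirrel sits inside his orbit, with no special limit at twice the radius; whether that matches the everyday sense of "go around" is, as the parent says, a matter of what we mean by the phrase.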