I'm happy to let people think that AI does not yield productivity gains. There is no point engaging on this topic, so I will just outwork/outperform them.
I now have the pleasure of giving candidates exercises where they are explicitly allowed to use any AI or autocomplete they want, but it's one of those tricky real-world problems where you'll get yourself into trouble if you blindly follow the model's suggestions. It separates the builders from the bureaucrats far more effectively than seeing who can whiteboard or leetcode.
It's kind of a trap. We allow people in interviews to do the same, and some of them waste more time accepting wrong LLM completions and then fixing them than if they'd just written the code themselves.
I've been doing this inadvertently for years by making tasks as realistic as possible, explicitly based on the code the candidate will be working on.
As it happens, this meant that when candidates started throwing AI at the task, instead of performing the magic it usually does when you ask it to build a todo app or solve some done-to-death, irrelevant leetcode problem, it flailed and left the candidate feeling embarrassed.
I really hope AI sounds the death knell for fucking stupid interview problems like leetcode. Alas, many companies are instead knee-jerking and "banning" AI from interview use (even Claude, hilariously).
I'd be seeing whether the candidate actually understands what the LLM is spitting out and pushes back when it doesn't make sense, vs. being one of the "infinite monkeys on infinite typewriters".
IF (and it's a big IF) LLMs are the future of coding, that doesn't mean humans don't do anything; the role has changed from author to editor. Maybe you don't need to create the implementation, but you sure better know how to read and understand it.
Not sharing what our coding questions are, but we also allow LLMs now. It's the interviewee's choice to do so.
In quite a few interviews in the last year, I have come away convinced that the candidate would have performed far better relying exclusively on their own knowledge/experience. They fumble with windows/tabs and don't quite read what they're copying, and when I ask why they chose something, some of them fold immediately and opt for something far better or more sensible, implying they would have known what to do had they bothered to actually think for a moment.
I rolled out a migration to 60+ backends by using Claude Code to manage it in the background. Simultaneously, I worked on other features while keeping my usual meeting load. I have more commits and releases per week than I have had in my whole career, which is objectively more productive.
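For concreteness, here's a minimal sketch of one way such a fan-out could be driven; the repo paths, prompt, and concurrency level are placeholders, and it assumes Claude Code's non-interactive `claude -p` print mode:

```python
# Hypothetical sketch: run one headless Claude Code session per repo and
# collect exit codes, so failed repos can be retried or finished by hand.
import subprocess
from concurrent.futures import ThreadPoolExecutor

REPOS = [f"/work/backend-{i:02d}" for i in range(60)]  # placeholder paths
PROMPT = "Apply the migration described in MIGRATION.md, then run the tests."

def migrate(repo: str) -> tuple[str, int]:
    result = subprocess.run(
        ["claude", "-p", PROMPT],  # -p / --print: non-interactive mode
        cwd=repo,
        capture_output=True,
        text=True,
    )
    return repo, result.returncode

with ThreadPoolExecutor(max_workers=4) as pool:  # keep the fan-out modest
    for repo, code in pool.map(migrate, REPOS):
        status = "ok" if code == 0 else f"failed (exit {code})"
        print(f"{repo}: {status}")
```

The point is less the code than the workflow: the agent churns away in the background while you review the diffs and get on with other work.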
Sometimes when I read such meaningless things, my first reaction is to feel like I'm too ignorant to understand what the person is saying.
But then when I really think about it, usually they're just bullshitting or being purposefully vague, using terms that don't mean anything precise in order to avoid actual criticism.
I question your assertion that more commits and releases per week means more productivity. There could be unexpected effects from your commits that create more work for you or for others, and that could be hard to quantify.
Doing bad things faster might feel more productive to you, but it doesn't mean that you are delivering more value. You might be, but the metrics you have shared do not prove that.
The issue I have with comments like this one is the one-dimensional notion of value described as "productivity gains" for a single person.
There are many things in this world that could fairly be described as "more productive" or "faster" than the norm, yet few people would argue that this makes them a net benefit. You can lie and cheat your way to success, and that tends to work too. There are good reasons society frowns on it.
To me, focusing only on "I'm more productive" while ignoring the systemic and societal factors impacted by that "productivity" is completely missing the forest for the trees.
The fact that you further feel that there isn't even a point in engaging on the topic is disturbing considering those ignored factors.
The person above literally pitches an unsupported belief against a study (however flawed it may be). If that doesn't support my skepticism, I don't know what does.
When you've lived with AI boosting your productivity for a year or more it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".
>it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".
In my youth, I would have argued this was bad. Now, I tend to agree. Not that studies are worthless; they are just part of the accumulation of evidence, and when they contradict a clear result you are seeing directly, you need to weight the evidence appropriately.
(Obviously, replicated studies showing clear effects should be more heavily weighted.)
When you've lived with -AI- stimulants boosting your productivity for a year or more it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".
we don't demand every developer pop Adderall though
> When you've lived with AI boosting your productivity for a year or more it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".
Me: The person above literally pitches an unsupported belief against a study
You: it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".
Really? Really?!!
As for "boosting your productivity", it's also what I'm talking about in the article I linked:
--- start quote ---
For every description of how LLMs work or don't work we know only some, but not all of the following:
- Do we know which projects people work on? No
- Do we know which codebases (greenfield, mature, proprietary etc.) people work on? No
- Do we know the level of expertise the people have? No. Is the expertise in the same domain, codebase, language that they apply LLMs to? We don't know.
- How much additional work did they have reviewing, fixing, deploying, finishing etc.? We don't know.
Even if you have one person describing all of the above, you will not be able to compare their experience to anyone else's because you have no idea what others answer for any of those bullet points.
--- end quote ---
So what happens when we actually control and measure those variables?
Wait, don't answer: "no, it's easier to believe yourself over a study".
See? Skeptics don't even have to "jump on anything that supports their skepticism." Even you supply them with material.
What I'm willing to assert as fact, based not just on my own experiences (though they play a major role) but on observing this space for several years and talking to literally hundreds of people, is that LLMs can provide a very real productivity boost in coding if you take the time to learn how to use them, or if you get lucky and chance upon the most productive patterns.
You remind me of the Haskell hype of ~2016-2018, when the community wrote tons of blog posts and passionate comments on HN about the theory of types and the "productivity boost" of their language while simultaneously producing a paltry output of actually useful software in the hands of actual users.
I'm sure they were completely genuine in how they felt, just as I am sure you are too.
I don't think the snark is warranted here. The person you're responding to linked an article dealing with that issue, and the title is what they mentioned.
Edit: SimonW? Really? I didn’t see the name but I didn’t expect you to be like that.
I don't think troupo's response (that nayshins's personal experience is invalidated by a "randomised controlled trial") was well argued, so I imitated what I saw as their snarky wording with my own reworded version of it.
I do take the "AI isn't actually a productivity boost" thing a little bit personally these days, because the logical conclusion for that is that I've been deluding myself for the past two years and I'm effectively a victim of "magical thinking".
(That said, I did actually go to delete my comment shortly after posting it because I didn't think it added anything to the conversation, but it had already drawn a reply so I left it there.)
> because the logical conclusion for that is that I've been deluding myself for the past two years and I'm effectively a victim of "magical thinking".
You may very well have. I, for one, am absolutely ready to re-evaluate any and all approaches I have with AI to see if I am actually more productive or not.
But moreover, your own singular experience with your own code and projects may make you more productive. We don't know if it does because we don't have a baseline against which to measure.
But even moreover, on top of that moreover: we don't even have an answer to the question "can a single senior engineer's experience with his own code and approaches be generalised over the entire population of programmers?" Skeptics say: no (and now have some proof of that). Optimists loudly say: yes, of course, and dismiss everyone who dares to contradict them out of hand.
- Is "the issue" that anyone who claims any productivity gain is using magical thinking?
- How does the linked article "deal with" "the issue"?
- What title did they mention?
- What did they link to that has that title?
> Edit: SimonW? Really? I didn’t see the name but I didn’t expect you to be like that.
Like what? I think you're getting a bit emotional and personal here; I don't read anything remotely inappropriate into Simon's comment. Been here 15 years. OP's comment was odd for HN in that it admits zero argument: if you think you have productivity gains, it's magical thinking.
So I understand how people can get emotional about this topic, but real quick, let's calm down and re-evaluate what I said.
My comment was in response to Simon’s reply to a user who posted an article. The title of the article they posted addresses magical thinking in AI.
Now whether that's an opinion you share or not is not the point. Simon responded as if the user were calling any perceived gains from AI magical thinking, which is not the case.
I'll let you come back to that when you feel like it. Altogether, though, it's just disappointing to see someone whose work I read often jumping to an emotional response when it's not warranted.
Gosh, I was conflicted, then you pulled out that sentence and I was convinced. :)
Alternatively: When faced with a contradiction, first, check your premises.
I don't want to belabor the point too much; there's little common ground if we're stuck in all-or-nothing thinking. "The study proved AI is net-negative because of this pull quote" isn't discussion.
I've watched a lot of people code with Cursor etc., and I noticed that they seem to get a rush when it occasionally does something amazing that more than offsets their disappointment when it (more often) screws up.
The psychological effect reminds me a bit of slot machines, which provide you with enough intermittent wins to make you feel like you're winning while you're losing.
I think this might be linked to that study that found experienced OSS devs thought they were faster when they were, in actual fact, 20% slower.
If we assume that AI coding actually increases a programmer's productivity without side effects (a controversial assumption, of course, but one that doesn't affect the actual question):
1) If you are a salaried employee and you are seen as less productive than your colleagues who use AI, at the very least you won't be valued as much. Either you will eventually earn less than your colleagues or be made redundant.
2) If you are a consultant, you'll be able to invoice more work in the same amount of time. Of course, so will your competitors, so rates for a set amount of work will probably decrease.
3) If you are an entrepreneur, you will be able to create a new product hiring fewer people (or on your own). Of course, so will your competitors, so the expectations for viable MVPs will likely be raised.
In short, if AI coding assistants actually make a programmer more productive, you will likely have to learn to live with it in order to not be left behind.
This is only true if the degree to which they increase productivity meaningfully rises above the level of noise.
That is to say: "productivity" is notoriously hard to measure with any accuracy or reliability. Other factors, such as different (and often terrible) productivity measures, nepotism/cronyism, communication skills, self-marketing skills, and what your manager had for breakfast on the day of the performance review, are guaranteed to skew the results, and are highly likely, in what I would guess is the vast majority of cases, to make any productivity increases enabled by LLMs nearly impossible to detect at a larger scale.
Many people like to operate as if the workplace were a perfectly efficient market system, responding quickly and rationally to changes like productivity increases, but in fact, it's messy and confusing and often very slow. If an idealized system is like looking through a pane of perfectly smooth, clear glass, then the reality is, all too often, like looking through smudgy, warped, clouded bullseye glass into a room half-full of smoke.
The problem is that it doesn't actually matter if it really makes a programmer more productive or not.
Because productivity is hard to measure, if we just assume that using AI tools is more productive, we're likely to be making stupid choices.
And since I strongly think that AI coding is not making me personally more productive, it puts me in a situation where I have to behave irrationally in order to show employers that I'm a good worker bee.
I increasingly feel trapped between two losing choices: I take the mental anguish of using AI tools against my better judgment, or I take the financial insecurity (and associated mental anguish) of just being unemployed.
Turns out that I spend a lot of my spare time teaching others how to use them effectively. My goal is to help as many people become experts at this as I can.
From Wikipedia: "Naked short selling, or naked shorting, is the practice of short-selling a tradable asset of any kind without first borrowing the security or ensuring that the security can be borrowed, as is conventionally done in a short sale." It is really easy to lose your shirt with a naked short.
Naked shorting doesn't do much to exacerbate the downside of shorting a security. Even if you have borrowed the security, you still have an unlimited downside (i.e. the stock price _can_ go to infinity).
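To make that asymmetry concrete, here's a toy P&L calculation with made-up numbers (this applies to any short position, naked or not):

```python
# Toy short-sale P&L: sell 100 shares short at $10, cover at various prices.
# Gains are capped (the price can only fall to $0); losses are not.
entry_price = 10.0  # price received when selling short
shares = 100

for exit_price in (0.0, 5.0, 50.0, 500.0):
    pnl = (entry_price - exit_price) * shares
    print(f"cover at ${exit_price:8.2f}: P&L = ${pnl:+10.2f}")
# Best case: cover at $0.00 for +$1000; worst case: unbounded loss.
```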
Naked shorting is more of a systemic problem because you can put downward pressure on a stock even when the market is not willing to sell at the price you are pretending to sell at. It also makes it hard to keep track of voting shares: you can see many more votes cast than actual shares exist.
Look at the agreement you sign when opening a brokerage account. You give the brokerage firm the right to loan out any shares of stock you own. I believe this only happens in accounts that have margin (you do have to have a margin account to short), but I am not entirely certain.