Hacker News | nayshins's comments

Many have seen the writing on the wall, but many are refusing to believe it.


I'm happy to let people think that AI does not yield productivity gains. There is no point engaging on this topic, so I will just outwork/outperform them.


I now have the pleasure of giving exercises to candidates where they are explicitly allowed to use any AI or autocomplete that they want, but it's one of those tricky real-world problems where you'll only get yourself into trouble if you only follow the model's suggestions. It really separates the builders from the bureaucrats far more effectively than seeing who can whiteboard or leetcode.


It's kind of a trap. We allow people in interviews to do the same, and some of them waste more time accepting wrong LLM completions and then fixing them than if they'd just written the code themselves.


I've been doing this inadvertently for years by making tasks that were as realistic as possible - explicitly based upon the code the candidate will be working on.

As it happens, this meant that when candidates started throwing AI at the task, instead of performing the magic it usually can when you ask it to build a todo app or solve some done-to-death irrelevant leetcode problem, it flailed and left the candidate feeling embarrassed.

I really hope AI sounds the death knell for fucking stupid interview problems like leetcode. Alas, many companies are instead knee-jerking and "banning" AI from interview use (even Claude, hilariously).


> but it's one of those tricky real-world problems where you'll only get yourself into trouble if you only follow the model's suggestions.

What's the goal of this? What are you looking for?


I presume, people who can code, as opposed to people who can only prompt an LLM.

In the real world, you hit problems that the LLM doesn't know what to do with. When that happens, are you stuck, or can you write the code?


I'd be checking whether the candidate actually understands what the LLM is spitting out and pushes back when it doesn't make sense, versus being one of the "infinite monkeys on infinite typewriters".


IF (and it's a big IF) LLMs are the future of coding, this doesn't mean humans don't do anything, but the role has changed from author to editor. Maybe you don't need to create the implementation, but you sure better know how to read and understand it.


That's really interesting... can you give more details about the problem you are using?

This sounds like there will be a race between these kinds of booby-trap tests and AIs learning them.


Long-tail problems aren't repeated much in the training data, so getting an LLM to remember them can be difficult.


Some code challenge platforms allow for seeing how often someone pasted things in. That's been interesting.


Interesting, care to elaborate? Or this is a carefully guarded secret?


Not sharing what our coding questions are, but we also allow LLMs now. Interviewee's choice to use them.

In quite a few interviews over the last year, I have come away convinced that candidates would have performed far better had they relied on their own knowledge and experience exclusively. They fumble with windows and tabs, don't quite read what they're copying, and when I ask why they chose something, some of them fold immediately and opt for something far better or more sensible, implying they would have known what to do had they bothered to actually think for a moment.

I put down "no hire" for all of them of course.


If you are so happy to let people think that AI does not yield productivity gains, why comment here?

How exactly did you outperform? Show, don't talk.


I rolled out a migration to 60+ backends by using Claude code to manage it in the background. Simultaneously, I worked on other features while keeping my usual meeting load. I have more commits and releases per week than I have had in my whole career, which is objectively more productive.


> I rolled out a migration to 60+ backends

How is anyone supposed to understand what this means?

Given the ambiguity in your description and lack of actual code it’s hard to take you seriously.


Sometimes when I read such meaningless things my first reaction is to feel like I'm too ignorant to understand what the person says.

But then when I really think about it, usually they're just bullshitting or being purposefully vague, using terms that don't mean anything precise in order to avoid actual criticism.


I question your assertion that more commits and releases per week is more productivity. There could be unexpected effects from your commits that create more work for you or for others and that could be hard to quantify.

Doing bad things faster might feel more productive to you, but it doesn't mean that you are delivering more value. You might be, but the metrics you have shared do not prove that.


If you are producing much lower quality, then no, it is not objectively more productive.


If you find yourself working at the same place as me, feel free to judge my work, but until then, enjoy speculating.


The issue I have with comments like this one is the one-dimensional notion of value described as "productivity gains" for a single person.

There are many things in this world that could be fairly described as "more productive" or "faster" than the norm, yet few people would argue that it makes those things a net benefit. You can lie and cheat your way to success, and that tends to be successful too. There are good reasons society frowns on this.

To me, focusing only on "I'm more productive" while ignoring the systemic and societal factors impacted by that "productivity" is completely missing the forest for the trees.

The fact that you further feel that there isn't even a point in engaging on the topic is disturbing considering those ignored factors.


> I'm happy to let people think that AI does not yield productivity gains.

vs.

--- start quote ---

In a randomised controlled trial – the first of its kind – experienced computer programmers could use AI tools to help them write code.

--- end quote ---

Your quote is very representative of the magical wishful thinking most people have about AI: https://dmitriid.com/everything-around-llms-is-still-magical...


"Your quote is very representative of the magical wishful thinking most people have about AI"

Your comment here is very representative of how quickly people who are AI skeptics will jump on anything that supports their skepticism.


The person above literally pitches an unsupported belief against a study (however flawed it may be). If that doesn't support my skepticism, I don't know what does.


When you've lived with AI boosting your productivity for a year or more it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".


>it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".

In my youth, I would have argued this was bad. Now, I tend to agree. Not that studies are worthless, but they are just part of the accumulation of evidence, and when they contradict a clear result you are directly seeing, you need to weight the evidence appropriately.

(Obviously, replicated studies showing clear effects should be more heavily weighted.)

Everything is just shifting odds.


When you've lived with -AI- stimulants boosting your productivity for a year or more it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".

we don't demand every developer pop Adderall though


Don't give the management any ideas.


> When you've lived with AI boosting your productivity for a year or more it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".

Me: The person above literally pitches an unsupported belief against a study

You: it's pretty easy to believe your own experience over even a well-constructed "randomised controlled trial".

Really? Really?!!

As for "boosting your productivity", it's also what I'm talking about in the article I linked:

--- start quote ---

For every description of how LLMs work or don't work we know only some, but not all of the following:

- Do we know which projects people work on? No

- Do we know which codebases (greenfield, mature, proprietary etc.) people work on? No

- Do we know the level of expertise the people have? No. Is the expertise in the same domain, codebase, language that they apply LLMs to? We don't know.

- How much additional work did they have reviewing, fixing, deploying, finishing etc.? We don't know.

Even if you have one person describing all of the above, you will not be able to compare their experience to anyone else's because you have no idea what others answer for any of those bullet points.

--- end quote ---

So what happens when we actually control and measure those variables?

Wait, don't answer: "no, it's easier to believe yourself over a study".

See? Skeptics don't even have to "jump on anything that supports their skepticism." Even you supply them with material.


I'm not disputing that different people have radically different experiences of how much productivity boost they can get out of working with LLMs.

I've been banging this drum for over a year now: LLMs are deceptively difficult and unintuitive to use. Just one example: https://simonwillison.net/2025/Mar/11/using-llms-for-code/

What I'm willing to assert as fact, based not just on my own experiences (though they play a major role) but on observing this space for several years and talking to literally hundreds of people, is that LLMs can provide you a very real productivity boost in coding if you take the time to learn how to use them - or if you get lucky and chance upon the most productive patterns.

EDIT: I just saw you're the author of https://dmitriid.com/#everything-around-llms-is-still-magica... - that was a great piece! I think I may actually agree with you. I misinterpreted "magical thinking" as referring to something else.


> that was a great piece!

Thank you!

> I think I may actually agree with you.

I was just going to write "see, you actually agree with me", but got hit by the reply rate limit :)

And I agree with >90% of what you write, so I was surprised that this bout took us to weird places.


These fucking people sound like clickbait ads trying to sell me grift pills lmao. Go into the dietary supplement business instead, you’ll do well.


You remind me of the haskell hype of ~2016-2018 where the community wrote tons of blog posts and passionate comments on HN about the theory of types and the "productivity boost" of their language while simultaneously producing a paltry output of actual useful software in the hands of actual users.

I'm sure they were completely genuine in how they felt, just as I am sure you are too.


I don't know if you're doing a parody of study-ism, or if you're a bit ashamed to be doing it.

Either way, it's hard to have a conversation if your misreading is an absolute conclusion that cannot be argued, transmuted into warranted skepticism.


The converse is also true.


I don’t think the snark is warranted here. The person you’re responding to is linking an article dealing with that issue and the title is what they mentioned.

Edit: SimonW? Really? I didn’t see the name but I didn’t expect you to be like that.


I stand by what I said.

I don't think the response from troupo that nayshins's personal experience is invalidated by a "randomised controlled trial" was well argued, so I imitated what I saw as their snarky wording with my own reworded version of it.

I do take the "AI isn't actually a productivity boost" thing a little bit personally these days, because the logical conclusion for that is that I've been deluding myself for the past two years and I'm effectively a victim of "magical thinking".

(That said, I did actually go to delete my comment shortly after posting it because I didn't think it added anything to the conversation, but it had already drawn a reply so I left it there.)


That’s totally fair and I get it.

Working in security I often feel the same way and let’s be fair in the grand scheme of things it’s not that big of a deal.


> because the logical conclusion for that is that I've been deluding myself for the past two years and I'm effectively a victim of "magical thinking".

You may just as well have. I, for one, am absolutely ready to re-evaluate any and all approaches I have with AI to see if I am actually more productive or not.

But moreover, your own singular experience with your own code and projects may make you more productive. We don't know if it does because we don't have a baseline against which to measure.

But even moreover over that moreover is that we haven't even answered the question "can a single senior engineer's experience with his own code and approaches be generalised over the entire population of programmers?" Skeptics say: no (and now have some proof of that). Optimists loudly say: yes, of course, and dismiss everyone who dares contradict them out of hand.


Not OP, not sure what you mean but curious.

- Snark?

- Is "the issue" that anyone who claims any productivity gain is using magical thinking?

- How does the linked article "deal with" "the issue"?

- What title did they mention?

- What did they link to that has that title?

> Edit: SimonW? Really? I didn’t see the name but I didn’t expect you to be like that.

Like what? I think you're getting a bit emotional & personal here, I don't read anything remotely inappropriate into Simon's comment. Been here 15 years. OP's was odd for HN in that it admits 0 argument: if you think you have productivity gains, it's magical thinking.


So I understand how people can get emotional about this topic but really quick let’s calm down and reevaluate what I said.

My comment was in response to Simon’s reply to a user who posted an article. The title of the article they posted addresses magical thinking in AI.

Now whether that's an opinion you share or not is not the point. Simon responded as if the user were calling any perceived gains from AI magical thinking, which is not the case.

I'll let you come back to that when you feel like it. Altogether, though, it's just disappointing to see someone whose work I read often jumping to an emotional response when it's not warranted.


Never meet your heroes. Then again, if the worst is a bit of emotionally charged snark, of which we are all guilty at some point, I think I'll live.


Very true and to be fair Simon’s response with his perspective is valid and understandable.


(not op)

Gosh, I was conflicted, then you pulled out that sentence and I was convinced. :)

Alternatively: When faced with a contradiction, first, check your premises.

I don't want to belabor the point too much, there's little common ground if we're at all or nothing thinking - "the study proved AI is net-negative because of this pull quote" isn't discussion.


I've watched a lot of people code with Cursor, etc., and I noticed that they seem to get a rush when it occasionally does something amazing that more than offsets their disappointment when it (more often) screws up.

The psychological effect reminds me a bit of slot machines, which provide enough intermittent wins to make you feel like you're winning while you're losing.

I think this might be linked to that study which found experienced OSS devs who thought they were faster were in actual fact 20% slower.


Crazy how productivity gains just lead to more work for you.


Crazy how people would let their managers know they could get more done, instead of getting the same amount done quicker and having more free time.


If I was genuinely getting more free time, I’d be more amenable to this line of thought, but RTO put the nail in that coffin.


More work is good for the soul... until it isn't.


This is actually worth talking about imo

There is nothing in it for me, if I am more productive but earn the same and don't get any more time off

Why should I bother at that point?


If we assume that AI coding actually increases productivity of a programmer without side effects (which of course is a controversial assumption, but not affecting the actual question):

1) If you are a salaried employee, if you are seen as less productive than your colleagues that use AI, at the very least you won't be valued as much. Either you will eventually earn less than your colleagues or be made redundant.

2) If you are a consultant, you'll be able to invoice more work in the same amount of time. Of course, so will your competitors, so rates for a set amount of work will probably decrease.

3) If you are an entrepreneur, you will be able to create a new product hiring fewer people (or on your own). Of course, so will your competitors, so the expectations for viable MVPs will likely be raised.

In short, if AI coding assistants actually make a programmer more productive, you will likely have to learn to live with it in order to not be left behind.


This is only true if the degree to which they increase productivity meaningfully rises above the level of noise.

That is to say: "Productivity" is notoriously extremely hard to measure with accuracy and reliability. Other factors such as different (and often terrible) productivity measures, nepotism/cronyism, communication skills, self-marketing skills, and what your manager had for breakfast on the day of performance review are guaranteed to skew the results, and highly likely, in what I would guess is the vast majority of cases, to make any productivity increases enabled by LLMs nearly impossible to detect on a larger scale.

Many people like to operate as if the workplace were a perfectly efficient market system, responding quickly and rationally to changes like productivity increases, but in fact, it's messy and confusing and often very slow. If an idealized system is like looking through a pane of perfectly smooth, clear glass, then the reality is, all too often, like looking through smudgy, warped, clouded bullseye glass into a room half-full of smoke.


The problem is that it doesn't actually matter if it really makes a programmer more productive or not.

Because productivity is hard to measure, if we just assume that using AI tools is more productive we're likely to be making stupid choices

And since I strongly suspect that AI coding is not making me personally more productive, it puts me in a situation where I have to behave irrationally in order to show employers that I'm a good worker bee.

I am increasingly feeling trapped in a loser's choice: I take the mental anguish of using AI tools against my better judgment, or I take the financial insecurity (and associated mental anguish) of being unemployed.


The question is what happens when you're half as productive as everyone around you.


Measured by lines of code written, no doubt...


Relevant Office Space movie scene: https://youtu.be/to_e1N4xovQ?si=V6EVep8WqJ8zWisG


Yep, I'm ready to be management


>> so I will just outwork/outperform them.

Actually, based on your own admission, this is not what you're doing...


Greenfield or naturally grown brownfield projects?

People who boast about AI-enhanced productivity always seem to forget to mention which.


Until you no longer have access to it and are outperformed by people used to thinking every day.


Yet here you are, engaging.


the art of the bait


Good luck getting paid more for your improved performance :D


If they're right in their belief that AI usage leads to significantly more performance, their compensation is that they will keep their job.


"You get to keep your job" is the worst consolation prize I can imagine


No one gets paid more to get the same job done, unless you count free time as compensation.


I get paid a lot already.


Then enjoy demonstrating your AI-supercharged productivity advantage over your inferior peers, I think they'll love that.


Turns out that I spend a lot of my spare time teaching others how to use them effectively. My goal is to help as many people become experts at this as I can.


Uh, and who are you, exactly?


> so I will just outwork/outperform them.

At the game of producing garbage slop? Probably yeah.



Trustless asset transfers are a real problem though. Being able to settle a transfer without an intermediary is a huge advancement.


I use these every day: https://shopfelixgray.com/. They look pretty good, and the slight magnification really helps with eye strain.


The Netty documentation really is terrible. Have you read Netty In Action? That book made the framework much clearer.


From Wikipedia: "Naked short selling, or naked shorting, is the practice of short-selling a tradable asset of any kind without first borrowing the security or ensuring that the security can be borrowed, as is conventionally done in a short sale." It is really easy to lose your shirt with a naked short.


Naked shorting doesn't do much to exacerbate the downside of shorting a security. Even if you have borrowed the security, you still have an unlimited downside (i.e. the stock price _can_ go to infinity).

Naked shorting is more of a systemic problem because you can put downward pressure on a stock even when the market is not willing to sell at the price you are pretending to sell at. It also makes it hard to keep track of voting shares, and there can be many more votes than actual shares.
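A toy sketch (hypothetical prices, not tied to any comment above) of why any short position, naked or borrowed, has a capped upside but an unbounded downside:

```python
# P&L of a short sale: you sell borrowed shares now and must buy
# them back later. Profit is capped (the price can only fall to 0),
# but the loss grows without bound as the repurchase price rises.

def short_pnl(sell_price: float, buyback_price: float, shares: int) -> float:
    """Profit (or loss, if negative) on a closed short position."""
    return (sell_price - buyback_price) * shares

# Short 100 shares at $50.
print(short_pnl(50.0, 40.0, 100))   # stock falls to $40  -> 1000.0 profit
print(short_pnl(50.0, 0.0, 100))    # best case, stock hits 0 -> 5000.0
print(short_pnl(50.0, 200.0, 100))  # squeeze to $200 -> -15000.0 loss
```

The asymmetry is the whole point of "lose your shirt": the best case is bounded at sell_price × shares, while the worst case is not.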


Aka "unlimited downside".


From whom does one do this borrowing?


Look at the agreement you sign when opening a brokerage account. You give the brokerage firm the right to loan out any shares of stock you own. I believe this only happens in accounts that have margin (you do have to have a margin account to short), but I am not entirely certain.


Generally from your broker (who borrows them from their other clients who own those shares). There's some institutional lending also.


Apologies for my complete ignorance -- but doesn't that sound super shady?


Second all of these choices, and I will add Hyperion and its sequels to the list.


Location: Chicago
Remote: Not necessary
Willing to Relocate: Yes
Technologies: Proficient (Ruby, JS), Learning (Python, Java)
Resume: www.linkedin.com/in/jakenations/
Email: jnations1214@gmail.com

Former finance professional turned web developer looking for a junior position. I am willing to relocate.


Droid Sans and Open Sans are two of my favorites.

