
I now have the pleasure of giving exercises to candidates where they are explicitly allowed to use any AI or autocomplete that they want, but it's one of those tricky real-world problems where you'll only get yourself into trouble if you only follow the model's suggestions. It really separates the builders from the bureaucrats far more effectively than seeing who can whiteboard or leetcode.


It's kind of a trap. We allow people in interviews to do the same, and some of them waste more time accepting wrong LLM completions and then fixing them than if they'd just written the code themselves.


I've been doing this inadvertently for years by making tasks as realistic as possible - explicitly based on the code the candidate will be working on.

As it happens, this meant that when candidates started throwing AI at the task, instead of performing the magic it usually can when you have it build a todo app or solve some done-to-death irrelevant leetcode problem, it flailed and left the candidate feeling embarrassed.

I really hope AI sounds the death knell for fucking stupid interview problems like leetcode. Alas, many companies are instead knee-jerking and "banning" AI from interview use (even Claude, hilariously).


> but it's one of those tricky real-world problems where you'll only get yourself into trouble if you only follow the model's suggestions.

What's the goal of this? What are you looking for?


I presume: people who can code, as opposed to people who can only prompt an LLM.

In the real world, you hit problems that the LLM doesn't know what to do with. When that happens, are you stuck, or can you write the code?


I'd be seeing whether the candidate actually understands what the LLM is spitting out and pushes back when it doesn't make sense, versus being one of the "infinite monkeys on infinite typewriters".


IF (and it's a big IF) LLMs are the future of coding, that doesn't mean humans do nothing; the role just changes from author to editor. Maybe you don't need to write the implementation, but you'd sure better know how to read and understand it.


That's really interesting... can you give more details about the problem you are using?

This sounds like it will turn into a race between these kinds of booby-trap tests and AIs learning them.


Long-tail problems are not repeated often in the training data. Getting an LLM to remember them can be difficult.


Some code challenge platforms let you see how often someone pasted things in. That's been interesting.
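
For the curious, here's a minimal sketch of how that kind of tracking could work in a browser-based editor. This is a guess at the mechanism, not any particular platform's implementation; the "code-editor" element id and the metadata shape are made up:

    // Hypothetical sketch: count paste events in a browser-based code editor.
    const editor = document.getElementById("code-editor") as HTMLTextAreaElement;

    interface PasteRecord {
      at: number;      // timestamp (ms since epoch)
      length: number;  // characters pasted
    }

    const pastes: PasteRecord[] = [];

    editor.addEventListener("paste", (e: ClipboardEvent) => {
      const text = e.clipboardData?.getData("text") ?? "";
      pastes.push({ at: Date.now(), length: text.length });
    });

    // Attached to the submission so a reviewer can see how much was pasted in.
    function pasteMetadata() {
      return {
        pasteCount: pastes.length,
        pastedChars: pastes.reduce((sum, p) => sum + p.length, 0),
      };
    }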


Interesting, care to elaborate? Or is this a carefully guarded secret?


Not sharing what our coding questions are, but we also allow LLMs now. It's the interviewee's choice whether to use one.

In quite a few interviews over the last year I have come away convinced the candidate would have performed far better relying exclusively on their own knowledge and experience. They fumble with windows and tabs, don't quite read what they're copying, and when I ask why they chose something, some of them fold immediately and opt for something way better or more sensible, implying they would have known what to do had they bothered to actually think for a moment.

I put down "no hire" for all of them of course.




