
Which is why I'm more comfortable using AI as an editor/reviewer than as a writer.

I'll write the code, it can help me explore options, find potential problems and suggest tests, but I'll write the code.


Copyleft removes legal obligation but we're free to confer a social obligation.

Could be speed/efficiency was the wrong dimension to optimize for, and it's leading the industry down a bad path.

An LLM helps most with surface area. It expands the breadth of possibilities a developer can operate on.


When building out a new app or site, start with the simplest solution like the html-only autofilters first, then add complex behavior later.

It's good to know these things exist so there are alternatives to reaching for a fat react component as the first step.
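For a concrete example of the HTML-only route, a `<datalist>` gives a text input filtered autocomplete suggestions with zero JavaScript (the option values here are just illustrative):

```html
<label for="lang">Language:</label>
<input id="lang" name="lang" list="lang-options" />
<datalist id="lang-options">
  <option value="JavaScript"></option>
  <option value="TypeScript"></option>
  <option value="Python"></option>
</datalist>
```

The browser filters the options as the user types. The dropdown's appearance is browser-controlled, which is the usual trade-off of native widgets.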


Until your client tells you that it doesn't work in Edge and you find out it's because every browser has its own styling and they are impossible to change enough to get the really long options to show up correctly.

Then you're stuck with a bugfix's allotment of time to implement an accessible, correctly themed combo box that you should have reached for in the first place, just like what you had to do last week with the native date pickers.


Right, don't add complexity until you have to.

I'd argue that adding complexity from the get-go to ensure that all users have a pleasant experience is better than simplicity at the expense of some percentage of users.

I think it's important for web devs to spend more than two seconds thinking about whether the complexity is necessary from the get-go, though.


> When building out a new app or site

Which means a percentage of zero users is zero.

[deleted]


Have you no sense of craftsmanship?

It’s great to see practical examples that push us to consider what the platform already offers before adding more layers of complexity.

| self hosting costs you between 30 and 120 minutes per month

Can we honestly say it's completely unheard of for cloud services to take half an hour to two hours a month of someone's time, on average?


I handle our company's RDS instances, and probably spend closer to 2 hours a year than 2 hours a month over the last 8 years.

It's definitely expensive, but it's not time-consuming.


Of course. But people also have high uptime servers with long-running processes they barely touch.


Very much depends on what you're doing in the cloud, how many services you are using, and how frequently those services and your app needs updates.


I got the right answer but it was so easy I went in with doubt I had done it right.

Which I understand is my issue to work on, but if I were interviewing, I'd ask candidates to verbalize or write out their thought process to get a sense of who is overthinking or doubting themselves.


> I went in with doubt I had done it right.

And if in your doubt you decided to run it through the interpreter to get the "real" answer, whoops, you're rejected.


That's cheating (even if it just assures you that your answer is correct)


Is it? The page implies it's allowed, but they want people who think running it is "more of a hassle".


Oh right, it seems to be allowed.

I don't know then. I can open up a terminal with Python and paste it in really fast, faster than running it in my head.


That doubt is valid. Anyone reading this blog post (or in an interview, given the prevalence of trick interview questions) would know there must be some kind of trick. So, after getting the answer without finding a trick, it would be totally reasonable to conclude you must have missed something. In this case, it turns out the trick was something that was INTENDED for you to miss if you solved the problem in your head. At the end of the day, the knowledge that "I may have missed something" is just part of day to day life as an engineer. You have to make your best effort and not get paralyzed by the possibility of failure. If you did it right, had a gut feeling that something was amiss, but submitted the right answer without too much hemming and hawing, I expect that's typical for a qualified engineer.
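That doubt is rational even outside trick questions; plenty of trivial-looking snippets mis-run in your head. Two classic JavaScript examples (illustrative, unrelated to the post's actual puzzle):

```javascript
// Two snippets that are easy to get wrong when executed mentally.
const sorted = [1, 2, 10].sort(); // default sort compares elements as strings
const sum = 0.1 + 0.2;            // binary floating point

console.log(sorted);      // [ 1, 10, 2 ]
console.log(sum === 0.3); // false
```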


I'm a fan of anything that allows me to build with javascript that doesn't require a build step.

Modern HTML/CSS with Web Components and JSDoc is underrated. Not for everyone but should be more in the running for a modern frontend stack than it is.


On the one hand I can see the appeal of not having a build step. On the other, given how many different parts of the web dev pipeline require one, it seems very tricky to get all of your dependencies to be build-step-free. And with things like HMR the cost of a build step is much ameliorated.


I haven't run into any steps that require one, there's always alternatives.

Do you have anything specific in mind?


Anything that uses JSX syntax, for instance.

Any kind of downleveling, though that's less important these days; most users only need polyfills, and new syntax features like `using` are not widely used.

Minification and bundling for the web are still somewhat necessary, and ESM is still tricky to use without assistance.

None of these are necessary. But if you use any of them you've already committed to having a build step, so adding in a typescript-erasure step isn't much extra work.
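On the ESM point, an import map now covers much of the "bundler resolves bare specifiers" use case directly in the browser. A sketch (the CDN URL is illustrative, not a recommendation):

```html
<script type="importmap">
  {
    "imports": {
      "lit-html": "https://unpkg.com/lit-html@3/lit-html.js"
    }
  }
</script>
<script type="module">
  import { html, render } from "lit-html";
  render(html`<p>Hello from bare-specifier ESM</p>`, document.body);
</script>
```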


If there is one thing I don't miss when using Web Components, it's JSX. lit-html is much, much better.


It's such a lovely and simple stack.

No Lit Element or Lit or whatever it's branded now, no framework just vanilla web components, lit-html in a render() method, class properties for reactivity, JSDoc for opt-in typing, using it where it makes sense but not junking up the code base where it's not needed...

No build step, no bundles, most things stay in light dom, so just normal CSS, no source maps, transpiling or wasted hours with framework version churn...

Such a wonderful and relaxing way to do modern web development.
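A minimal sketch of that stack, assuming lit-html is loaded from a CDN (URL illustrative) and rendering into light DOM:

```html
<script type="module">
  import { html, render } from "https://unpkg.com/lit-html@3/lit-html.js";

  class ClickCounter extends HTMLElement {
    count = 0; // plain class property drives reactivity

    connectedCallback() { this.update(); }

    update() {
      // Render into the element itself (light DOM), so normal page CSS applies.
      render(html`
        <button @click=${() => { this.count++; this.update(); }}>
          Clicked ${this.count} times
        </button>`, this);
    }
  }
  customElements.define("click-counter", ClickCounter);
</script>
<click-counter></click-counter>
```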


I love it. I've had a hard time convincing clients it's the best way to go, but my side projects, recent and future, will always start with this frontend stack and nothing more until fully necessary.


This discussion made me happy to see more people enjoying the stack available in the browser. I think over time, what devs enjoy using is what becomes mainstream, React was the same fresh breeze in the past.


I recently used Preact and HTM for a small side project, for the JSX-like syntax without a build step.


I have not written a line of JavaScript that got shipped as-is in probably a decade. It always goes through Vite or Webpack. So the benefit of JS without a build step is of no benefit to me.


Dare to dream and be bold.

Seriously, start a project and use only the standards. You'll be surprised how good the experience can be.


Webcomponents are a pain in the ass to make, though. That is, sufficiently complex ones. I wish there was an easier way.


I've built Solarite, a library that's made vanilla web components a lot more productive IMHO. It allows minimal DOM updates when the data changes. And other nice features like nested styles and passing constructor arguments to sub-components via attributes.

https://github.com/Vorticode/solarite


It's ok now, at least for me. There are still challenges around theming and styling because of styling boundaries (which makes Web Components powerful, but still). A part of it is about tooling, which can be easier to improve.

Try my tiny web components lib if you want to keep JSX but not the rest of React: https://github.com/webjsx/magic-loop


I find Web Components aren't as much of a pain to write if you ignore the Shadow DOM. You don't need the Shadow DOM, it is optional. I don't think we are doing ourselves too many favors in how many Web Component tutorials start with or barrel straight into the Shadow DOM as if it was required.
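A minimal custom element along those lines, with no Shadow DOM; it renders into itself, so global styles and ordinary dev tools apply:

```html
<script>
  class HelloBadge extends HTMLElement {
    connectedCallback() {
      // No attachShadow() call; this is plain light DOM.
      this.textContent = `Hello, ${this.getAttribute("name") ?? "world"}!`;
    }
  }
  customElements.define("hello-badge", HelloBadge);
</script>
<hello-badge name="HN"></hello-badge>
```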


They could have better ergonomics, and I hope a successor that does comes out, but they're really not that bad.


Web components need two things to be great without external libraries (like lit-html):

- signals, which is currently Stage 1 https://github.com/tc39/proposal-signals

- And this proposal: https://github.com/WICG/webcomponents/issues/1069 which is basically lit-html in the browser


It's a shame Surplus (Adam Haile's, not my successor to it) isn't cited, nor is he mentioned, given that at least two of the listed frameworks were heavily and directly inspired by his work. S.js is probably one of the most incredible JavaScript libraries I've used and should be the reference for a signal API, in my opinion.


Svelte has a pretty nice support for this via https://svelte.dev/docs/svelte/custom-elements

It's not a no-build option though.


Agreed on native HTML+CSS+JSDoc. An advantage in my use-cases is that built-in browser dev tools become fluid to use. View a network request, click to the initiator directly in your source code, add breakpoints and step without getting thrown into library internals, edit code and data in memory to verify assumptions & fixes, etc.

Especially helpful as applications become larger and a debugger becomes necessary to efficiently track down and fix problems.


This. Or use ts-blank-space if you prefer TypeScript over JSDoc. That's what we do in https://mastrojs.github.io


TS is worth the build step.


JSDoc is TypeScript.
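More precisely, the TypeScript checker understands JSDoc annotations in plain .js files; running something like `tsc --checkJs --noEmit` (or adding `// @ts-check`) type-checks them with no build output. A small sketch:

```javascript
// @ts-check

/**
 * @param {number} subtotal
 * @param {number} taxRate fraction, e.g. 0.08 for 8%
 * @returns {number} total rounded to cents
 */
function withTax(subtotal, taxRate) {
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}

/** @type {{name: string, price: number}[]} */
const cart = [{ name: "widget", price: 9.99 }];

console.log(withTax(cart[0].price, 0.08)); // 10.79
```

The file stays runnable JavaScript as-is; the types are comments that tsc and editors enforce.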


It is TypeScript in the same way my rear end is the Grand Canyon: they are somewhat isomorphic but one is much less pleasant to look at.


I was already doing that in 2010, with the JSDoc tooling in Eclipse and Netbeans back then.

However I don't get to dictate fashion in developer stacks.


> Modern HTML/CSS with Web Components and JSDoc is underrated.

I've been a front end developer for 25 years. This is also my opinion.


You don't need a build step anymore with TypeScript since Node 24.


I'm referring to client-side javascript.


Why? Is the half a second for HMR taking up too much of your day?


No, because layers of abstraction come at a cost and we have created a temple to the clouds piled with abstractions. Any option to simplify processes and remove abstractions should be taken or at least strongly considered.

Code written for a web browser 30 years ago will still run in a web browser today. But what guarantee does a build step have that the toolchain will still even exist 30 years from now?

And because modern HTML/CSS is powerful and improving at a rapid clip. I don't want to be stuck on non-standard frameworks when the rest of the world moves on to better and better standards.


> Code written for a web browser 30 years ago will still run in a web browser today.

Will it? My browser doesn't have document.layers (Netscape). It seems to still have document.all (MSIE), but I'm not sure it's 100% compatible with all the shenanigans from pre-DOM times, as it's now mapped to DOM elements.


The Space Jam website from 1996 still renders perfectly almost 30 years later.

https://www.spacejam.com/1996/

Those (document.layers and document.all) were both vendor-specific; neither was part of a W3C standard. I don't recommend ever writing vendor-specific code.

The W3C and web standards have generally won, so it's easier than ever to write to the standard.


Having all your code go through a multi-step process that spits out 30 different files makes it impossible to know what’s really happening, which I’m uncomfortable with.


Came here to write this exact sentiment. Not everything needs a massive build pipeline.


Everyone's suggestions feel designed to frustrate me. Instructions on how to cajole and plead that seem more astrology than engineering.

This is the pattern I settled on about a year ago. I use it as a rubber-duck / conversation partner for bigger picture issues. I'll run my code through it as a sanity "pre-check" before a pr review. And I mapped autocomplete to ctrl-; in vim so I only bring it up when I need it.

Otherwise, I write everything myself. AI written code never felt safe. It adds velocity but velocity early on always steals speed from the future. That's been the case for languages, for frameworks, for libraries, it's no different for AI.

In other words, you get better at using AI for programming by recognizing where its strengths lie and going all in on those strengths. Don't twist up in knots trying to get it to do decently what you can already do well yourself.


Same. AI is a good tool to use as a sounding board and conversation partner.

I only access claude and others using my browser - I give it a snippet of my code, tell it what exactly I want to do and what my general goal is, then ask it to give me approaches, and their pros and cons.

Even if someone wants to use AI to code for them, its still better to do the above as a first step imo. A sort of human in the loop system.

> It adds velocity but velocity early on always steals speed from the future. That's been the case for languages, for frameworks, for libraries, it's no different for AI.

Completely agree. I'm seeing this in my circle and workplace. My velocity might be a tad slower than the rest of my peers when you compare it per ticket. But my long-term output hasn't changed and, interestingly, neither has anyone else's.

As an aside, I like your system of completely removing autocomplete unless you need it - maybe something like that would finally get me to enable AI in my IDE.


This is the only approach that seems even remotely reasonable

“Prompt engineering” just seems dumb as hell. It’s literally just an imprecise nondeterministic programming language.

A couple of years ago, we all would have said that was a bad language and moved on.


A few years ago we didn't have an imprecise nondeterministic programming language that would allow your mom to achieve SOTA results on a wide range of NLP tasks by asking nicely, or I'm sure people would have taken it.

I think a lot of prompt engineering is voodoo, but it's not all baseless: a more formal way to look at it is aligning your task with the pre-training and post-training of the model.

The whole "it's a bad language" refrain feels half-baked when most of us use relatively high level languages on non-realtime OSes that obfuscate so much that they might as well be well worded prompts compared to how deterministic the underlying primitives they were built on are... at least until you zoom in too far.


I don't buy your last paragraph at all, I'm afraid. Coding languages, even high-level ones, are built upon foundations of determinism, and they are concise and precise: a short way to describe, very precisely, a bunch of rules and state.

Prompting is none of those things. It is a ball of math we can throw words into, and it approximates meaning and returns an output with randomness built in. That is incredible, truly, but it is not a programming language.


Eh, how modern technology works is not really the part I'm selling: that's just how it works.

Coding languages haven't been describing even a fraction of the rules and state they encapsulate since what? Punch cards?

It wasn't long until we started to rely on exponential number of layered abstractions to do anything useful with computers, and very quickly we traded precision and determinism for benefits like being concise and easier to reason about.

-

But also, the context here was someone calling prompting a "imprecise nondeterministic programming language": obviously their bone is the "imprecise nondeterministic" part, not distilling what defines a programming language.

I get it doesn't feel warm and fuzzy to the average engineer, but realistically we were hand engineering solutions with "precise deterministic programming languages", they were similarly probabilistic, and they performed worse.


Name a single programming language that is probabilistic in any way?


- A text prompt isn't probabilistic, the output is.

- https://labs.oracle.com/pls/apex/f?p=LABS:0:5033606075766:AP...

- https://en.wikipedia.org/wiki/Stan_(software)

- https://en.wikipedia.org/wiki/Probabilistic_programming

I explained in the most clear language possible why a fixation on the "programming language" part of the original comment is borderline non-sequitur. But if you're insistent on railroading the conversation regardless... at least try to be good at it, no?


I skimmed your comment since you were making the strange comparison that modern coding is probabilistic to the degree that prompting is, so I see now you weren't the one to say it's "probabilistic programming". But you are still trying to say that normal programming is basically probabilistic in some relevant way, which I think is quite ridiculous. I don't see how anything about normal engineering is probabilistic other than the mistakes people make.


"I didn't do the absolute bare minimum and read the comment I replied to, so here's 100 words excusing that."


Do you mean, like, scripting languages? Are the underlying primitives C and machine language? "Might as well be well worded prompts" is the overstatement of the century; any given scripting language is far closer to those underlying layers than it is to using natural language with LLMs.


Sure doesn't seem like it. https://x.com/jarredsumner/status/1999317065237512224

And forget scripting languages, take a C program that writes a string to disk and reads it back.

How many times longer does it get the moment we have to ensure the string was actually committed to non-volatile NAND and actually read back? 5x? 10x?

Is it even doable if we have to support arbitrary consumer hardware?


You're stretching really hard here to try and rationalize your position

First of all, I pick the hardware I support and the operating systems. I can make those things requirements when they are required.

But when you boil down your argument, it's that because one thing may introduce non-determinism, then any degree of non-determinism is acceptable.

At that point we don't even need LLMs. We can just have the computer do random things.

It's just a rehash of the infinite monkeys with infinite type writers which is ridiculous


No the point was quite clear:

> A few years ago we didn't have an imprecise nondeterministic programming language that would allow your mom to achieve SOTA results on a wide range of NLP tasks by asking nicely, or I'm sure people would have taken it.

But that (accurate) point makes your point invalid, so you'd rather focus on the dressing.


We still don't have that programming language (although "SOTA" and "wide range of NLP tasks" are vague enough that you can probably move the goalposts into field goal range).


This comment is written way too adversarially for someone who doesn't know what NLP is.


> nondeterministic programming language that would allow your mom to achieve SOTA results

I actually think it's great for giving non-programmers the ability to program to solve basic problems. That's really cool and it's pretty darn good at it.

I would dispute that you get SOTA results.

That has never been my personal experience. Given that we don't see a large increase in innovative companies spinning up now that this technology is a few years old, I doubt it's the experience of most users.

> The whole "it's a bad language" refrain feels half-baked when most of us use relatively high level languages on non-realtime OSes that obfuscate so much that they might as well be well worded prompts compared to how deterministic the underlying primitives they were built on are... at least until you zoom in too far.

Obfuscation and abstraction are not the same thing. The other core difference is the precision and the determinism both of which are lacking with LLMs.


It's technically deterministic, but it feels nondeterministic in chatbots since tokens are randomly sampled (temp > 0) and input is varied. Using the right prompt makes the model perform better on average, so it's not completely dumb.

I like task vectors and soft prompts because I think they show how prompt engineering is cool and useful.

https://arxiv.org/pdf/2310.15916

https://huggingface.co/docs/peft/conceptual_guides/prompting
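The sampling point is easy to see with a toy softmax-with-temperature (a sketch, not how any particular chatbot is configured): the forward pass is a fixed function, and temperature only controls how flat the distribution you then sample from is.

```javascript
// Softmax with temperature: same logits in, same probabilities out
// (deterministic); randomness only enters when you sample from them.
function softmax(logits, temp) {
  const scaled = logits.map((x) => x / temp);
  const m = Math.max(...scaled);               // subtract max for stability
  const exps = scaled.map((x) => Math.exp(x - m));
  const z = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / z);
}

// Low temperature: nearly all mass on the argmax, so sampling is
// effectively deterministic. High temperature: the distribution flattens.
console.log(softmax([2, 1, 0], 0.1).map((p) => p.toFixed(3)));
console.log(softmax([2, 1, 0], 2.0).map((p) => p.toFixed(3)));
```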


> It's technically deterministic, but it feels nondeterministic in chatbots since tokens are randomly sampled

Are you not aware the random sampling makes something non-deterministic?


I'm saying LLMs are deterministic and because of that, prompt engineering can be effective. You knew what I was trying to say, but chose to ignore it.

You should follow the HN Guidelines. I'm trying to have a discussion, not a snarkfest.

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.


I don’t believe that when he wrote that, he was using his own intelligence


At least I'm adding to the discussion.


People who disagree with you add to the discussion too


You know what else is nondeterministic? "Project Management"

You tell humans to make code, they make code.

How do you check it's correct and does what it should?


Humans have agency and accountability, duty of care, etc.


How does a random programmer have "duty of care"?

They're paid to ship a feature, why would they spend extra time making something perfect? If it breaks they just fix it - and get paid doing it.

Not everyone writing code in a corporation is a Code Artisan who handcrafts every single character with thought and precision like an old Japanese craftsman. They just write code that does about what it should, push a PR branch, then grab the next task from the corporate Jira project.


> How does a random programmer have "duty of care"?

Are there jobs that don't care if you don't take due care when writing code? I'd like me one of those.


AIstrology is a term I can get behind.

I think that really captures what I find most irksome about the fanaticism. It's not prompt engineering, it's a statistical 8-ball being shaken until useful output appears.

Just as with any pseudoscience, it can offer a glimmer of usefulness by framing problems in a different way. Just be cautious of who's offering that enlightenment and how much money you may be paying for it.


> how to cajole and plead

When a new person joins the team, you need to tell them the local coding standard. I don't see why people expect LLMs to work out of the box instead. The difference is you have to do it at every exchange, as LLMs are stateless.

But yeah, I mostly agree with the rest. LLMs work best at a very low level, method by method, where you can watch like a hawk that they don't introduce silent failure conditions, and at a very high level, as a reasoning engine for system design. They still don't do a good job of implementing whole components.


Both human and AI should be able to understand the "way we do things around here" by reading the existing code. I could spend an hour telling someone how to write idiomatic code, and they will forget all of it until they actually do some work and see the codebase.

When Claude reads a significant portion of my codebase into context it should be suggesting idiomatic changes. Even if it doesn't initially read a bunch of the codebase to figure out a solution, it should definitely then be trained to do so explicitly to understand the coding standards. Just like a decent dev would do on their own.


If you were right that code is so self-evident, projects wouldn't have a coding.md or a contributing.md.


> I don't see why people expect llm to work out of the box instead

because it's not a person? because you have to do it all the time? because of the way literally all other software works?


The coding standard is quality code, and one should bring it with them coming into the company. And if you mean linter and formatting rules: if the company is not young, then its elders had a fist fight to settle on one standard once and for all, zipped it into a file, and everybody just uses it.


I don't think anyone would say that the LLMs will produce better code, but they can do it much faster.

I personally did not hit the wall where the use of LLMs would slow me down in the long run.

It has been smooth sailing most of the time, and getting better with newer models.

For me it comes down to "know what you are being paid for".

I'm not a library maintainer. My code will not be scrutinized by thousands of peers. My customer will be happy with faster completion that does the same thing as the more perfect hand-crafted version.

Welcome to the industrial revolution in programming. This is the way of things.


If you've seen Scrubs...


Also important is just how fast cheap hardware has gotten, which means vertical scaling is extremely effective. People could get a lot farther than they imagine with SQLite in WAL mode on a single box with an NVMe drive. It feels like our intuition has not caught up with the material reality of current hardware.

And now that there are solid streaming backup systems, the only real issue is redundancy not scaling.
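For anyone trying this, a typical single-box SQLite setup is just a few pragmas (a sketch; tune the values for your workload):

```sql
PRAGMA journal_mode = WAL;    -- readers no longer block the writer
PRAGMA synchronous = NORMAL;  -- common pairing with WAL; syncs at checkpoints
PRAGMA busy_timeout = 5000;   -- wait up to 5s for a lock instead of erroring
```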

