Hacker News | ritlo's comments

I've seen another headline today suggesting the UK might drop to a 3-day workweek to conserve fuel.

Like damn, between reduced work-weeks and the prospect of wrecking our government-entwined spyvertising parasites, maybe the war was a good idea...


I never use them to replace a comma, certainly, and only rarely a colon.

I find parentheses often awkward or too heavy, so may use the m-dash to replace those. Especially if what might have been a parenthetical is going to terminate a sentence, an m-dash is much cleaner, as it doesn't need a closing mark, and a terminating paren right before a period looks awful. For long potential-parentheticals that do terminate before the end of the sentence, the m-dash takes up more visual space and marks the beginning and end more visibly, making for easier scanning. One probably ought to rewrite to avoid parenthetical statements most of the time in the first place, when there's time, but sometimes they're desirable for stylistic reasons, or just because one lacks the time to improve a draft.

I also use it as a "classier" version of the ellipsis. It doesn't replace every use, but it replaces very-casual, colloquial use of that mark as a kind of harder-comma. Looks much better, I think, and serves the same purpose.

As for the semicolon, I'd never shy away from the semicolon when I can get away with it, but use them rarely nonetheless. I don't think I ever replace them with the m-dash, though. As inline list separators they're great and an m-dash would be an awful replacement, while as soft-periods, they're fine, though most of the time I just use a full period—but not an m-dash, not if a semicolon could have worked.

I do think they're more at-home in, say, fiction than technical writing, but I like having them in my toolbox in any case.


Mine's tracking it complete with a leaderboard (LOL) and it's been suggested to me that it'd be in my best interest not to be too low on that list, so I suspect in the back half of the year some sterner conversations and/or pink-slips are going to be coming the way of those who've not caught on that they need to at least be sending some make-work crap to their LLMs every day, even if they immediately throw the output in the metaphorical garbage bin.

It's basically an even-more-ridiculous version of ranking programmers by lines-of-code/week.

What's especially comical is I've seen enormous gains in my (longish, at this point) career from learning other tools (e.g. expanding my familiarity with Unix or otherwise fairly common command line tools) and never, ever has anyone measured how much I'm using them, and never, ever has management become in any way involved in pushing them on me. It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or they'll be counting how many breakpoints they're setting in their debuggers per week. WTF? That kind of thing should be leads' and seniors' business, to spread and encourage knowledge and appropriate tool use among themselves and with juniors, to the degree it should be anyone's business. Seems like yet another smell indicating that this whole LLM boom is built on shaky ground.


> It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or they'll be counting how many breakpoints they're setting in their debuggers per week.

That's because they weren't sold regex as a service by a massive company, while also being reassured by everyone that any person not using at least one regular expression per line of code is effectively worthless and exposes their business to a threat of immediate obsolescence and destruction. They finally found a way to sell the same kind of FOMO to a majority of execs in the software industry.


What's stopping someone from just having the AI churn out garbage all day long? Or like, put your AI into plan mode with extra high reasoning and have it churn for 10 minutes to make a microscopic change in some source file. Repeat ad infinitum.


> What's stopping someone from just having the AI churn out garbage all day long?

In my case it's morality.


Interesting consideration, 'mandates' and all. Definitely in camp 'toss the output', here. I think I'll see 'morality' leaving when $EMPLOYER fires 'professional discretion'... forcing usage and, ultimately, debasing the position.

edit: Peer said it well, IMO. The consequences aren't really yours. Also: something, something, Goodhart's Law.


I would argue that making the company experience the consequences of its choice of metrics / mandates is in fact a moral imperative.


> even if they immediately throw the output in the metaphorical garbage bin.

Gotta be careful if you do that tho; e.g. Copilot can monitor 'accept' rate, so at bare minimum you'd have to accept the changes, then immediately back them out...


In a couple years, we'll have office workspaces equipped with EEG helmets that you must wear while working, to measure your sentiment upon seeing LLM-generated code. The worst performers get the boot, so you better be happy!


If you use AI to back it out, sounds like you’ve found an infinite feedback loop for those metrics.

Did industrial psychology die out as a field? Why do we keep reinventing the wheel when it comes to perverse incentives? It’s like working on a scrum team where the big bosses expect the average velocity to go up every sprint, forever, but the engineers are the ones deciding the point totals on tickets.


I wonder if Copilot can write a commit and backout routine for them.
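For what it's worth, such a routine is only a few lines of git. This is a hypothetical sketch (filenames and messages made up) assuming a naive accept-rate metric counts an "accepted" suggestion once it lands in a commit:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name dev

echo "original code" > main.py
git add main.py && git commit -qm "baseline"

# "accept" the AI suggestion so the metric registers an acceptance...
echo "llm-generated code" > main.py
git commit -qam "accept suggestion"

# ...then immediately back it out, leaving the tree exactly as before
git revert --no-edit HEAD
```

Goodhart's Law in three commands: the acceptance counter ticks up, the codebase doesn't change.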


From a management perspective I would be highly skeptical of token leaderboards. You are incentivizing people to piss away company money for uncertain rewards.

I mean… throw some docs into the context window, see it explode. Repeat that a few times with some multi-step workflows. Presto, hundreds of dollars in “AI” spending accomplishing nothing. In olden days we’d just burn the cash in a waste paper basket.


My company doesn’t enforce AI usage but for those who choose to use it, every month they highlight the biggest users. It’s always non-tech people who absolutely don’t understand how LLMs work and just run a single chat for as long as possible before our system cuts them off and forces them into a new chat context.


"Can't fix stupid"


Vibe code a side project at work. I’m willing to bet the tools aren’t mapping the code contribution locations to business impact (hard problem).


A related Dirty Secret that's going to become clear from all this is that a very large proportion of code in the wild (yes, even in 2026—maybe not in FAANG and friends, IDK, but across all code that is written for pay in the entire economy) has limited or no automated test coverage, and is often being written with only a limited recorded spec that's usually fleshed out only to the degree needed (very partial) as a given feature is being worked on.

What do the relatively hands-off "it can do whole features at a time" coding systems need to function without taking up a shitload of time in reviews? Great automated test coverage, and extensive specs.

I think we're going to find there's very little time-savings to be had for most real-world software projects from heavy application of LLMs, because the time will just go into tests that wouldn't otherwise have been written, and much more detailed specs that otherwise never would have been generated. I guess the bright-side take of this is that we may end up with better-tested and better-specified software? Though so very much of the industry is used to skipping those parts, and especially the less-capable (so far as software goes) orgs that really need the help and the relative amateurs and non-software-professionals that some hope will be able to become extremely productive with these tools, that I'm not sure we'll manage to drag processes & practices to where they need to be to get the most out of LLM coding tools anyway. Especially if the benefit to companies is "you will have better tests for... about the same amount of software as you'd have written without LLMs".

We may end up stuck at "it's very-aggressive autocomplete" as far as LLMs' useful role in them, for most projects, indefinitely.

On the plus side for "AI" companies, low-code solutions are still big business even though they usually fail to deliver the benefits the buyer hopes for, so there's likely a good deal of money to be made selling companies LLM solutions that end up not really being all that great.


Re: productivity, if LLMs are a genuine boost on 1/3 of the work, neutral on 1/3, and actually worse on 1/3, it's likely we aren't really seeing performance improvements because 1) people are using them for everything and 2) we're still learning how best to use them.

So I expect over time we will see genuine performance improvements, but Amdahl's law dictates it won't be as much as some people and CEOs are expecting.
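The rough three-way split above plugs straight into a generalized Amdahl's-law calculation. A sketch (the 2x boost and 0.8x slowdown factors are made-up numbers for illustration):

```python
def overall_speedup(segments):
    """Generalized Amdahl's law: segments is a list of
    (fraction_of_total_work, local_speedup) pairs summing to 1.
    Total time is the sum of each fraction divided by its speedup."""
    return 1.0 / sum(w / s for w, s in segments)

# rough split from the comment: 1/3 genuinely boosted (say 2x),
# 1/3 neutral, 1/3 actually made worse (say 0.8x)
mixed = overall_speedup([(1/3, 2.0), (1/3, 1.0), (1/3, 0.8)])
print(f"net speedup: {mixed:.2f}x")  # ~1.09x, nowhere near a 2x headline
```

Even if the "worse" third disappeared entirely as people learn where not to use the tools, doubling a third of the work caps the overall gain at 1.5x.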


> better-specified software

Code is the most precise specification we have for interfacing with computers.


Sure, but if you define the code as the only spec, then it is usually a terrible spec, since the code itself specifies bugs too. And one of the benefits of having a spec (or tests) is that you have something against which to evaluate the program in order to decide if its behavior is correct or not.

Incidentally, I think in many scenarios, LLMs are pretty great at converting code to a spec and indeed spec to code (of equal quality to that of the input spec).


There are some cases where AI is generating binary machine code, albeit small amounts. What do we have when we don't have the code?


Machine code is still code, even if the representation is a bit less legible than the punch cards we used to use.


You’re missing the point of a spec


The spec is as much for humans as it is the machine, yes?


Spec should be made beforehand and agreed on by stakeholders. It says what the software should do. So it’s for whoever is implementing, modifying, and/or testing the code. And unfortunately devs have a tendency toward poor documentation


Software development is only 70ish years old and somehow we have already forgotten the very very first thing we learned.

"Just get bulletproof specs that everyone agrees on" is why waterfall style software development doesn't work.

Now suddenly that LLMs are doing the coding, everyone believes that changes?


I’m confused, are you saying that making a design plan and high-level spec beforehand doesn’t work?

I've seen it happen. Things that seem reasonable on a spec paper, then you go to implement and you realize it's contradictory or utter nonsense.

I mean, yeah, it happens all the time, but you have to start somewhere. I worked in safety-critical self-driving firmware and RTL verification before that, so documentation was a necessity

Bingo. Hopefully there are some business opportunities for us in that truth.


> because the time will just go into tests that wouldn't otherwise have been written

Writing tests to ensure a program is correct is the same problem as writing a correct program.

Evaluating conformance is a different category of concern from ensuring correctness. Tests are about conformance not correctness.

Ensuring correct programs is like cleaning in the sense that you can only push dirt around, you can't get rid of it.

You can push uncertainty around, but you can't eliminate it.

This is the point of Gödel's theorem. Shannon's information theory observes similar aspects for fidelity in communication.

As Douglas Adams noted: ultimately you've got to know where your towel is.


A competent programmer proves the program he writes correct in his head. He can certainly make mistakes in that, but it’s very different from writing tests, because proofs abstract (or quantify) over all states and inputs, which tests cannot do.


> senior engineer would hate their jobs reviewing more code from their teammates

Jesus, yes. Maybe I'm an oddball but there's a limit to how much PR reviewing I could do per week and stay sane. It's not terribly high, either. I'd say like 5 hours per week max, and no more than one hour per half-workday, before my eyes glaze over and my reviews become useless.

Reviewing code is important and is part of the job but if you're asking me to spend far more of my time on it, and across (presumably) a wider set of projects or sections of projects so I've got more context-switching to figure out WTF I'm even looking at, yes, I would hate my job by the end of day 1 of that.


If we can't spend that much time reviewing code, what are we exactly doing with this AI stuff?

I don't disagree, I think reviewing is laborious, I just don't see how this causes any unintended consequences that aren't effectively baked into using an AI assistant.


Yes, this is part of why AI tools are bad

Code Review is hard and tiring, much moreso than writing it

I've never met anyone who would be okay reviewing code for their full time job


The only way to see the kinds of speed-up companies want from these things, right now, is to do way too little review. I think we're going to see a lot of failures in a lot of sectors where companies set goals for reduced hours on various things they do, based on what they expected from LLM speed-ups, and it will have turned out the only way to hit those goals was by spending way too little time reviewing LLM output.

They're torn between "we want to fire 80% of you" and "... but if we don't give up quality/reliability, LLMs only save a little time, not a ton, so we can only fire like 5% of you max".

(It's the same in writing, these things are only a huge speed-up if it's OK for the output to be low-quality, but good output using LLMs only saves a little time versus writing entirely by-hand—so far, anyway, of course these systems are changing by the day, but this specific limitation has remained true for about four years now, without much improvement)


So will it turn out that actually writing code was never the time sink in the first place?

That has always been my feeling. Once I really understand what I need to implement, the code is the easy part. Sure it takes some time, but it's not the majority. And for me, actually writing the code will often trigger some additional insight or awareness of edge cases that I hadn't considered.


"So will it turn out that actually writing code was never the time sink in the first place?"

Of course it wasn't! Do you think people can envision the right objects to produce all the time? Yeah, we have a lot of Steve Jobses walking around lol.

As you say, there's 'other stuff' that happens naturally during the production process that add value.


At least in my experience at Amazon, it wasn't.

If I wanted, I could queue up weeks' worth of review in a couple days, but that's not getting the whole team more productive.

Spending more time on documents and chatting proved much more useful for getting more output overall.

Even without LLMs, I've been nearby and on teams where the review burden from developers building away-team code was already so high that you'd need to bake an extra month into your estimates to get somebody to actually look.


> actually writing the code will often trigger some additional insight or awareness of edge cases that I hadn't considered.

Thinking through making.


My prediction is that a Concorde-like incident is going to shatter trust and make people rethink their expectations of what LLMs are presently capable of.

Essentially, something big has to happen that affects the revenue/trust of a large provider of goods, stemming from LLM use.

They won't go away entirely. But this idea that they can displace engineers at a high rate will.


Assuming you mean this crash [0], it reads to me more like a confluence of bad events versus a big fundamental design flaw in the Therac-25 mold.

I feel the current proliferation of LLMs is going to resemble the asbestos problem: cheap miracle thingy, overused in several places, with slow, gradual regret and chronic harms/costs. Although I suppose the "undocumented nasty surprise" aspect would depend on adoption of local LLMs. If it's a monthly subscription to cloud stuff, people are far less likely to lose track of where the systems are and what they're doing.

[0] https://en.wikipedia.org/wiki/Air_France_Flight_4590


Like bombing a building full of little kids? Oops too late...


> Farmed animals could live happy, healthy lives and then be culled in a humane way.

> The problem is that it costs slightly more and our society is more concerned with cost than animal suffering.

IDK about other livestock, but this definitely doesn't hold for chickens, one of the cheaper meat sources in the US. Switching to breeds that could live more than a very few weeks(!) before getting too overweight to walk would increase the price by far more than "slightly more", and there's no hope of anything fitting any sane definition of "humane chicken farming" without that step.

I suspect it's also true for pigs, not necessarily the "we bred them so wrong that their very existence is a crime against god and nature" part but that the price increase from a "healthy, happy life" would be a lot larger than "slightly more". Maybe also cows, dunno about that one.


It's not like chickens chose this way of life. Such breeds were developed for the specific purpose of meat, with no regard for their wellbeing. Don't shame the chickens for what human bastards do.


Oh, sure, our fault, but fact remains that modern meat-chicken breeds are so incredibly fucked up that it’s not really possible to humanely farm them. They’re like that because of what we did, yes, but step one toward comprehensive humane-farming for chickens would have to be “let those breeds entirely die out” regardless of who’s at fault (and it ain’t the chickens).


Agreed on the strollers. There are medical reasons sometimes or whatever I’m sure, but that doesn’t explain most of them. We pushed ours to walk outdoors as much as possible as soon as they could walk at all, otherwise you end up with the 6yo with an iPad in a stroller at the zoo, wtf. Can’t let them get used to anything you don’t want to keep doing for a looooong time.

> It's ok to send your kid to school/daycare with holes in their socks.

My kids wreck clothes. Others (like the people we buy them from, used) seem to fare better but each of my kids probably puts four holes in clothes per week, not even considering stains. Sure you can mend them but not when you discover the hole two minutes before you have to be out the door.

> Kids' clothes are forever dirty

Mine never, ever were as a kid—but I had one homemaker parent. You can do a way better job at this stuff while also feeling less-stressed and overall doing less total work under those circumstances. Between paid work and non-fun kid stuff / housekeeping my wife and I put in probably 70 hours a week, each, and don’t keep up as well as my mom did (granted, we have more kids too, but still). Coordination costs and having to chop the work into little bits between other things makes it way less efficient, and it can be hard to get to everything quickly. Things that go wrong Monday may not get addressed until the weekend, where my mom would have had it taken care of within an hour.


One of the worst habits distinctive to online discussion-board writing (especially the sorts of places with lots and lots of people and where it's fairly hard to get permanently kicked out—like here) is too much hedging and over-specifying to try to head off shitposting by bad or bad-faith readers. It's all over forum posts, and it's poor writing, but without moderation that slaps down responses based on plain mis-reading you have to write that way, or your post will spawn all kinds of really stupid tangent strings of posts (and they still do anyway, sometimes). And, yes, the excessive and too-close-together repetitiveness you mention is part of that.

The result is that a ton of web forum/social-media posting would, in any other context, be fairly poor writing (even if it's otherwise got no problems) simply because of the extra crap and contortions required to minimize garbage posts by poor readers who are, themselves, allowed to post to the same medium.

This is in addition to, though not wholly separate from, the tendency toward combativeness in online posting.


I totally agree with this. I would add that it's well beyond the discussion boards. It's probably most clear there and it's well possible we learned it there and then took it into our social interactions everywhere, but the majority of my irl interactions—except with my closest friends—are sorta like this. Sometimes I think it's ADHD, other times I think it could be any number of things, but I think to say anything that isn't dead simple (or in dead agreement with the other person), you need a few sentences. Often, you need to hear the third sentence before the first will make sense. But if you get distracted by the first one or can't suspend your disagreement enough to get to the third you will think the person is mistaken. You'll think that about both their first point and the larger one, which you didn't really hear or even get to but thought you did. So the speaker does the hedging each sentence in hopes of getting to the third (or whatever) sentence.


To add to this: another sign of posting on online boards is starting your comments with "I agree" because otherwise the other person might default to assuming you are disagreeing (as is the norm for replies), leading to a comment chain of people violently agreeing with each other without realizing it


>too much hedging and over-specifying to try to head off shitposting by bad or bad-faith readers.

Yeah, but if the OP doesn't do that and you confront their argument, they can retreat into definitions and ambiguity without addressing your rebuttal. I think it's good manners to be hyper-specific, particularly on HN, where there tend to be a lot of martian-brained people who need it to engage with you. The fuzziness just won't do.


This is all communication, no?

If people do not share the same context, then they will come up with different interpretations of the same content.

In communities with more homogenous understanding of the context, people are able to get into the details more effectively.

Those communities tend to also be impervious to outsiders, or newbies, because the use of terms/jargon that speeds up conversation abound.


No, this goes beyond that. A well-written article or book doesn’t need to be padded with junk to cater to bad readers, or to preempt trolls, because they can’t scrawl all over it such that it disrupts others’ experience. You have to go to e.g. the Amazon reviews to find people complaining that an author didn’t address something that they very definitely did, or claimed something they certainly did not, that stuff doesn’t show up on the page in footnotes or turn into flame wars on the page where everyone sees it.


Despite being a different kind of writing, there are some interesting parallels with the article in what you wrote here


Same set-up here. Had tried many times over something like 15 years to get Kodi/XBMC working well. Nobody else could/would ever use it (the UI is so bad) and I bet I spent about 50% as much time screwing around to set it up and maintain it as I ever spent watching stuff on it.

Jellyfin, at first with the official client on Roku, then with Infuse on Apple TV when H.265 hardware decoding started becoming a requirement (my server is too weak to transcode), has been everything I wanted Kodi to be. Web interface is great, I share it with a couple friends over Tailscale. Wife and kids and visitors use Infuse, no problem, no complaints, no help needed. My use-to-fiddling ratio is probably literally 100x better than with Kodi. I have spent overall less total time messing with it than with Kodi, even including figuring out solutions for things like YouTube videos.

