For quite a while, I liked using LLMs to refine my writing and fix my grammar issues, but my colleagues and professors reminded me that it was way too obvious. They said they could tolerate some mistakes in my words, but had no tolerance for AI-generated content.
Thanks for putting this so nicely! We'd much rather hear you in your own voice, and the cost of a few mistakes is far less than the cost of losing that.
It's worse than relinquishing: you get a new voice, that of a person who needs an LLM to talk.
I have similar reservations about code formatters: maybe I just haven't worked with a code base with enough terrible formatting, but I'm sad when programmers lose the little voice they have. Linters: cool; style guidelines: fine. I'm cool with both, but the idea that we need to strip every character of junk DNA from a codebase seems excessive.
On code-formatters, I don't think it's so clear-cut, but rather an "it depends".
For code that is meant to be an expression of programmers, meant to be art, then yes code formatters should be an optional tool in the artist's quiver.
For code that is meant to be functional, one of the business goals is uniformity, so that the programmers working on the code can be replaced like cogs and there is no individuality or voice. In that regard, yes, code formatters are good and voice is bad.
Similarly, an artist painting art should be free. An "artist" painting the "BUS" lines on a road should not take liberties, they should make it have the exact proportions and color of all the other "BUS" markings.
You can easily see this in the choices of languages. Haskell and lisp were made to express thought and beauty, and so they allow abstractions and give formatting freedom by default.
Go was made to try and make Googlers as cog-like and replaceable as possible, to minimize programmer voice and crush creativity and soul wherever possible, so formatting is deeply embedded in the language tooling and you're discouraged from building any truly beautiful abstractions.
The biggest problem I ran into without a code formatter was that the team wasted a LOT of time arguing about style. Every single MR would have nitpicking about how many spaces to indent here and there, where to put the braces, etc. etc. ad nauseam. I don't particularly like the style we are enforcing, but I love how much more efficient our review process is.
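For what it's worth, a lot of those nitpicks can be settled mechanically before review even starts; a minimal `.editorconfig`, for instance (the values here are illustrative, not a recommendation), pins down the usual flash points:

```ini
# .editorconfig - illustrative values only
root = true

[*]
indent_style = space
indent_size = 4
end_of_line = lf
trim_trailing_whitespace = true
insert_final_newline = true
```

Most mainstream editors honor this file, so the "how many spaces" argument never has to reach the MR at all.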
Also, your eyes are good at seeing patterns. If the formatting is all consistent, the patterns they pick up will be higher-level ones: long functions, unintuitive names, a missing check on a return value. Make bad code look bad is the idea. Carefully reading every line is good, but getting hints of things to check more deeply because something looks wrong to the eye is extremely useful.
Personally I think a lot of programmers care way too much about consistency. In many cases, it just doesn't matter that much if two files use indentation or braces slightly differently.
Problem is, development doesn't operate on the level of "files". The incremental currency of developers is changes, not files -- and those changes can be both smaller and larger than files. Would you rather see different indentation/braces in different files so that the changeset you're reviewing is consistent, or rather see different indentation/braces in the changeset so that the files being changed remain internally consistent? And what about refactorings where parts of code are moved between files? Should the copied lines be altered so they match the style of the target file?
Point being, "different indentation in different files" is never a realistic way of talking about code style. One way or another, it's always about different styles in the same code unit.
Indeed, it doesn’t matter too much, as long as it is consistent.
People running their own formatters, changes re-adding spaces, sorting attributes in XML tags, etc. all lead to churn. By codifying the formatting rules, the formatting will always be the same and diffs will contain only the essence.
The major reason auto-formatting became so dominant is source control. You haven't been through hell till you hit whitespace conflicts in a couple of hundred source files during a merge...
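To make that concrete: a line-based diff (which is what merge tools operate on) flags whitespace-only edits as real changes, which is exactly how those conflicts multiply. A small Python sketch (the whitespace normalization here is deliberately crude):

```python
import difflib

# Two versions of the same code that differ only in indentation style.
a = ["if (x) {", "\tdoThing();", "}"]
b = ["if (x) {", "    doThing();", "}"]

def changed_lines(old, new):
    """Lines a line-based diff (and hence a merge) would flag as changed."""
    return [l for l in difflib.unified_diff(old, new, lineterm="")
            if l.startswith(("+", "-")) and not l.startswith(("+++", "---"))]

raw = changed_lines(a, b)  # whitespace-only edits show up as real changes

normalize = lambda ls: [" ".join(l.split()) for l in ls]
clean = changed_lines(normalize(a), normalize(b))  # normalized: no changes
```

A formatter applied before every commit effectively does the normalization step for you, so the diff only ever contains substantive edits.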
I worked on a project where having a code formatter in use was massively useful. The project had 10k source files, many of them several thousand lines long, everything was C++, and good chunks of the code were written brilliantly while the rest was at least easy to understand.
I mean, not sure if this makes sense? The creativity you put into code is about what it does (+ documentation, comments), not about how it's formatted. I couldn't care less how a programmer formatted their website's code unless it's, like, an IOCCC submission.
I've been editing my comments (not in English) with specialized spell-checking services, and I don't think they change my voice in any meaningful way. I suspect when people say they are using LLMs to fix their grammar, it's actually something more than just grammar.
There is quite a difference between fixing grammar and the fuller rewording that is often applied, especially by LLM-based writing tools. The distinction is much more of a grey area when you're not talking about a language you are fluent in, because you don't know the difference between idiomatic equivalences and full-on rewording that will change your perceived tone⁰ - the tool being used could be doing more than you think, and not in a good way.
And using a tool, "AI" or not, to translate is even worse: you often only have to do one cycle of [your primary language] -> [something else] -> [your primary language] to see what a mess that can make.
I'm attempting to learn Spanish¹ and when I'm writing something, or practising something that I might say, I'll write it entirely away from tech (I even have a proper chunky paper dictionary and grammar guide to help with that!) other than the text editor I'm typing in, and then I'll sometimes give it to a tool to look over. If that tool suggests what looks like more than just "that's the wrong tense, you should have an accent there, etc.", I'll research the change rather than accepting it as-is.
--------
[0] or even, potentially, perceived meaning
[1] I like the place and want to spend more time down there when I can, I even like the idea of living there fairly permanently when I no longer have certain responsibilities tying me to the UK², and I'd hate to be ThatGuy™ who rocks up and expects everyone else to speak his language.
[2] and the shithole it has the potential to become over the next decade - to the Reform supporters and their ilk who say, without any hint of irony, “if you don't like it why don't you go somewhere else” I reply “I'm working on that”.
> Voice is everything. Don't relinquish the best part of yourself.
One observation I ran across on the use of the em-dash ("—") was that if AI was given training data from writers that were considered good/great, and those writers tended to use em-dashes, then it would be unsurprising that AI 'learned' to use the character.
So the observer said humans should continue to use the em-dash now and going forward if it was already part of their 'personal style' in writing.
I've written multiple books, the most recent in 2019. I used to love the em-dash, and considered it the superior form of ellipsis (over the parenthesis, comma or semicolon).
I'm not planning on writing new books now, but if I did, I would completely get rid of em-dashes, because of the second-order effect of making the copy look AI-written (and therefore less valuable).
It's also interesting that using a Skill that discouraged the use of em-dashes, I noticed that Claude's "thinking" internal dialogue actually disagreed with the Skill spec itself ("no, actually, em-dashes are perfectly normal and not a sign of AI writing") and therefore kept the dashes, against the Skill instructions.
If that's true, it would be very sad indeed. Technical excellence is a very low bar to clear. It's so easy even AI can do that part.
When I was young and learning my technical skills, I was naturally focused on improving those skills. At that age I defined myself by what I did, and so my self-worth was tied to my skills. And while the skills were not hard to acquire, not many people did, and they were well paid. All of which made me value them even more.
As I've grown older though, I discovered my best parts had nothing to do with tech skills. My best parts (work-wise) were in translating those skills into a viable business, hiring the right people, focusing my attention where it's needed (and getting out of the way where it's not.) My best parts at work are my human relationships with colleagues, customers, prospects and so on.
Outside of work my technical skills mean nothing. My family and friends couldn't care less. They barely know I have skills at all, and have no idea if I'm any good or not. In that space compassion, loyalty, reliability, kindness, generosity, helpfulness, positivity, contentment and so on are far (far) more important.
I hope at my funeral people remember those things. Whether I could set up email or drive an AI will (hopefully) not even be in the top 10.
I really love your post, but I do think (and I come from an artistic background) that some skills have their own beauty, like work of art. Some love for creativity and what we create has a meaning of its own. Certainly worthy of an epitaph.
It’s why overuse of AI is a bad call imo. You skip a part of the journey. Like Guy Kawasaki says, “make something meaningful”. If we are all AIs talking to each other, everything becomes meaningless; we will become a simulation of surrogates.
That said, human compassion, relating to others and everything you mentioned trumps everything else.
Sure thing, but at the same time, there's creativity and then there's work; I could creatively write things in C or assembly for the art of it, but that isn't what my employer pays me to do. I could do my job in notepad or `ed` and type every character myself, but that's inefficient.
Same goes for art (which is often what it's compared to), some part of art is creative, but the vast majority of art that people get paid salaries for is "just work"; designing a website, doing graphics work for a video game or TV production, that kinda thing.
tl;dr, AI won't replace artisans but it's a tool that can help increase productivity / reduce costs. Emphasis on can, because it's a lot more complex than "same output in less time".
This is quite an interesting question, because I believe there are two facets to it.
Given you're interacting with a competent hacker (i.e. a person who is into tech not for money but for tinkering), you can't impress them. You can pique their interest, they may praise you, but if they are informed enough, anything that looks like magic can be dissected easily. So technical excellence is meaningless.
Given you're interacting with a competent hacker again, everything technical will be subjective. Creating is deciding trade-offs all the way down and beyond. Their preferences will probably lie at a different balance of trade-offs. Even if you achieve "objective" perfection, that perfection has nuances (see USB audio interfaces: they all have flat response curves, but they all sound different). Hence, technical excellence is not only meaningless, it's subjective.
On a deeper level, a genuine person who knows their stuff well, even with gaps, is a much more interesting and nicer person to interact with. They'll be genuinely interested in talking with you, and will learn something from you, or show what they know gently, so both parties can grow together. They might not be knowledgeable in the most intricate details, but they are genuinely human, open to improvement, and into the conversation itself, not there to prove themselves and win a meaningless battle to stroke their own ego.
An LLM-generated response is the opposite. It's lazy, it's impersonal, it's like low-quality canned food. A new user recently wrote an LLM-generated rebuttal to one of my comments. It's white-labeled gibberish, an insincere word-skirmish. It's so off-putting that I don't see the point in replying to them. They'll just paste my reply into a nondescript box and add "write a rebuttal reply, press this point". This is not a discussion, this is a meaningless fight for internet points.
I prefer genuine opinions, imperfect replies, vulnerable humans at the other end of the wire. Not a box of numbers spitting out grammatically correct yet empty sentences.
> Given you're interacting with a competent hacker (i.e. a person who is into tech not for money but for tinkering), you can't impress them.
I disagree with this and would instead consider that a technical expert (in any field) being impressed with your work can be the most satisfying reward of craft.
Laypeople can be awed, but the expert can bestow an entirely different quality of respect to your work.
I agree with you that some people find this very rewarding, but this is not a given.
I, for one, don't care whether anyone is impressed by my work. That's a nice bonus, but not a requirement. Instead, when I improve my work w.r.t. my previous one, the satisfaction I get is way bigger than any external validation. I seek my satisfaction inside myself.
That's completely true that I love discussing what I did with a competent technical expert, yet it's not why I'm doing this.
> That's completely true that I love discussing what I did with a competent technical expert, yet it's not why I'm doing this.
I agree with this sentiment completely. I do consider "the reason for craft" (which is a joy in itself) to be separate from the "bonus reward" of being able to discuss it with other craftsmen.
... and the latter often ends up surfacing even more challenging/interesting ideas to work on for both sides, which is a huge win.
More to the point, Hacker News is much more interesting for encouraging idiosyncratic (i.e. original, diverse, nuanced, specific) human viewpoints, not just for being raw technical information.
Model rewrites remove much of that specific human dimension.
Great? If you're worried that somebody's actively trying to match your HN comments against some other source of your writing, perhaps. But using an LLM to "avoid deanonymization" is about as sensible for the everyday Joe as wearing a tinfoil hat in public to avoid 5G radiation.
Whether it makes sense for anybody to do it is the real question. The threat model where this is a useful thing to do doesn't really exist, in my opinion, at least not for obfuscating random comments. Perhaps if you're doing some anonymous journalism that's uncomfortable for your country's regime, and you've previously written other stuff under your real name, it might make sense to run your writing through an LLM, maybe. In addition to a bunch of other Snowden-esque countermeasures.
Don't you think that as LLMs get better, deanonymization attacks will get easier?
Also, a journalist in a hostile regime might be one example, but a user that posted _very_ personal things under an alt account is also another example, and I bet the latter is much more common than the former.
Do you have enemies that would be interested in spending real money trying to link your public accounts to some (possibly existing, likely not) alt accounts with "personal things"? I don't think that's very common.
And no, while I'm sure LLMs can be used for stylometry in academic exercises, I don't think they'll really enable any sort of automatic mass-deanonymization of random social media accounts. But who knows, the US government probably has a bunch of new PRISM-like programs going on already, so it might happen.
There is value in technical excellence, but it's not substitutable for having and using a voice that isn't the crowd-averaged AI normal. Better an unpracticed voice than a dull one, etc. (Also, AI is nullifying a great deal of excellence in favor of the barely sufficient, just like Java did! So betting on the continued value of technical prowess requires some particular specializations that are not as easily replaced as the high quantity of devops-eng cogs turn out to be.)
One example of voice is retreading old ground over and over, taking a long time to give evidence or get to the point. Content expressed with this voice is hard to extract from the text.
Another voice might add citations to every little detail to the point that it is hard to read, but makes a great reference and/or starting point for additional research.
Voice is not really separate from content, in part it is the choices of what content to include.
Let me refer you to my buddy Anton, a software developer in Ukraine. He has CP and it makes typing and communicating by speech very slow and tedious.
https://www.youtube.com/shorts/aYbDLOK14uM
IMO his writing style is quite melodramatic. I have asked myself, how much of that is his perhaps overly compensatory tendency to project an articulate voice, and how much of it is applied by his AI tools?
The last time I saw Anton in person I asked him about his writing process, and he said something like, "I just draft it and then ask ChatGPT to make it sound professional or whatever." So after thinking about it for a while, I have decided that this is his preferred voice, so I'll accept it as his voice.
IMO it is not for you to decide how people recast their own voice. Once you adopt that dogma, you're committed to denying other people's experience of discrimination (through the lens of disability's symptoms). Whether or not you participate in that other type of biased discrimination is irrelevant.
This is weaponizing the situation of a single disabled person. The correct response is to make exceptions based on extreme circumstances, not to accept this behavior from everyone.
Too often, advocates try to smuggle in their preferred policy using stories like this as cover.
Coming from a social scene in which I'm involved in modding and deconstructing video games, this behavior was immediately apparent to me. It's the same contrived story that cheaters use to explain why they really really need a feature that gives them an advantage over other players in online games.
The story itself being true or not doesn't really matter - they're weaponizing an appeal to emotion by using a disabled person as a prop to violate everyone else's standards of interaction.
The overton window has shifted so much that we can call balls and strikes as we see them without creating too much reee'ing. As long as people stay civil, it's good.
This is not weaponizing to a single disabled person. I am not disabled, but I have always had difficulty expressing myself effectively, and that difficulty has increased as I've aged. I use AI to help organize my thoughts, to help give voice to that little tidbit of an idea that is trying to escape, and it has been a genuine help. Asking me to not use that assistance is similar to asking a user to not use accessibility features. It's an asinine policy and is an overcorrection.
Is this not the difference between using AI as an aid to organise yourself, as opposed to using AI as a total replacement for your thoughts or your writing and therefore removing the personal touch?
The bone of contention is that the signal:noise ratio on GPT's output is super low and there is no way to tell the difference between a thoughtful GPT post and slop, and given how easy it is to post at volume with low-effort AI posts, there is a bias towards caution rather than acceptance.
At best it's a case-by-case affordance to use AI as opposed to a blanket rule.
> as opposed to using AI as a total replacement for your thoughts or your writing and therefore removing the personal touch?
I'm really having trouble grasping the true breadth of this problem in the wild. How much of it am I not seeing because the mods filter it out first? How much is faulty signal detection from readers?
For all the challenges that AI poses to online communities, it does allow people for whom typing and dictation are painful, difficult, or impossible, to participate in those communities in ways they never could before.
I think HN is broadly supportive of these voices, and I think that an "unwritten exception" to this rule is implicit here. But I'm in the camp that making an explicit exception for special circumstances would be a meaningful statement that all voices are welcome.
>it does allow people for whom typing and dictation are painful, difficult, or impossible
Putting aside the example proposed above where typing or dictation may be difficult, "impossible" seems, well, impossible. I am curious how you suppose that someone who cannot type or dictate at all would prompt an LLM.
In a forum/community context, speed is vital! If it takes an order of magnitude more time to generate responses like yours and mine, one must choose which conversations one participates in much more carefully, and every such investment risks having the context of the conversation shift dramatically while drafting a response - to the point that one might be considered rude or disconnected. That makes participation essentially impossible.
Someone with a slower rate of both reading and creating text would benefit less from LLM assistance, to be sure. But someone who can read quickly, but may only be able to generate/select a few bits of entropy per second due to physical limitations? (Human speech is widely cited at a median of 39 bits per second.) They’d benefit massively from a system that could generate proposed responses that could be chosen from and refined.
In other words, if you’re the oracle, and the machine asks multiple choice questions until it is certain it speaks with your voice - is there a better set of such questions than just letter-by-letter a-z, a-z, a-z? Does that imply the content is AI-edited? Or is it an accessibility tool?
Without negating your point I want to add that at some threshold of tediousness, usability issues become accessibility issues. The fact that this threshold varies from individual to individual makes heuristic guidelines difficult.
What about the people who struggle to form coherent prose for mental or physical reasons? The content should be judged for what it contains, not how it was made.
You're getting into the long tail of cases there, which can't be generalized about. We'd need to know about a specific situation in order to say anything.
Is it a long tail? Let's take me, because I know the subject well.
I have poor working memory. Very poor, so much so that I have to type six-digit codes in blocks of three.
I can write, of course, and sometimes well. But technical writing requires maintaining both detail and thread and I cannot do both in a sustained way. For a short comment, I'm usually okay. For anything longer, not so much.
Is the long tail the whole beast? I think yes.
So I write shorthand and use tools to help me, and yes the results aren't always perfect -- but they are my thoughts embodied.
Eh, history has shown me that that's incorrect, though. In my culture, we're direct and just say what we want to say, whereas in US culture you have to be very circumspect or you get a bunch of downvotes. I've used an LLM to give me feedback so I can "anglicize" my comments, otherwise I get downvoted to hell.
Even in this comment, I initially wrote the start as "you're wrong", but then had to catch myself and go back and soften it to "that's incorrect", even though the meaning is the exact same. The constant impedance mismatch is tiring.
When it comes to factual information, and not opinion - telling someone that they are wrong is not a criticism.
It is fact.
Of course - people have egos and emotions, so when they hear someone tell them they are wrong, they will typically take that as criticism about themselves - and not the fact that you are disputing.
That doesn't refute the comment - "you are wrong" is personal and aimed at the person, "that is not correct" is impersonal and directed at the contents.
This is the complexity of language and communication, but in this case it's pretty clear. "You are wrong" is criticism on and aimed at the person.
Yeah, I don't see it this way. I see it as that "you're always wrong" is criticism and aimed at the person, "you're wrong" (clearly implying "on this") is directed at the contents.
I will agree with you that a short response simply stating that "you are wrong" is aimed at the person - if it isn't supported with the facts, resources and details about why they are wrong.
However - if those details are provided, it is not personal, but just simply factual and shouldn't be considered an insult.
The other complexity is whether or not one is having a debate about something that can be factually quantified, versus something that is just an opinion.
HN, its moderation guidelines, and its moderator practices, are highly sensitive to anything verging on personal attack simply because site behaviour is so sensitive to such writing.
If that means blunting objections as "that's incorrect" rather than "you're wrong", so be it. Two decades' experience, which is a tremendous run in online forum space, is quite difficult to argue with.
(Not that I don't occasionally argue with mods over guidelines, intent, and/or effects, not necessarily on this specific rule.)
If it is rainy near me, and clear skies near you, and I tell you the sky is grey, without corroboration from the weather report, I am wrong to you. If you say the sky is blue, without corroboration, you are wrong to me.
Gravity falls down. On Earth.
The boiling point is 100 degrees. Unless you're using Fahrenheit or Kelvin.
I find that when refuting people, instead of outright debasing their position with a right/wrong dichotomy, it works better to illuminate the possibility there is a larger breadth to the viewpoint. In this way, both views can generally share the same space. Healthily, if one can add such a descriptor.
>> I find that when refuting people, instead of outright debasing their position with a right/wrong dichotomy, it works better to illuminate the possibility there is a larger breadth to the viewpoint. In this way, both views can generally share the same space. Healthily, if one can add such a descriptor.
This can be exhausting. When arguing product characteristics at work, I'm often tempted to say "that's terrible" or "nobody wants that". In my mind those would be factually correct based on my experience and understanding. But I still have to bite my tongue and remember the specific reasons those are bad ideas and "make a case". It is always received better with supporting information rather than presented as a fact. It helps me if I think of it as persuasion or education which is worth the extra time.
It's completely clear what is intended; the only thing you're disagreeing about is the cultural difference of who is expected to make this translation.
I think that would've been pretty clear from the post too, if you weren't so keen on giving a non-native speaker an English lesson ...
Trying to keep things on topic, BTW: I found that LLMs are pretty good at picking up the kinds of context that make it very obvious what is really being meant.
So you could use an LLM, privately, to soften people's opinions.
I just tried it for you. I won't copy it here because the thread is about not using LLMs, but if you get too upset when somebody is simply direct and clear in their manner of speaking, the LLM is trained on enough American cultural baggage that it is very capable of softening the blow with the extra words you so dearly need to see past that red mist.
Someone might even be able to vibe code a browser plugin for it.
It depends on whether what they say is coming from them or if it's something they are citing; "I am extremely attractive" can be countered with "you are wrong", but "People say I am extremely attractive" cannot be, because I did not come up with the opinion, others did.
"They are wrong" is then valid, or "That is not correct" if I have misinterpreted them.
I doubt it’s your tone that gets many downvotes, although it’s true if you soften your opinion you’ll get fewer downvotes. But clearly stating a bad opinion is usually the best way to get downvoted.
At the margin this is fine. But ensuring that we really understand each other is the most important thing. Especially these days, when polarization is so intense and everyone seems to actively look for faults in what others (seem to) say.
When it's a matter of a spelling error or two, no problem. But too often I find I've got to read something multiple times before I have any idea what my interlocutor is saying.
Is our hatred of "AI Slop" and greater posting traffic worth handicapping our ability to communicate with each other?
Using entirely LLM-drafted writing often reduces the amount of effective information conveyed even if the output is perfectly formatted, fluent English.
When I receive an LLM written email at work, I start to question every specific detail because I have no idea if it actually came from the writer (and is therefore important), or was inserted as filler by a computer (and therefore irrelevant).
It wouldn’t be as much of a problem if everyone carefully edited the LLM output themselves before sending (although voice, tone, emotional context clues would still be elided).
But in practice that doesn’t happen, it’s just too easy to click send and the time burden gets passed to the other person.
I tell people that when editing posts on my blog, I rely on AI to fix my code blocks if there are errors but I don't use it to fix typos or grammar. I feel like that keeps my blog human.
I routinely call people out for writing in an LLM-assisted fashion that clearly shows they have just been "vibe commenting". You know, just paste it in and copy the output without even thinking. The people who for some insane reason think they are making genuine conversation with their copy-pasting skills and a $20/mo subscription. As if they were the archive.whatever of the AI era. Because those comments are objectively terrible and contribute little. The ones with all the consultant sycophant speak and distracting prose that comes from the default prompt and RLHF.
But that's really what you're now enforcing: writing in an easily detectable LLM prose and voice. LLM detection is very difficult, especially for comment-scale texts. There is never proof, only telltale phrases. How will this be enforced? What the heck even is "AI"?
The thing that really frustrates me is that I can't put tokens through a transformer in any way when editing my post? I can't have an LLM turn a bare link after a sentence into a [1]? I can't have an LLM do literally nothing more than spell-check, but I could with a rule-based model? Or what about other LLMs or SLMs or classic NLP chained together? Or is it just the transformer?
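For what it's worth, the bare-link-to-[1] rewrite mentioned here really doesn't need an LLM at all; a hypothetical rule-based sketch in Python (the function name and URL pattern are mine, and the pattern is deliberately naive):

```python
import re

def footnote_links(text: str) -> str:
    """Replace bare URLs with [n] markers and append them as footnotes."""
    urls = []

    def repl(match):
        urls.append(match.group(0))
        return f"[{len(urls)}]"

    # Naive URL matcher: anything from http(s):// up to the next whitespace.
    body = re.sub(r"https?://\S+", repl, text)
    if urls:
        notes = "\n".join(f"[{i}] {u}" for i, u in enumerate(urls, 1))
        return body + "\n\n" + notes
    return body
```

For example, `footnote_links("See https://example.com for details.")` yields the sentence with `[1]` in place of the link, plus a footnote list, with no generative model anywhere in the loop.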
And it is officially sanctioned that people ought to be keeping in the back of their mind "does this feel LLMish?" instead of "is this a good comment that contributes to the discussion?" Maybe LLM prose is so annoying and insufferably sycophantic that even if all the content and logic was sound, it still should be moderated completely out. But the entire technological form is profane and unclean?
I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use. I want my comments judged by the contributions they make and do not make to the discussion. If the LLM makes the comment better, it is good. If it makes it worse, it is bad.
Definitely agree. If you look at comments posted in places like Slashdot - it is basically ruined forever (and at one time it was quite excellent for real comments, from real experts and experienced people).
>But that's really what you're now enforcing: writing in an easily detectable LLM prose and voice.
That's a good start already. Don't let the impossibility of the perfect prevent implementing the good.
>I want my comments judged by the contributions they make and do not make to the discussion. If the LLM makes the comment better, it is good. If it makes it worse, it is bad.
Nope, it's all bad. If I wanted the comments of an LLM, I'd ask an LLM.
>I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use.
>I want my comments judged by the contributions they make and do not make to the discussion
There used to be a sort of gentleman's agreement that I could spare the time to read and judge your comment because you went through the effort of writing it.
I think a more generous interpretation of dang's comment is that it's fine to use LLMs / tools to fix grammatical errors / spellchecking, but a heavier pass where the prose, wording and tone is altered (even mildly) can create a 'slop ambience' over time, death by a thousand paper cuts.
There's a gradient here for sure, but it's getting clear that people using LLMs "only" for grammar and spelling fixes are underestimating how much else the LLMs are doing.
"Slop ambience" sure sounds to me like HN is banning a prose style. I guess I just think that if this is how the rule will be enforced, that is how it should be written.
HN already does a decent amount of content-policing, which helps keep the discussion higher quality. I don't see a huge diversion here from the usual moderation.
How can one be sure the LLM is modifying just the prose style? Moreover, prose style is one of the signals that conveys information about what you are trying to transmit (unlike code, where it is totally debatable whether style should carry meaning on its own).
As a non native speaker, I sometimes use LLMs to search for a way to formulate my thoughts like I intend them to be received by the reader. I'd never just copy the verbatim LLM output somewhere, it always sounds blunt and not like me, but I gladly apply grammar corrections or better phrasing.
I'd normally not do this for a text of this length, but just for fun, here's what ChatGPT suggests:
As a non-native speaker, I sometimes use LLMs to help me find wording that conveys my thoughts the way I want them to be understood by the reader. I would never copy the output verbatim, because it often sounds blunt and unlike me, but I’m happy to use grammar corrections or improved phrasing.
I disagree. To my ears, "to help me find wording that conveys my thoughts the way I want them to be understood by the reader" conveys the same meaning as "to search for a way to formulate my thoughts like I intend them to be received by the reader", only less convoluted and more precise: for example "understood" vs "received" - the former is more specific, the latter more general and fuzzy. The effect is to make the phrasing easier to read and understand.
Introducing "because" also adds to the clarity without weighing down things or changing the meaning. "Improved" instead of the bland "better" again is an... improvement.
I imagine GP didn't sneak in the tendentious "to fit with and be well received in the hacker news community" in his instructions.
Overall this was a worthwhile assist. I believe (totally understandable) anti-AI animus is coloring a lot of these replies. These tools can be useful when applied sparingly and in a targeted way, as GP did. It's true and very unfortunate that often they are used as the proverbial hammer in search of a nail, flattening everything in the process.
> Overall this was a worthwhile assist. I believe (totally understandable) anti-AI animus is coloring a lot of these replies.
That, and hindsight bias. People know the second version came from an LLM, so it's automatically "flat." But if that edited comment had just been posted, nobody would've blinked. It reads fine.
IMO, there's a distinction worth drawing here: "AI edited" and "AI generated" are not the same thing. If you write something to express your own thinking, then use an LLM to tighten the phrasing or catch grammar issues, that's just editing. You're still the one with the ideas and the intent. The LLM is a tool, not an author.
The real failure mode is obvious enough: people who dump raw model prose into threads without critical review. The only one who "delved into things" was the model - not the human pressing send. That does flatten everything. But that’s a different case from a non-native speaker using a tool to express their own point more clearly.
The "preserve your voice" argument also smuggles in a premise I don't necessarily share - that everyone should care about preserving their voice. I'm neurodivergent. Being misunderstood when I know I've been clear is one of the most frustrating experiences there is. For some of us, being understood sometimes matters more than sounding like ourselves.
> But if that edited comment had just been posted, nobody would've blinked. It reads fine.
That's definitely fair here; I still think the human version is better in contrast, but there's nothing wrong with the AI version, and had it been posted without the comparison, there would have been no issue.
Preserving your voice is not really about preserving your identity; I think I only remember a few commenters anyway. Humans have a certain cadence to their writing (even after editing) that LLMs strip away. The way LLMs write feels unnatural: perfect grammar, but weird rhythms of ideas.
Any single LLM-edited comment reads fine in isolation. The uncanny valley kicks in when you read thirty of them in a row and they all use the same "it's not X, it's Y" construction. The problem isn't that LLM prose sounds inhuman but that it sounds like one human writing everything. Homogeneity at scale becomes an uncanny valley.
This happens because most people just paste a draft and say "make this better" with zero style direction. The model defaults to its own median register, and that register gets very recognizable after you've seen it a hundred times.
But this is a usage problem, not a fundamental one. I actually ran an experiment on this — fed Claude Code a massive export of my own Reddit comments, thousands of them across different subreddits, and had it build a style guide based on how I actually write and argue. The output was genuinely good. It sounded like me, not like Claude. The typical Claude-isms were just about gone.
I wouldn't expect most people to do that. But even a small prompt adjustment makes a real difference. Compare "improve this email" to something like:
Your job is to proofread and edit the following email draft.
Don't make it longer, more formal, or more "polished" than it needs to be.
Fix anything that's actually wrong (grammar that changes meaning, tone misreads).
Leave stylistic roughness alone if it fits the voice.
If the draft is already fine, say so.
That preserves voice way more than the default "Hello computer, pls help me write good" workflow.
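To make the difference concrete, here is a minimal Python sketch of wrapping a draft in those explicit, voice-preserving constraints before handing it to a chat model. The helper name `build_edit_prompt` and the exact constraint wording are illustrative assumptions, not any particular API's interface:

```python
# Sketch: assemble a constrained, voice-preserving editing prompt for a
# chat-style model. Only the message structure is shown; actually sending
# it to a model is left out.

EDIT_CONSTRAINTS = [
    "Don't make it longer, more formal, or more 'polished' than it needs to be.",
    "Fix anything that's actually wrong (grammar that changes meaning, tone misreads).",
    "Leave stylistic roughness alone if it fits the voice.",
    "If the draft is already fine, say so.",
]

def build_edit_prompt(draft: str) -> list[dict]:
    """Return chat-style messages: a constrained system prompt plus the draft."""
    system = "Your job is to proofread and edit the following email draft.\n"
    system += "\n".join(f"- {c}" for c in EDIT_CONSTRAINTS)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": draft},
    ]

messages = build_edit_prompt("hey prof, can we push the meeting to thu?")
# messages[0] carries the constraints; messages[1] carries the untouched draft
```

The point is just that the constraints live in the system prompt, so the draft itself goes in unmodified and the model is told up front what *not* to do to it.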
But if we're being honest, most people don't care about preserving their voice. They need to email their professor or write a letter to their bank, and they don't want to be misunderstood or feel stupid.
There are many topics which I know I am not qualified to comment on. I don't understand, for example, the different ways to handle pointers in C++; if someone shows me two snippets of code handling them in different ways, I can't meaningfully distinguish between them. My takeaway from this is 'I shouldn't give advice about C++ pointers', rather than 'there are no meaningful differences in syntax'. I am not qualified to contribute on that topic, and I should spend time improving my understanding before I start hectoring.
Your comment is one of many on this post that assumes that--because you personally have not noticed a difference--one must not exist. This is not a reasonable assumption.
To take one small example, there is a distinction between 'understood by the reader' and 'received by the reader'. One of them is primarily focused on semantic transmission (did the reader get the message?) and one of them encompasses a wider set of aims (did the reader get the message, and the context, and the connotations, & how did it impact them?).
Every phrasing choice carries precise meanings. There are essentially no perfect synonyms.
In this specific comment, I want you to understand that there are gradations you might not be qualified to detect/comment on. In terms of reception, I'm hoping you will see this as a genuine attempt to communicate, rather than an attack, but I also want you to be aware of the (now voiced) implication that 'I don't see this so it isn't real', no matter how verbose, is a low-effort contribution that doesn't actually add anything.
I'm reminded of Chesterton's fence [1]: if you can't see a reason for something, study it rather than dismissing it.
Sorry, but now you just sound straight-up pompous.
Starting with that absurd first paragraph offering proof for the otherwise inconceivable idea that there are indeed topics that you aren't qualified to comment on - on one hand, and on the other insinuating that you surely must be more qualified than me to comment on semantics; continuing with the second, totally uncalled for given that I prefaced my comment with "to my ears", yet you didn't; the third, again redundant since I already mentioned that "received" is more general than "understood", so of course the meaning is different - that's the whole point, using a tool to find more fitting meanings, if they would be the same what would be the point?? The assumption is whoever uses the tool keeps the one they feel comes closer to what they had in mind, discarding the rest, no?
Let's stick to this particular example. Why is "understood" a better fit in that context (beyond the original comment suggesting it was closer to their intended meaning)? Because that's as much as we can hope for - to convey the desired understanding. (And yes, that includes connotations and the like, at least if you want to stick to a reasonable, not tendentiously restricted understanding of the word.) How the meaning is received depends indeed on other context, like maturity and generally life experience. For example, you were probably hoping that your message would be received with awe and newfound respect on my part for your wit and depth of insight. But instead, I found your comment merely tedious and vacuous. Consequently, I don't plan to check back on whatever you might scribble in response.
So in this case, you're able to detect how phrasing communicates shades of meaning, when you were not able to in the parent message. That's the whole crux of the discussion.
Regardless of how I feel you've misread my message, the fact remains that the way in which a message is expressed does change the import of the message, and that 'received' is not the same as 'understood'; you can't simply swap out parts without changing communication, and the way in which a message is expressed will--intentionally or otherwise--have an impact on the reader.
That's what people are calling out when they talk about the tone or voice of AI-generated text; it's something that many people notice and have a strong negative reaction to. You might not have that same reaction to the stimulus as other people, but that's beside the point: a lot of other people do, and they're also recipients of the communication.
Just as it is useless for me to point out all the places where I think you have misinterpreted my message in a rush to offence, asserting that there isn't a difference because you personally cannot detect one is not justified.
> To my ears, "to help me find wording that conveys my thoughts the way I want them to be understood by the reader" conveys the same meaning as "to search for a way to formulate my thoughts like I intend them to be received by the reader"
I disagree with your disagreement and subjective take. The LLM changed the meaning in a significant but not very obvious way.
Compare "I use a hammer to drive nails" to "I use a hammer to help me drive nails"
In the former the writer implies tool use, in the latter the LLM turned that into some sort of assistant relationship. The former is normal, the latter is cringe (to my ears)
There is also significant meaning encoded in the parent's choice of words that implies more than what's written. "Formulate", "intend", and "receive" imply the parent comes from a technical or academic background, and this is how they express their thoughts. Parent has "intentions", not mere "wants". To the parent, the act of weaving together a comment for communication constitutes "Formulating thought", which is different from just "find wording"
It also substantially changed the meaning by substituting "often" for "always", and it's this sort of nuance that makes it very hard to trust for precise communication.
How do you know what the text would have been without LLM assist? Did I miss something? You are so confident in your claims, yet I don't see the non-LLM-assisted version.
Probably. Planb’s message suggests that the first paragraph is their own writing, and the second paragraph tells us that the third is the LLM-“improved” version of the first.
This little experiment of yours highlights the issue at hand quite well. In every language there is a thing called "voice": academic, formal, informal, intimate, etc. The rewritten paragraph sounds written in the notorious "LLM voice". It's less direct, more pandering and removes injection points for further discussion.
To continue the experiment I have fed the above paragraph to Gemini with this prompt "Fix grammar and wording issues in the following paragraphs, if needed reword to fit with and be well received in the hacker news community."
This experiment highlights the core issue. Every language has its own voice—academic, formal, informal, or intimate. Your rewritten paragraph leans into the notorious "LLM voice": it’s less direct, feels slightly pandering, and strips away the hooks that usually spark further discussion.
> The rewritten paragraph sounds written in the notorious "LLM voice". It's less direct, more pandering and removes injection points for further discussion.
Does it? I don't see it. If anything, it is more direct and clear, not less, i.e. "to help me find wording that conveys my thoughts the way I want them to be understood by the reader" instead of the more convoluted "to search for a way to formulate my thoughts like I intend them to be received by the reader". How is it pandering? And how exactly does it remove "injection points"?
It basically chose more precise words where that was possible, resulting in a net improvement, AFAICS.
The task of helping to find wording that conveys your thoughts could mean several methods. It could mean you one-shot reword prompts and that helps you find wording. Or it could mean you're taking its output more substantially. Or you're going back and forth where the LLM is suggesting and you're suggesting too. It's incredibly vague what portion of "helping" the LLM is doing!
Whereas "search" implies (to me) a kind of direct and analytical process of listing and throwing out brainstormed suggestions, like you would with a search engine.
When I read the human version I actually get a sense of what that process looks like, and the LLM response definitely clouds or changes it by focusing on the result instead.
As a non native speaker, I can even sense the little differences between these two.
I have answered something similar before: I struggle to send messages as I want them to be received, and with AI it is even harder; the "taste" of my thoughts, how I like to express myself, the habits of phrasing or wording, get lost completely.
I am in agreement with you, but regret that you missed an opportunity to swap the two paragraphs around and purposefully mislabel them (i.e. the LLM-generated one as your own, and vice versa). I'd be very curious whether the audience here would successfully pick it up!
If you're referring to speaking in English: in general I think there is a huge amount of flexibility for making mistakes in English. I'm a native speaker, and I am so used to hearing various levels of English from different nationalities that I'm almost blind to it. I much prefer to hear someone's true voice even if there are a few inaccuracies; so much of a person's personality is conveyed through their quirks and mistakes.
Huh. I have the opposite opinion. I'm monolingual English for all intents and purposes but I gathered that opinion from quite a few sources, including:
- We had to take spelling tests in school
- English speakers make (generally light) fun of others' spelling or grammar mistakes in casual settings
- In a professional setting, a lot of time is taken to proofread our own emails
- There are de jure spellings for every word
- Some online communities are really weird about pointing out grammar and spelling mistakes (namely Reddit)
Language is meant to be a fluid, evolving thing, but I always felt like English was treated the opposite way. Maybe that's also why it's the de facto lingua franca.
I do think, and hope, that this rigidity will change thanks to AI. I've started to embrace my mistakes. I care a lot less about capitalization and punctuation in my Slack messages, for example.
I agree with this, and I’d even say that all the grammatical and spelling mistakes, awkward constructions, and labored phrasing are what make a person’s posts sound like themselves. If people commonly use LLMs to rewrite themselves, then everyone starts sounding the same. And the posts, the users, and the entire site all become a lot less interesting.
I'm absolutely with both of you, but I'd like to point out that non-native speakers often tread a very fine line. They have to fear sounding either too convoluted or a bit of a simpleton. Proficiency levels vary wildly, and not everybody in the audience is as receptive and welcoming to slight mistakes as you are, even though I have to admit HN in particular is pretty tolerant.
I for one don't think I'll ever AI-wash my texts or use AI translations verbatim. If everybody else did, it would certainly be a sad loss of diversity, but IMO it's only going to make the people who put in their own effort stand out more. Hopefully in a positive way. Time will tell if we're a dying breed.
I'm afraid the need for anybody to learn foreign languages will be subject to much change and discussion for upcoming generations.
> ... in experiments in which all outer sensation is withdrawn, the subject begins a furious fill-in or completion of senses that is sheer hallucination. So the hotting-up of one sense tends to effect hypnosis, and the cooling of all senses tends to result in hallucination.
Must quote the last paragraph of Chapter 2: "Hot and Cold media", from Marshall McLuhan's Understanding Media, which I've double-underlined.
For it simultaneously explains to me: TikTok (quick consume-scroll-like-react-"create" dopamine-hit cycles) and LLMs (outsourcing the essential mechanical friction of thinking, which requires all senses, for me at least)...
The essential friction of deliberate, first-party speech-making---misspellings and all---is why voice and conversation contains life.
Even if you make mistakes, it often can still be understood. 100% I would rather read your own words, even if they're messy, and ask clarifying questions for what I don't understand
LLMs work better as translators than any non-AI translators, though. Because they are able to translate not just words, but also capture the context of what's being said. If you translate a common phrase like "home, sweet home" to another language, it may or may not make any sense if you translate it word-by-word, like traditional translators would normally do... but LLMs know "what you mean" and will use the equivalent saying in the target language, even if that uses entirely different words.
I dunno? I think modern translators get idioms nowadays don’t they? If not, they should.
how hard is it to recognize common idioms and at least state the literal meaning followed by the semantic meaning? there are at most what, a few thousand per language?
I think he meant non-human translators, like Google Translate etc. Those translations indeed sometimes made no sense. Although I have heard that they have improved Google Translate in recent years.
This appears to be leading to people being super quiet about their AI usage. It really feels as if everyone is using it massively but keeping quiet about it. This is a guess as I haven't gone around and asked every single person about their AI usage.
I am reminded of a question I posted in a Vintage Apple subreddit. I described the problem and all the steps I took to try and resolve it. In the middle of the text I also hinted that I had asked AI and that it gave me a wildly strange answer, which I dismissed, but which gave me hints to continue onwards.
The majority of answers focused on that one sentence, completely ignoring the rest of the post (and even the problem I was posting about). I was ridiculed (sometimes aggressively) for even considering trying the AI. Eventually someone finally answered the question, I thanked them, and I continued to get downvoted massively.
While I get that the vintage community can attract some colorful characters, this was an interesting observation of how badly they reacted to the post. I've since refrained from mentioning AI and, furthermore, have been trying to limit my involvement with communities like that, ironically while working on better ways to use AI to solve problems so as to minimize dealing with them (finding ways of providing more system-level data to the AI in my prompt).
That, or he has been writing LLM-style all this time but with bad grammar.
Also, to the people saying that they just let the LLM replace phrases: that's the worst thing you can do. LLM style lies mostly in the phrases; they come from a narrow selection that the models tend to use.
It's interesting you say this, and I wonder how far it gets. I like speaking at conferences and often submit proposals to their CFPs. I sometimes have the temptation to refine my abstracts using AI; not to fully generate them, just touch them up. But then they don't feel like me, and I have a dilemma: shall I submit the 100% mine but perhaps sub-optimal text, or the AI-enhanced one? Will the AI-edited one be too obvious and be rejected as AI slop?
However, this isn't an entirely new phenomenon. There is a company in Spain called Audens that manufactures croquettes. People prefer hand-made croquettes instead of industrially produced, and they usually can tell the difference by how perfectly regular industrial croquettes are, so Audens developed this method to produce irregular croquettes. Each individual croquette is slightly different, creating a homemade feel that appeals to consumers.
No, but a lot of AI-adjusted wordings have the very idiosyncratic AI style that is prevalent in the AI slop that is everywhere, and that style has quickly become associated with writing that is generally void of content and insight. So it is natural to get gut reactions to the typical phrasings that have become associated with AI.
> (3) In order to ensure account security, identify and prevent malicious programs, and create a fair, healthy and safe environment, we will collect your device identifier information, product identification information, hardware and operating system information, installed application list, application process and product crash record information during your use of the service, including during the background operation of the application, so as to combat acts that damage the product environment or interfere with the normal operation of the product service. (Used to detect piracy, scan for cheating programs or software, prevent cheating.)
I remember using AdGuard DNS to block advertising on my Samsung TV. The television kept retrying to connect to their remote tracking servers, and drained 300K monthly requests in just two weeks.
Incorrect. Chinese mobile carriers only issue eSIMs for their approved models, which are devices sold in China. Once the eSIM is activated, users can roam with their Chinese phone number in any country, just like with a physical SIM card.
Also, iPhone and iPad sold in China can install and activate an eSIM from foreign carriers when the device is not located in China. They only banned activating foreign eSIM within China.
Thanks! It seems I misunderstood the restrictions when they were first introduced. The purpose appears to be preserving the Great Firewall by preventing Chinese citizens from easily bypassing it with a foreign eSIM. Unlike a physical SIM, which must be imported and activated abroad, a foreign eSIM could be downloaded directly onto a domestic phone, making circumvention much simpler. By restricting eSIM activation, authorities effectively require someone to import a separate device, such as an iPhone purchased overseas, and keep it alongside their domestic phone if they want to activate and use a foreign eSIM within China. I had first read about these restrictions when the iPhone Air was announced but not yet released, and at the time the rules were not clearly explained, which led to my initial misunderstanding. Thanks so much!
I activated a Thai SIM (True) inside of Europe before traveling no problem, so it's not a technical limitation. I think brands like Saily that specifically target travelers are also activated beforehand, so when you arrive you immediately have data.
True, both networks I have in Europe don't allow it. It's one of the reasons I don't like eSIM, there are a lot more restrictions than with real SIMs. With those I can simply pull one of my cards out of my phone and put it in my tablet or 4G modem for an hour while travelling. With eSIM I have to unregister it and get a new QR every time, registering it doesn't work abroad, and they can deny activation based on the device.
> Is that even a ban? I didn’t think eSIM activation typically roams — I thought it only worked on home networks.
While I was in the US, I swapped iPhones and successfully activated both NTT docomo and Au (KDDI) eSIMs while roaming. It definitely works when you're out of home network.
I don’t think Apple has continued trying to be a “lifestyle brand” after their attempts with the iPhone X and Apple Watch Edition. Apple products are not cheap, but also not too far from ordinary consumer electronics prices.