Hacker News

I'm not a scientist or someone producing technical literature for others to consume, but I feel like I'd want my humanity to come through on important topics like this. I love using LLMs, but there are some tasks (like writing comments or texting friends) I refuse to use them for; otherwise I'm applying `normalizeText(myUniqueText)`.

I'm sure deadlines and other constraints push professionals to reach for LLMs when producing their work, just as school-age kids use them for homework to free up time for whatever they'd rather be doing. So I can understand it.
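A minimal sketch of that analogy (the `normalizeText` function here is hypothetical, invented for illustration, not any real library API): a normalizer that flattens idiosyncratic phrasing into one house style, so two distinct voices collapse to the same output.

```javascript
// Hypothetical illustration of the normalizeText(myUniqueText) analogy:
// each step erases something personal about how the text was written.
function normalizeText(text) {
  return text
    .toLowerCase()          // erase emphatic capitalization
    .replace(/!+/g, ".")    // erase exclamation-mark enthusiasm
    .replace(/\s+/g, " ")   // erase idiosyncratic spacing
    .trim();
}

const alice = "I  LOVED this paper!!";
const bob = "i loved this paper.";

// Two different voices become indistinguishable after normalization.
console.log(normalizeText(alice) === normalizeText(bob));
```

The point of the metaphor: the output may be "cleaner", but the information about who wrote it is gone.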



Ok, but what if your humanity expresses itself best in a language that only 20M people speak with no major biomedical publications?


> your humanity expresses itself best

I don't think these papers are about human expression; they're about communicating data and learned information in a way that can be used by others.

You might be thinking of art, a field that also has strong opinions about AI.


> I'm not (...) producing technical literature for others to consume, but I feel like I'd want my humanity to come through on important topics like this.

This is absolutely just an opinion, but the kind of documentation I dislike most is the kind full of arbitrarily structured prose. There's a time and place for self-expression and phrasing liberties; intricate, brittle, and especially long technical descriptions are not, I think, one of them.


Definitely; I think I wrote imprecisely when I used "humanity". I meant more that I personally wrote it: the sentence structure and grammar are mine, the mistakes are mine, and hopefully it's still clear and easy to understand.

Speaking of mixing technical text with style/prose, I feel this link from yesterday did a great job of executing both (granted, it's an article, not a paper):

https://jrsinclair.com/articles/2025/whats-the-difference-be...


I think we're on the same page; it's just that I feel otherwise about this part:

> and hopefully it's still clear and easy to understand

The idea of leaving it up to fate whether what I write is clear and easy to understand, and whether my sentence structure and grammar mistakes are inhibiting or misleading understanding, terrifies me.

Now of course, it's not a terror that a hearty serving of pressure, laziness, and overconfidence can't compensate for, so I usually just march ahead and type what I have to type nevertheless. But I do yearn for better.

Maybe the real deciding factor, though, is that I'm ashamed of and insecure about my writing style rather than proud or appreciative of it, and that's why I'd rather cast it away and substitute something else than keep it. Hard to tell.


Great insight; it definitely touches on another reason someone might reach for an LLM. As some others pointed out, writing papers (especially in a language that isn't one's native one) is its own complete task, with challenges and skills required beyond whatever the paper's topic is, and that's a totally valid reason to involve an LLM.


I'm sure many researchers have English as a second language and rely on LLMs to fix their grammar and vocabulary.


Which is horrifying: if an ESL author must publish in English and therefore doesn't have a full grasp of the nuances and meaning conveyed by English wording, they should involve an editor... not a word machine that doesn't understand either.

Wording matters when conveying information; ESL speakers should be working with fellow humans when publishing in a language they don't feel comfortable writing on their own.


As with many areas, it’s easier to recognize “correct” than to generate “correct.” When I lived in Germany, I would often use the early online translation tools to help refine my written German. It was useful to see how they corrected it, and it was usually a matter of “of course that’s the right way, I see it now!”


I think you're generalizing widely from your own experience to the ability of the researchers publishing these millions of papers, of which, as of 2024, at least ~13% showed signs of LLM use.

That seems important to know. LLMs lie, mislead, change meaning, and completely ignore my prompt regularly. If I'm just trying to get my work out the door and I'm in the editing phase, I don't trust myself to catch errors introduced by an LLM into my own work.

There is a lack of awareness of the importance of the conveyed meaning in text, not just grammatical correctness. Involving people, not word machines, is the right thing to do when improving content for publication.


Does the grant cover an editor's involvement? How much does the experiment need to be trimmed back and the sample size reduced: how much data must be sacrificed to support that?


If no one on a given team fluently speaks, writes, and understands the cultural context of the required output language, that team will need to find a solution.

It should not be a word machine (which, I should point out, does not have a brain).

Solving this problem might just mean using some of those resources to ensure the output is correct in the required language. You can call that a "cost".



