
I have tried a lot of these tools, including Turnitin, and I think they are all wrong. Not because they are badly implemented, but because the problem is inherently impossible in many cases.

There are people whose writing style happens to resemble AI output; that doesn't mean they used AI. And sometimes AI produces text that looks exactly like something a human would write.

There is also the mixed case: if I write two pages and use two AI-generated sentences (because I was tired and couldn't find the right wording), I may be flagged for using AI. Even worse, if I ask AI for advice and then rewrite its answer myself, what should the verdict be? I can argue that either label (AI-written or not AI-written) would be wrong.



> There is also the mixed case: if I write two pages and use two AI-generated sentences (because I was tired and couldn't find the right wording), I may be flagged for using AI.

None of these tools are binary. They give a percentage score, a confidence score, or both.

If you include one AI sentence in a 100-sentence essay, your essay will be flagged as 1% AI and nobody will bat an eye.
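To make that concrete, here is a toy sketch of the kind of linear, per-sentence aggregation this assumes. The per-sentence scores and the 0.5 threshold are made up purely for illustration; real detectors like Turnitin don't publish how they actually compute their percentages.

    # Hypothetical illustration only: score each sentence independently, then
    # report the fraction of sentences above a threshold as the "percent AI".
    def naive_ai_percentage(sentence_scores, threshold=0.5):
        flagged = [s for s in sentence_scores if s >= threshold]
        return 100.0 * len(flagged) / len(sentence_scores)

    # 100 sentences, one of which looks AI-like under this made-up scoring.
    scores = [0.1] * 99 + [0.9]
    print(naive_ai_percentage(scores))  # 1.0 -> "1% AI" under this naive model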


They are not binary, but in my experience the score isn't linear either. It's not as if they assign a score to each sentence and then aggregate.


It's not, but the fact that one sentence deserves a high score doesn't automatically mean the entire thing will be flagged as a false positive. Unless it's, like, two sentences in total.



