
It's an arms race. As students cheat with ChatGPT, teachers should adopt tactics that throw it off, perhaps by using ChatGPT themselves.

I think what you are doing is great! Good luck going forward!



Can't they feed the code to ChatGPT and ask it to spot the mistakes, though?


I tried... I pointed out a problem and asked ChatGPT to fix it, unsuccessfully. I asked it for a proof of correctness, then pointed out a problem in its proof and asked ChatGPT to fix it, again unsuccessfully. (It's all in the notes I linked to.) Perhaps I'm just crummy at prompt engineering; or perhaps this is one of those questions where the only way to engineer a successful prompt is to know the answer yourself beforehand.


I've also hit this issue multiple times: ChatGPT provides a flawed answer, can identify the flaw when asked, but then "corrects" it in a way that leaves the original answer unchanged. I've seen this with code it wrote, with comments on my code, and with summaries of texts I provided.


Reminds me of children trying to speak a word properly, but repeatedly making a mistake in the same way.


I can’t tell if people just don’t understand how ChatGPT works or if there is another reason they are eager to dehumanize themselves and the rest of us along with them.


I am aware that no learning happens live during a conversation with ChatGPT, and that the mechanisms producing this similar outcome are not even remotely alike.

I also don't think humans are less human just because machines started making mistakes similar to human ones.

But I do see this similarity as a reminder that machines are becoming more human-like at an accelerating pace.



