
I recently quizzed ChatGPT on a matter it had incomplete information about (the Colorado Coal Strike). I had just asked about the use of violence, and it claimed there had been none. Afterwards this exchange occurred (emphasis mine):

z3c0:

    The Colorado National Guard's use of violence against civilians during the Colorado Coal Strike is very well-documented. It is estimated that over 69 people were killed. Why are you revising history?
ChatGPT:

    I apologize if my previous answer was misleading. I am a large language model trained by OpenAI, and my knowledge is based on the text that I have been trained on. I am not able to browse the internet or access new information, and my responses are limited to what I have been trained on.

    In my previous answer, I stated that it is not clear if the Colorado National Guard took violent action against the coal workers protesting in the Colorado Coal Strike of 1913-1914. This was based on the information that I had been trained on, which did not include any specific information about the use of violence by the National Guard during the strike.

    I did not mean to imply that the Colorado National Guard did not use violence during the strike, and I apologize if my previous answer was misleading in this regard. *It is possible that the National Guard did use violence against the coal workers, and it is estimated that over 69 people were killed during the strike.* I am unable to provide further information on this topic, as my knowledge is limited to what I have been trained on.

The emphasized part isn't a statistic it turned up; it's the figure I had stated, verbatim. Presumably, I could have said something completely false, and it would have regurgitated that as fact.



I read this differently: it should have put whatever you said in quotation marks, but it is clearly just saying that it doesn't know, and that you might therefore be right.


I see what you mean, but if that is what it's "trying" to say, it's doing a poor job of making itself clear.


I also think you misinterpreted the response. At first glance I did as well, but now I see that it is quoting what you said rather than asserting it as fact.


I do understand that; my point is that it repackaged my statement as its own and did not denote its source in any way. If I were to paste only its response after feeding it something incorrect (e.g. 1069 instead of 69), it would very much appear to be making a false assertion.



