
It’s important to understand that ChatGPT is NOT a search engine; it is a language model trained to generate text. It is not connected to the Internet, and its knowledge is limited to the data it was trained on. If you, like me, are using the free version, that data may be a few years old.
Therefore, expecting accurate and up-to-date information from ChatGPT is unrealistic. Yet I still see people asking it random questions and then sharing the answers, or even using them as the basis for blog articles. If this is solely for amusement, that’s fine with me. But relying on ChatGPT for factual information can be misleading.
I beg you. Please do NOT use ChatGPT for your research.
But now I wonder… if more people post blog articles based on the false, outdated information ChatGPT fabricates, does it become a new truth?
ChatGPT is abysmal, spews wrong answers confidently, and is rapidly diluting the quality of information on the internet even further (as newer large language models start to train on – you guessed it – internet content generated by ChatGPT).
(See my blog post on this topic for further thoughts at https://friedmanarchives.blogspot.com/2023/02/geeking-with-gary-can-chatgpt-write-code.html )
Great article! I’ve been hearing stories similar to your experience of collaborating with AI. People spend quite a lot of time debugging the code the AI wrote.