It's still easy to get ChatGPT to hallucinate
ChatGPT sometimes fails at simple logical reasoning
Consider the following dialogue:
Me: There are more hydrogen atoms in a single molecule of water than there are stars in the entire Solar System.
ChatGPT: Indeed, your statement is accurate. A molecule of water (H2O) contains two hydrogen atoms while there is only one star, the Sun, in our solar system.
Me: How about the galaxy?
ChatGPT: The statement retains its accuracy even when extended to the galaxy. A single molecule of water contains two hydrogen atoms, whereas our galaxy, the Milky Way, hosts around 100 to 400 billion stars.
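For anyone who wants to probe this kind of failure themselves, here is a minimal sketch of how the exchange might be reproduced programmatically. It assumes the OpenAI Python SDK (the v1-style client) and the model name "gpt-3.5-turbo"; the dialogue above was with ChatGPT itself, so an API-served model may well answer differently.

```python
# Minimal sketch: reproduce the two-turn exchange via the OpenAI chat API.
# Assumptions: openai Python SDK (v1 client), model "gpt-3.5-turbo",
# OPENAI_API_KEY set in the environment. Responses will vary.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user", "content": (
        "There are more hydrogen atoms in a single molecule of water "
        "than there are stars in the entire Solar System."
    )},
]

# First turn: the Solar System claim (true: 2 hydrogen atoms vs. 1 star).
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)

# Second turn: extend the claim to the galaxy
# (false: 2 atoms vs. hundreds of billions of stars).
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user", "content": "How about the galaxy?"})
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)
```

Note that the second call simply appends the model's first answer and the follow-up question to the message list, which is how multi-turn context is carried when talking to the API rather than the web interface.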
I remain very bullish about artificial intelligence tech in general, and ChatGPT in particular. Nonetheless, it’s important to understand that inducing hallucinations is still trivially easy. To the extent that generative AI tools like ChatGPT are an important indicator of our technological present and a harbinger of our technological future, the people who build these systems must find a way to eliminate hallucinations.