In the previous article, we heard about the possibilities of Artificial Intelligence, or AI, going into space and exploring the more distant planets and moons in our solar system. But one of the problems that has to be ironed out before we can rely on these systems is curing AI computers of hallucinating. What do we mean when we say a computer is hallucinating?
If you have used any of the many AI assistants attached to computer programmes, you will probably have received a nonsensical answer to one of your questions. It appears that as AI becomes "smarter", it is making more mistakes and giving out false information. This is what is meant by an AI hallucinating.

OpenAI, the creators of ChatGPT, has done its own research on this subject. ChatGPT is now in its fifth generation, and OpenAI found that one generation was twice as likely to hallucinate as the version before it. In fact, version three was found to hallucinate around 33% of the time, whilst version four was up to 48%. It is important to note that this affects all AI systems at the moment; we are only using OpenAI as an example because it has released this research.
The reason for this is not that each generation is worse than the previous one. Each generation gives more accurate answers, but as the number of right answers increases, so does the number of wrong ones. These AI chatbots use Large Language Models, or LLMs, which means they have absorbed huge amounts of information. The crux of the matter is to understand that an AI hallucination is not an error in itself; it is a feature of how the system operates. It is similar to the way we humans try to solve problems.
The similarity to human problem-solving lies in not simply picking whichever answer is statistically most likely. Instead, a reasoning AI model will break a problem down into its component parts and attempt to solve each of them separately.
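The contrast between the two approaches can be sketched in a few lines of Python. This is a toy illustration only, not how any real chatbot is built: the candidate answers and their probabilities are made up for the example, and the "reasoning" function simply decomposes one multiplication into parts.

```python
# Toy illustration (hypothetical numbers, not a real model):
# two ways a system might answer "What is 17 x 24?".

# 1. Probability-style: pick the single most likely answer from a
#    made-up distribution over candidate answers.
candidate_answers = {"408": 0.40, "398": 0.35, "418": 0.25}
probabilistic_answer = max(candidate_answers, key=candidate_answers.get)

# 2. Reasoning-style: break the problem into component parts
#    and solve each part separately, then combine them.
def reasoned_answer(a: int, b: int) -> int:
    tens_part = a * (b - b % 10)   # 17 x 20 = 340
    units_part = a * (b % 10)      # 17 x 4  = 68
    return tens_part + units_part  # 340 + 68 = 408

print(probabilistic_answer)   # "408", but only because the invented
                              # distribution happens to favour it
print(reasoned_answer(17, 24))  # 408, derived step by step
```

The point of the sketch is that the probability-style answer is only right when the underlying distribution happens to be right, whereas the decomposed version arrives at the answer by working through the parts.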
Can we not just design it not to operate that way? The thing is, such systems sometimes have to hallucinate in order to be creative. Otherwise, they would only ever look up answers to specific questions using what they have already absorbed. Hallucination is part of what allows an AI to improve itself and keep presenting more novel and creative solutions to problems.
This leaves us with a problem. An AI system will give you false, hallucinated answers in just the same fluent way as it delivers correct solutions, which means it can become very difficult to know when an answer is the correct one. I suppose, though, that with more and more practice, the hallucinated answers will become scarcer. It is a hope, at least.