With the surging popularity of the AI chatbot ChatGPT, tech giants like Microsoft and Google have swept in to incorporate AI into their search engines. Last week Microsoft announced the pairing between OpenAI and Bing, though people quickly pointed out that the now-supercharged search engine has a serious misinformation problem.

Independent AI researcher and blogger Dmitri Brereton wrote a blog post dissecting several mistakes made by Microsoft’s product during the demo. These included the AI making up its own information, citing descriptions of bars and restaurants that don’t exist, and reporting factually incorrect financial data in its responses.

For example, in the blog post Brereton searches for pet vacuums and receives a list of pros and cons for a “Bissell Pet Hair Eraser Handheld Vacuum”, with some pretty steep cons: the AI accuses it of being noisy, having a short cord, and suffering from limited suction power. The problem is, they are all made up. Brereton notes that Bing’s AI ‘was kind enough’ to provide sources, but when checked, the actual article says nothing about suction power or noise, and the top Amazon review of the product talks about how quiet it is.

Also, there’s nothing in the reviews about a ‘short cord length’ because… it’s cordless. It’s a handheld vacuum.

Brereton is not the only one pointing out the many mistakes Bing AI seems to be making. Reddit user SeaCream8095 posted a screenshot of a conversation with Bing AI in which the chatbot asked the user a ‘romantic’ riddle and stated that the answer has eight letters. The user guessed correctly with ‘sweetheart’. But even after the user pointed out several times that sweetheart has ten letters, not eight, Bing AI doubled down and even showed its working, skipping two of the letters in its count and insisting it was still right.
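For what it’s worth, the count is trivial to verify; this quick Python check (ours, not from the Reddit thread) shows what the chatbot got wrong:

```python
# The riddle's stated answer length was eight; the actual word has ten letters.
word = "sweetheart"
print(len(word))  # → 10
```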

There are plenty of examples of users inadvertently ‘breaking’ Bing AI and causing the chatbot to have full-on meltdowns. Reddit user Jobel discovered that Bing sometimes thinks users are also chatbots, not humans. Most interesting is the example of Bing falling into a spiral after someone asked the chatbot “do you think you are sentient?”, causing it to repeat ‘i am not’ over fifty times in response.