"As difficult as the pursuit of truth can be for Wikipedians, though, it seems significantly harder for A.I. chatbots. ChatGPT has become infamous for generating fictional data points or false citations known as "hallucinations"; perhaps more insidious is the tendency of bots to oversimplify complex issues, like the origins of the Ukraine-Russia war. One worry about generative A.I. at Wikipedia — whose articles on medical diagnoses and treatments are heavily visited — concerns health information. A summary of the March conference call captures the issue: "We're putting people's lives in the hands of this technology — e.g. people might ask this technology for medical advice, it may be wrong and people will die." This apprehension extends not just to chatbots but also to new search engines connected to A.I. technologies. In April, a team of Stanford University scientists evaluated four engines powered by A.I. — Bing Chat, NeevaAI, perplexity.ai and YouChat — and found that only about half of the sentences generated by the search engines in response to a query could be fully supported by factual citations. "We believe that these results are concerningly low for systems that may serve as a primary tool for information-seeking users," the researchers concluded, "especially given their facade of trustworthiness."