## Google intervenes on AI health summaries

Google has removed some of its AI-generated health summaries after a Guardian investigation highlighted the risk that false and misleading information posed to users. The decision came after the newspaper found that Google's generative AI feature displayed inaccurate health information at the top of search results, leading seriously ill patients to mistakenly believe they were in good health.

## Details of the inaccuracies

Google disabled specific queries, such as "what is the normal range for liver blood tests," after experts contacted by The Guardian flagged the results as dangerous. The investigation highlighted a critical error regarding pancreatic cancer: the AI suggested patients avoid high-fat foods, a recommendation that contradicts standard medical guidance to maintain weight and could jeopardize patient health. Despite these findings, Google deactivated the summaries only for the liver test queries, leaving other potentially harmful answers accessible.

Searching for normal liver test values generated raw data tables (listing specific enzymes such as ALT, AST, and alkaline phosphatase) without essential context. The AI did not account for patient demographics such as age, sex, and ethnicity. Experts warned that because the AI model's definition of "normal" often differed from actual medical standards, patients with serious liver conditions might mistakenly believe they are healthy and skip necessary follow-up care.

Companies that develop generative AI models must pay close attention to validating health data in order to avoid spreading misinformation that can have serious consequences for users' health.