Journal Club – Nursing Education Network

Artificial intelligence is increasingly used in medical devices, but it does not always get everything right, and sometimes its mistakes look convincing. This journal club article reviews Granstedt et al. (2025), Hallucinations in medical devices, which defines hallucinations as plausible errors made by the AI systems inside these devices. Such errors may be clinically significant (e.g. a false change that alters the diagnosis) or clinically insignificant (e.g. a minor additional feature that does not affect care). Unlike traditional imaging artifacts that clinicians are trained to recognize, AI hallucinations can be subtle and difficult to detect, even for experts.

The authors examine hallucinations across the AI systems used in health care. They argue that hallucinations cannot be eliminated entirely, as they are a built-in limitation of current neural network methods, and that reducing them often comes at a cost in performance.

This creates new safety and oversight challenges, especially when multiple AI systems are interconnected within a clinical workflow.

AI devices do not have to be perfect to be useful, but regulators, developers and clinicians must actively measure the frequency and impact of errors, design better evaluation studies, and recognize hallucinations as a distinct, patient-relevant risk in AI-enabled care.
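The paper's distinction between clinically significant and insignificant hallucinations suggests a simple way an evaluation study could report its results: have expert reviewers label each AI output, then tally the rates separately. Below is a minimal Python sketch of that bookkeeping. It illustrates the idea only and is not code from the paper; the `Finding` fields and the sample data are hypothetical.

```python
# Minimal sketch (hypothetical, not from Granstedt et al.) of tallying
# hallucination rates stratified by clinical significance.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    case_id: str
    hallucination: bool           # expert judged the AI output to contain a hallucination
    clinically_significant: bool  # e.g. a false change that could alter the diagnosis

def hallucination_rates(findings: list[Finding]) -> dict[str, float]:
    """Return overall and significance-stratified hallucination rates."""
    n = len(findings)
    counts = Counter()
    for f in findings:
        if f.hallucination:
            counts["any"] += 1
            counts["significant" if f.clinically_significant else "insignificant"] += 1
    return {k: counts[k] / n for k in ("any", "significant", "insignificant")}

# Hypothetical reader-study labels for three cases.
study = [
    Finding("case-01", hallucination=True,  clinically_significant=True),
    Finding("case-02", hallucination=True,  clinically_significant=False),
    Finding("case-03", hallucination=False, clinically_significant=False),
]
print(hallucination_rates(study))
# {'any': 0.67, 'significant': 0.33, 'insignificant': 0.33} (approximately)
```

Reporting the significant and insignificant rates separately, rather than a single error rate, keeps the patient-relevant risk visible instead of letting it be diluted by harmless errors.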

Granstedt, J., KC, P., Deshpande, R., Garcia, V., & Badano, A. (2025). Hallucinations in medical devices. Artificial Intelligence in the Life Sciences, 100145.
