
Wednesday, March 30, 2022

AI has yet to take over radiology

Dan Elton, "AI for medicine is overhyped right now: Should we update our timelines to AGI as a result?" More is Different, March 29, 2022.

AI for medicine has a lot of promise, but it's also really overhyped right now. This post explains why and asks if we should update our timelines to existentially dangerous AI as a result (tldr: personally I do not).

Back in 2016, deep learning pioneer Geoffrey Hinton famously said, "people should stop training radiologists now - it's just completely obvious within 5 years deep learning is going to do better than radiologists. It might be 10 years, but we've got plenty of radiologists already."

Hinton went on to make similar statements in numerous interviews, including one in The New Yorker in 2017. It’s now been 4.5 years since Hinton's prediction, and while there are lots of startups and hype about AI for radiology, in terms of real-world impact not much has happened. There's also a severe shortage of radiologists, but that's beside the point.

Don’t get me wrong, Hinton’s case that deep learning could automate much of radiology remains very strong.[1] However, what is achievable in principle isn’t always easy to implement in practice. Data variability, data scarcity, and the intrinsic complexity of human biology all remain huge barriers.

I’ve spent three years researching AI for radiology applications, including with one of the foremost experts in the field, Dr. Ronald Summers, at the National Institutes of Health Clinical Center. When I discuss my work with people outside the field, I invariably find that they are under the impression that AI is already being heavily used in hospitals. This is not surprising given the hype around AI. The reality is that AI is not yet guiding clinical decision making, and my impression is that only around 3-4 AI applications are in widespread use in radiology. These applications are mainly used for relatively simple image analysis tasks like detecting hemorrhages in MRI, and hospitals are mainly interested in using AI for triage purposes, not disease detection and diagnosis.[2]

Whoops!

As an insider, I hear about AI systems exhibiting decreased performance after real world deployment fairly often. For obvious business reasons, these failings are mostly kept under wraps. [...] Finally, we have IBM’s Watson Health, a failure so great that rumors of a “new AI winter” are starting to float around. Building off the success of their Watson system for playing Jeopardy, about 10 years ago IBM launched Watson Health to revolutionize healthcare with AI. [...] Well, earlier this year IBM sold off all of Watson Health “for parts” for about $1 billion.

Knowledge helps:

Radiologists have a model of human anatomy in their head which allows them to easily figure out how to interpret scans taken with different scanning parameters or X-ray images taken from different angles. It appears deep learning models lack such a model.

If robust radiological diagnosis requires such a model, that's going to be very hard for a machine to acquire. Not only does the machine have to acquire the model, it has to know how to use it. Those are not very well-defined tasks.
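In machine-learning terms, the fragility described above is a distribution-shift problem. As a rough illustration only (the data, the toy "protocols," and the model below are invented for this sketch and are not from Elton's post or any real radiology system), here is how a classifier fit to images acquired one way can lose accuracy on images acquired another way:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_scans(n, contrast=1.0, noise=0.1):
    # Toy stand-in for scans: one latent finding plus protocol-dependent
    # contrast and noise. Purely illustrative, not real imaging data.
    finding = rng.integers(0, 2, size=n)               # 1 = abnormality present
    signal = finding[:, None] * np.array([1.0, 0.5])   # fixed "anatomy-like" signal
    return contrast * signal + noise * rng.normal(size=(n, 2)), finding

# Fit on acquisition protocol A, then test on A and on a shifted protocol B.
X_train, y_train = make_scans(2000, contrast=1.0, noise=0.1)
X_a, y_a = make_scans(500, contrast=1.0, noise=0.1)
X_b, y_b = make_scans(500, contrast=0.4, noise=0.4)    # different scanner settings

clf = LogisticRegression().fit(X_train, y_train)
print("same-protocol accuracy   :", accuracy_score(y_a, clf.predict(X_a)))
print("shifted-protocol accuracy:", accuracy_score(y_b, clf.predict(X_b)))
```

The numbers don't matter; the pattern does: performance that looks fine on data matching the training distribution can fall apart when acquisition parameters change, which is exactly the kind of variation a radiologist's internal model of anatomy lets them shrug off.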

There's more at the link.

H/t Tyler Cowen.
