Lab Talk

Florian Knoll on Ideas Ahead of Their Time, Hallucinations, and the Next 10 Years

Florian Knoll, incoming chair of imaging at University of Erlangen–Nuremberg, talks about his background, not being greedy, and why he does what he does.

Florian Knoll, PhD, was among the first developers of machine learning reconstruction of magnetic resonance images from raw data. He is an outgoing assistant professor of radiology at NYU Langone Health and the incoming chair of imaging at Friedrich-Alexander University Erlangen-Nürnberg (FAU). This conversation has been edited for clarity and concision.

You joined NYU Langone in 2013. Why did you choose this research group in particular and what were your aspirations back then?

I was doing a postdoc as a continuation of my PhD at Graz University of Technology, in Austria, and my goal was to go abroad for a couple of years. I spoke to Tobias Block—we had met at the ISMRM in the first year of my PhD—and Tobias put me in touch with Dan Sodickson. Of course, I knew Dan’s papers, but when we talked, I was just blown away by Dan’s vision, his presence, his personality.

I had other options for postdoctoral training abroad and was indifferent about New York, but the deciding factor was that Dan was the person I wanted to work with.

Initially, your research at NYU focused on iterative reconstruction and compressed sensing techniques. Today your work revolves mainly around machine learning reconstruction of MR images. When did you first begin thinking of this application of artificial intelligence and how did that happen? 

When I started my PhD, I had no idea about imaging at all. The reason I got into MRI was that the type of optimization involved in inverse problems in imaging was very similar to the optimization problems I had worked with during my master’s in pattern recognition—that was my entry to medical imaging and to MR. So, because I had prior training involving neural networks, support vector machines, and those types of things, it was always kind of in the back of my mind: what can we do with machine learning? The connection was there. 

But it did not become specific until 2015. I was at the Applied Inverse Problems Conference in Helsinki and met my old colleague Thomas Pock from Graz University of Technology Institute of Computer Graphics and Vision. We started talking. Deep learning was breaking through in computer vision, and we decided we wanted to give it a try in MR image reconstruction. Around that time, Kerstin Hammernik started a PhD in Thomas’s group, and she began working on this—running experiments and really giving it a try. We had almost daily calls between 2015 and 2017. That’s when my research shifted and this became my main project.

During those two years, machine learning was not yet on everybody’s radar in the imaging community. My first R01 grant proposal about this was a classic case of something being too early for its time. All the reviewer comments were: this is a crazy idea, it can never work, and just doesn’t make sense as a concept—why would you even want to do machine learning image reconstruction?

But here again, and this is telling: the very first time I talked to Dan Sodickson about this in his office—maybe ten months to a year after we started working on it with Thomas and Kerstin—Dan immediately saw it. It’s this skill that he has, that when he hears an idea that he finds exciting, he’s totally behind it.

[Florian Knoll and Thomas Pock resubmitted their proposal, Learning an Optimized Variational Network for Medical Image Reconstruction, and the NIH granted R01 funding for the research in 2018. On the eve of the publication of this post, a search on NIH RePORTER for “machine learning” returned more than 140 active projects funded by the National Institute of Biomedical Imaging and Bioengineering.]

From left: Florian Knoll, PhD; Kerstin Hammernik, PhD; and Thomas Pock, PhD, in the audience at the 2018 meeting of the ISMRM in Paris. Photo: Pawel Slabiak/NYU Langone Health.

What do you think is the ultimate end of this particular line of inquiry or development?

A big milestone will be when all the major scanner vendors have this technology available as a clinical product. When it gets used on a daily basis and makes the life of people who are in pain in the scanners better, then I will say I’m very happy. Because that’s why I’m doing this work and not, say, number theory or portfolio management.

But in terms of the technical development, it is a continuum. There’s still a lot of room for improvement and so many things that I and the entire field are now working on: improving the technology, making it more robust, more flexible in terms of applications, understanding the theory better, investigating the clinical potential. So, I hope this will keep us busy for the next 10 years before we say it’s done and move on to the next thing.

In reactions to research on machine learning reconstruction of MRI, people often comment on so-called hallucinations—there is a preoccupation with them. What is your current thinking on hallucinations?

Yes, it’s a valid point. Our research team was arguably among the most critical of our own work. That’s naturally what happens when you start doing something that you’re so much more familiar with than most other people: you see the limitations very early on. We have written at least two papers where we specifically talk about what happens and what can go wrong.

The biggest problems, essentially, are whether there is a pathology that you don’t see and whether you falsely report a pathology when there is none. This is not necessarily a new discussion that has just appeared now with machine learning. It’s the same discussion that people had with compressed sensing, with the development of non-linear image reconstruction. 

The really critical part is that, in my experience, radiologists are fine with imaging artifacts as long as they can clearly recognize artifacts as such. If an image looks terrible, they can see something has gone wrong and will not trust the image or its affected areas. And this is where the machine learning models are challenging, because they have the ability to produce images that are essentially indistinguishable from a conventional scan. What people refer to as hallucinations are the types of artifacts that you can no longer identify at a glance as technical errors. That is a challenge.

The reason it happens, in nine out of ten cases I’ve seen, is that people get greedy. For every new method that you have, there is a point where it will bend and a point where it will break—and you can break anything. In our knee project, our goal is to reduce scan time from a 10-minute protocol to a 5-minute protocol. We spend a lot of time finding and fine-tuning the operating point where you can still get reasonable results. But if you go beyond this—if you say: I’m not satisfied with five minutes, I want one minute—then the thing breaks. It breaks just like any other technology but in a critical way from a diagnostic perspective, because it’s hard for the radiologist to determine that something has gone wrong.

You are about to assume the position of chair of imaging at FAU. What is your vision for this new role and are there particular aspects of the work you’ve done at NYU Langone that you feel have helped you prepare for this next step?

My goal is to build something there that roughly resembles CAI2R, and not just in terms of the work projects—which are obviously important—but mostly in terms of the culture.

This collaborative, open culture that we have, where people like working together and where collaboration is encouraged, where individual research groups talk to each other and people have a certain freedom to pursue their own ideas and work across teams—this is exactly the type of culture that can bridge basic science, engineering, and medicine.

My main appointment at FAU is at the school of engineering but with a joint appointment at the school of medicine, working with clinical radiologists and building exactly this kind of bridge. The arrangement I envision may be somewhat analogous to the ongoing research partnership between our Center at NYU Langone’s radiology department and NYU Center for Data Science.

During my PhD in Graz, I was in the engineering school and I also worked with people at hospitals. Then, in New York, I’ve been in the other role—at the radiology department. And now in Erlangen-Nürnberg, I’ll be back at the engineering school. I feel that I have seen both sides now, and that’s why I hope I can make it work: get a similar collaborative environment going there and cultivate a group of people with the same spirit and culture that we have here.

Related Resource

  • Raw k-space data and DICOM images from thousands of MRI scans of the knee, brain, and prostate, curated for machine learning research on image reconstruction.
