Jingjia Chen, PhD, is a postdoctoral fellow at NYU Grossman School of Medicine and a scientist with the Center for Advanced Imaging Innovation and Research. She develops methods for MRI acquisition and reconstruction in the presence of motion and investigates how information from prior scans may improve new imaging sessions. Research led by Dr. Chen on sharper MRI of the liver was recognized at the 2024 motion correction workshop held by the International Society for Magnetic Resonance in Medicine (ISMRM), and a project she has led on distortion-free imaging of the prostate was distinguished at the ISMRM’s 2025 workshop on body MRI. For the latter work, Dr. Chen also won the young investigator award at the MRinRT 2025 symposium. Our conversation has been edited for clarity and length.
Before joining NYU Grossman School of Medicine for a postdoctoral fellowship, where did you do your PhD and what was the focus of your doctoral research?
I started my PhD in 2017 at UC Berkeley in the electrical engineering and computer sciences department. I had a background in physics, and in my first two years I hopped between various interesting projects, including functional MRI and a signal processing pipeline for neuromodulation experiments. Later, I settled on a source-separation challenge in quantitative susceptibility mapping [QSM]. QSM is an MRI technique used in the human brain that shows iron-rich, paramagnetic areas as bright—usually that’s deep grey matter—and white matter as dark, because it’s richer in myelin, which is diamagnetic. It’s a really nice mapping technique, but there’s a long-standing problem: a voxel is about one cubic millimeter, and that’s small enough for radiology purposes but really big if you think about the molecules inside. So you’re not just imaging the myelin or cell soma; you’re imaging a combination of them. In Alzheimer’s disease, iron deposition is known to be co-localized with protein plaques, and because they have opposing signs in susceptibility mapping, they might look neutral.
Because their signal is averaged over the voxel.
Yes, so I was given the problem of determining how much of the signal comes from the diamagnetic parts of tissue and how much from the paramagnetic parts, using existing data acquired with multi-echo gradient echo, a commonly used sequence for QSM. The signal dynamics are intertwined across echoes because of the two susceptibility sources, and people usually just average them. So I thought, why don’t we model it? And with help from multiple collaborators, we were able to develop and validate a model and algorithm called DECOMPOSE-QSM.
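To make the modeling idea concrete, here is a minimal numerical sketch of the kind of source-separation fit Dr. Chen describes: a voxel’s multi-echo signal is simulated as the sum of a paramagnetic and a diamagnetic pool, and the two contributions are then recovered by least squares. This is not the published DECOMPOSE-QSM algorithm; the echo times, scalings, and two-pool structure are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed echo times (in seconds) for an 8-echo gradient-echo acquisition.
TE = np.linspace(0.004, 0.040, 8)

def signal(params, te):
    """Toy two-source voxel model: a paramagnetic (iron-like) pool and a
    diamagnetic (myelin-like) pool. Both speed up the signal decay, but
    they shift the phase in opposite directions."""
    c_para, c_dia, r2_base = params
    a = 100.0  # relaxivity scaling, arbitrary units (assumed)
    f = 10.0   # frequency shift per unit concentration, in Hz (assumed)
    r2star = r2_base + a * (c_para + c_dia)
    phase = 2 * np.pi * f * (c_para - c_dia) * te
    return (c_para + c_dia) * np.exp(-r2star * te + 1j * phase)

def residuals(params, te, data):
    # least_squares needs real-valued residuals, so split real/imaginary.
    diff = signal(params, te) - data
    return np.concatenate([diff.real, diff.imag])

# Simulate a voxel where the two sources nearly cancel in net susceptibility,
# then recover their separate contributions by fitting the echo dynamics.
truth = np.array([0.6, 0.5, 20.0])
rng = np.random.default_rng(0)
data = signal(truth, TE) + 0.005 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))

fit = least_squares(residuals, x0=[0.3, 0.3, 15.0], args=(TE, data),
                    bounds=([0, 0, 0], [2, 2, 100]))
print("estimated [c_para, c_dia, R2_base]:", fit.x)
```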
What was it about MRI research that encouraged you to dive deeper into the field?
I have been involved in research since I was in college at Nanjing University. At Nanjing, I got intensive training in research and studied chemistry and physics—I coauthored a paper on biosensors using quantum dots, and also worked on trying to resolve the nonlinear dynamics of how the P53 protein regulates the cancer-forming process. In my junior year, I was lucky to have the opportunity to join a ten-week program at UCLA to perform research in a radiation oncology lab, where my project was to quantify the motion blurring caused by respiration on lung CT. And it struck me that, unlike in my research at Nanjing, I could solve a problem using physics and see the benefits of my work almost immediately. It felt both intellectually rewarding and practically valuable. So, for my PhD, I applied to a lot of engineering and medical imaging programs, and I was lucky to get into UC Berkeley.
One difficulty I had when working on my PhD projects was that MR images are acquired by hospitals or imaging sites, and there are variations in parameters like echo time, repetition time, and resolution; people move during the scan, the noise can be bad, or there’s too much acceleration—it’s hard to get data as consistent as you want, scientifically. So, for validation of the DECOMPOSE-QSM algorithm, I started to do my own acquisition and realized that there’s a lot to work on.
I’ve always been interested in visual things. In college, I used to take photographs with a small camera where I would adjust the shutter, the aperture, exposure times, etcetera. I used to go to the countryside just to take pictures of the stars and the Milky Way. MRI to me is like a camera, and I’m interested in using it like a camera. Yes, we can do a lot of modeling and processing, but can we make the image better to begin with? I really wanted to dive into that.
Your principal investigator at NYU Langone is Li Feng. How did the two of you connect?
When I was about a year away from graduating, I started talking to people about how to approach a postdoc position. I didn’t have a strong preference for a specific project, but I had the general idea that I wanted to improve image quality at the acquisition stage—take susceptibility: is there a way we can directly image the susceptibility? Can we directly image the myelin? The iron? Instead of acquiring some multi-echo GRE and then doing filtering and then separating components, can we just directly map them? That sounds like an acquisition problem.
My PhD advisor, Chunlei Liu, sponsored my attendance at a Gordon Research Conference. It’s a very casual, very laid-back conference. There’s hiking, there’s swimming, and you meet people and just start talking and connect. At one point there I was talking with Li Feng about my interest in MRI acquisition and reconstruction. I also met Dan Sodickson there, and with Dan the conversation is always inspiring.
They both encouraged me to try NYU Langone for postdoc training, and Li suggested that I try working on a small project on quantitative free-breathing liver imaging, which later became a power pitch at the ISMRM. Then he invited me here to give a talk and meet people in the department, and I got the feeling that this place is full of scientists—different from any university I’d approached, where you have a few investigators, each leading a large base of students. This was more like an inverted pyramid, which I think is very unique for doing research because you can seek help and collaboration from experts in so many different fields. I thought it was a great opportunity and came here to join Li’s lab.
With both Dr. Feng and Dr. Sodickson, you’re working on some new ways to image longitudinally. That’s in some ways a very new and exploratory area. Can you talk about what imaging longitudinally means?
The rationale behind this is that we look similar over time. You recognize a person even if you haven’t seen them for a year or more, so there are some underlying features linked through time. But what are these features? It’s really hard to explicitly describe them. Translating this idea to the imaging field, where repeated scans for monitoring disease progression are common, we’re looking at whether we can incorporate information from past scans into reconstructing current data. In this project, we started with 4D MRI dynamic imaging—
And 4D MRI means imaging over time during a single session.
Yes, for example, a free-breathing scan of about 5 minutes, where we combine the reconstructed frames into a short movie. But why don’t we zoom out and think about the whole trajectory of you from last year, last month, or next week as frames in a larger, continuous movie? From last week to this week, your breathing is still similar, you are still similar. So, can we use the dynamics information from the past to help us today?
And on a shorter timescale, that approach is already a part of dynamic imaging, because acceleration depends in part on the fact that your liver is similar to itself from a second ago. But how similar is your liver to itself from, say, a year ago?
If you’re an adult, your body doesn’t change much, so there is physiological continuity to explore. In this project, we incorporated a 2D navigator into data acquisition so that it closely tracks dynamic changes in the image. You can think of the effect of respiration on an imaging voxel as an oscillation of contrast change, and maybe you will have a small contrast step when moving to the next imaging session, but then it’s oscillation again. The underlying dynamics are going to look similar, and a lot of the voxels will share the same information, so we use principal component analysis to derive a low-rank representation that captures only the most informative components, and all the dynamics will be a combination of some of these. This is the concept of low-rank subspace reconstruction for dynamic imaging, and it’s allowing us to rethink dynamic imaging at a much bigger time scale—not just seconds or minutes but also days and months—and to make use of information from the past for the present. This is a proof-of-concept project, and we are excited by its potential.
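The subspace idea can be illustrated with a few lines of Python. The sketch below uses made-up voxel dynamics (a breathing oscillation, a slow drift, and a contrast step at a session boundary) and fully sampled data; the actual reconstruction operates on undersampled k-space, so this shows only the low-rank principle, not the pipeline.

```python
import numpy as np

# Toy setup (all values assumed): 500 voxels observed over 200 time frames,
# where every voxel's time course is a mixture of a few shared dynamics.
rng = np.random.default_rng(1)
t = np.linspace(0, 60, 200)                 # time axis in seconds
resp = np.sin(2 * np.pi * 0.25 * t)         # breathing oscillation, ~0.25 Hz
drift = np.linspace(0, 1, 200)              # slow contrast change
step = (t > 30).astype(float)               # contrast step at a session boundary

basis_true = np.stack([resp, drift, step])  # (3, 200) underlying temporal modes
mix = rng.standard_normal((500, 3))         # per-voxel mixing weights
X = mix @ basis_true + 0.05 * rng.standard_normal((500, 200))  # voxel-by-time matrix

# Low-rank subspace: an SVD (the workhorse behind PCA) of the voxel-by-time
# matrix; keeping K temporal components captures almost all of the dynamics.
K = 3
U, s, Vt = np.linalg.svd(X, full_matrices=False)
subspace = Vt[:K]                 # (K, 200) learned temporal basis
coeffs = X @ subspace.T           # project every voxel onto the basis
X_lowrank = coeffs @ subspace     # rank-K approximation of all dynamics

err = np.linalg.norm(X - X_lowrank) / np.linalg.norm(X)
print(f"relative error of the rank-{K} representation: {err:.3f}")
```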
And what is the advantage of being able to use past imaging information for future sessions?
One advantage is acceleration. In order to have a good 4D MRI reconstruction, you need to have someone in the scanner for at least a few minutes to gather enough data, and enough temporal correlation, for reconstruction. But if there’s information from the past that can already be used, then a baseline is already there. So, for one, we can accelerate, which reduces table time. And for radiation therapy in the abdomen, where you need 4D MRI both for treatment planning and during the treatment itself, if you can do 30-second acquisitions instead of 3- to 5-minute acquisitions, that’s going to save so much time and cost.
Another thing is that if you’re relying on data from one session and there’s a problem—maybe the data is corrupted, or the lesion is not so well delineated due to artifacts—then past information could help you recover better images.
The bigger idea is that what we need to see in medicine is the change. Say you have a differently shaped liver. That’s fine if it’s functioning well. But if it starts changing, that may be a reason to worry.
And with the data you’re currently working on, what timespan are you looking at and what are you finding out?
For this pilot study, some of the data were acquired very frequently—we’re looking at scans maybe a few weeks apart. There are also cases where people come in for follow-up scans after more than 200 days, so the range is very wide. Some people even have body-shape changes, but we can still incorporate their information longitudinally into reconstructions, and it still works. Of course, this is traditional linear subspace reconstruction, and we’re not boosting anything with AI yet.
You have also been leading research in which, together with colleagues, you have combined a number of MRI techniques and developed an imaging method that delivers distortion-free prostate imaging, addressing a long-standing problem in this area. Tell me about that project.
I was surprised that a prostate diffusion weighted imaging [DWI] sequence takes six or seven minutes yet the image quality isn’t that good—nothing near the quality of DWI images of the brain. I was very lucky to work with Li Feng, Dan Sodickson, and Hersh Chandarana, among other colleagues. Hersh allowed me to shadow him in the reading room and see how he reads clinical images. Without any training, I couldn’t see anything on the prostate DWI; to me, the resolution was poor, the distortion corrupted the shape of the organ, and the images just looked hazy in general. I really respect radiologists for being able to read these. I feel there’s significant room for advancement in body DWI, particularly given its importance in diagnosing prostate cancer.
So, it’s normal that prostate DWI looks grainy, noisy, and distorted?
The pelvic region is really big, and the prostate is at the center, which makes it harder to get good signal from. Also, so many things are constantly changing: there’s gas, peristalsis, and the patient may have implants nearby. I realized through discussions with Hersh and Li that there’s a huge need for distortion-free imaging. This would help with diagnosis and with MR-guided radiation therapy, which is a new and growing area.
We wanted to use spin echo, because spin echo reverses the effect of B0 inhomogeneities, so you get distortion-free images. However, spin echo takes a long time, which means a longer scan and more motion, and comes with T2 blurring. The diffusion-weighted imaging currently used in the prostate has relied on single-shot echo-planar imaging since the 1990s. We were thinking, can we combine them? Gradient echo to help with speed, but not too much of it, because of distortion; and spin echo to help with distortion, but also not too much, because eventually we’d get blurring and other issues. So, what would be a sequence that has a short gradient echo and a short spin echo? PROPELLER seemed to be a good option: each blade only covers a small part of k-space, so we don’t need that much spin echo or gradient echo, and then we just repeat. And the good thing about PROPELLER is that you oversample the k-space center every time, so it’s intrinsically robust to motion.
The sequence I’m working on—we named it TGSE-PROPELLER-DWI with golden-angle rotation—is very nice and I’m very excited for it to be used one day in the clinic. There’s of course a lot of development and testing still to be done, and we are also looking into using AI to improve the signal-to-noise ratio [SNR] and reconstruction speed. This sequence opens up a lot of ideas.
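For readers unfamiliar with PROPELLER, the sketch below illustrates the sampling geometry Dr. Chen describes: narrow Cartesian blades rotated from shot to shot (here by a golden-angle increment; the blade size and exact rotation schedule are assumptions) so that every blade re-samples the k-space center.

```python
import numpy as np

# Assumed toy geometry: each PROPELLER blade is a narrow Cartesian strip of
# k-space (16 lines x 128 readout points), rotated from blade to blade.
GOLDEN_ANGLE = np.pi * (3 - np.sqrt(5))  # ~137.5 degrees (assumed increment)
n_blades, n_lines, n_read = 12, 16, 128

ky = (np.arange(n_lines) - n_lines / 2 + 0.5) / n_read  # narrow phase-encode extent
kx = (np.arange(n_read) - n_read / 2 + 0.5) / n_read    # full readout extent
KX, KY = np.meshgrid(kx, ky)
blade = np.stack([KX.ravel(), KY.ravel()])              # (2, samples per blade)

samples = []
for b in range(n_blades):
    theta = b * GOLDEN_ANGLE
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    samples.append(rot @ blade)                         # rotate the whole strip
k = np.concatenate(samples, axis=1)

# Every blade passes through the k-space center, so the central region is
# re-sampled by all blades: the redundancy that makes PROPELLER motion-robust.
radius = np.hypot(k[0], k[1])
center_fraction = np.mean(radius < 0.06)
print(f"{center_fraction:.1%} of all samples lie in the central disc, "
      f"which every one of the {n_blades} blades re-acquires")
```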
Through your work on this project and the longitudinal imaging project we talked about earlier, do you think you’re learning what you set out to learn with the vision of using MRI to take better images at the beginning of the imaging chain?
Absolutely. My first few projects at NYU Langone were on sub-second dynamic contrast-enhanced MRI, which I presented last year at the motion correction workshop. Working with Li, I’ve had tons of opportunities to scan subjects, trying to balance the SNR and scan time, working through parameters to control fat suppression, and thinking about where the artifacts come from, how to correct for them—it’s all really fascinating. It’s also been very rewarding to see these projects spark interest among clinicians.
At NYU Langone we not only have strong collaboration among scientists but also a lot of MRI scanners throughout the system. You can have an idea and quickly find scan time to test it. We also have a great team of research coordinators who help with subject recruitment and an amazing team of MRI technologists. All this support—you don’t get that without a big center like this and without the leadership’s commitment to research. And the access to doctors is awesome—they provide perspective on the clinical significance of my research and give me ideas about clinical needs, and I file those in the back of my head.
You sound so enthusiastic and animated when you talk about what you’re learning and about experimenting with scans and scanner settings. Where do you see your research going at this point in your scientific career?
I still feel that I have bigger ambitions, and at some point in the next five years I hope to lead my own independent lab, build a team around it, and work on making imaging better. And I always want to remind myself to zoom out and think about the fundamental purpose of medical imaging and how we can improve the whole workflow of providing care.
Currently, radiologists review large volumes of images, integrate clinical information from other sources, and consult with colleagues on borderline cases. It’s a thorough process but there can be ambiguity and inefficiency there, which means there’s room for improvement, and I think we can do better.