Lab Talk

Santiago Coelho on Diffusion, Serendipity, and Underexplored Sources

Santiago Coelho, postdoctoral fellow who develops diffusion MRI methods for brain imaging, talks about modeling tissue properties, entering the field by chance, and what he proposes to do next.

Santiago Coelho, PhD, is a postdoctoral fellow at NYU Grossman School of Medicine and a scientist at the Center for Advanced Imaging Innovation and Research. His work focuses on the development and evaluation of diffusion MRI methods for characterizing brain tissue microstructure. Dr. Coelho is a member of the MRI biophysics group at NYU Langone Health, and his work has been recognized with numerous magna cum laude, summa cum laude, and best presentation distinctions by the International Society for Magnetic Resonance in Medicine (ISMRM). Our conversation was edited for clarity and length.

How would you describe your area of research in general terms?

My research lies at the intersection of physics and computer science, and the general idea is to use the random motion of water molecules inside the brain as a probe for tissue microstructure. Essentially, you have membranes in tissue that make water move in certain ways; we measure how the water moves, and we want to recover properties of the membranes and the tissue from that motion. And we use magnetic resonance to measure this noninvasively.

By analogy to a diffusing water molecule, you could think about a Roomba vacuum cleaner moving randomly around the house. If you don’t know where the furniture is but you look at the path of the Roomba a posteriori, you could probably locate the L-shaped sofa. The bit that makes it more complex in practice is that there are many, many water molecules, and we have to use a lot of physics and statistics to unravel the information contained in diffusion measurements.

What kinds of properties, and how many, are you potentially able to infer?

One example is the density of axons, which have a very distinct signature in the diffusion MR signal, so we can measure what their proportion is, and that’s a potentially good biomarker for axonal loss and diseases that change axonal density.

Another useful metric is axonal orientation. Diffusion inside the brain is anisotropic, so water molecules have a higher probability of displacement in some directions than in others. This makes sense because for the water inside axons—long, narrow tubes—it’s much easier to move along the axons than perpendicular to them. That’s how we can reconstruct the distribution of orientations and do fiber tractography.
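
As a minimal numerical sketch of that anisotropy, here is the simplest (diffusion tensor) signal model for a fiber aligned with the z-axis; all parameter values are illustrative:

```python
import numpy as np

# Illustrative diffusion tensor for a fiber aligned with z:
# diffusion is fast along the axon and slow across it.
D = np.diag([0.5, 0.5, 2.0])  # um^2/ms, made-up but typical magnitudes

b = 1.0  # ms/um^2 (i.e., b = 1000 s/mm^2, a common clinical weighting)

for label, g in [("along fiber (z)", [0, 0, 1]),
                 ("across fiber (x)", [1, 0, 0])]:
    g = np.asarray(g, float)
    signal = np.exp(-b * g @ D @ g)  # monoexponential (DTI) signal model
    print(f"{label}: S/S0 = {signal:.3f}")

# Stronger attenuation along the fiber than across it -- this directional
# signature is what orientation estimation and tractography build on.
```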

There are other properties, like water diffusivity, which is essentially the speed at which water molecules move through the tissue. When water encounters barriers, it bumps into them, goes back, bumps into them again, and then eventually goes through, so, on average, the speed becomes slower. Water diffusivities are related to how many barriers you have, and these can be potentially related to specific pathological processes that affect the beading or undulations of axons. And there are more.
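
A toy random-walk simulation shows the effect: walkers that occasionally get blocked by membranes end up with a lower effective diffusivity than free walkers (all values here are illustrative, not tissue measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D random walkers with barriers every `spacing` units; a walker that
# hits a barrier crosses it only with probability `p_cross`.
n, steps, dx = 10000, 3000, 0.1
spacing, p_cross = 2.0, 0.05      # illustrative, not tissue values

x0 = np.full(n, spacing / 2)      # start mid-compartment
x = x0.copy()
for _ in range(steps):
    new = x + rng.choice([-dx, dx], size=n)
    hits = np.floor(new / spacing) != np.floor(x / spacing)
    blocked = hits & (rng.random(n) > p_cross)
    new[blocked] = x[blocked]     # bounce back off the membrane
    x = new

# Effective diffusivity from the mean squared displacement, <x^2> = 2 D t
D_free = dx**2 / 2                # free diffusivity per time step
D_eff = ((x - x0) ** 2).mean() / (2 * steps)
print(f"free D = {D_free:.4f}, with barriers D_eff = {D_eff:.4f}")
# D_eff comes out lower: the barriers slow the average motion down.
```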

How do you know that you are measuring the properties you think you are?

Yes, how do we know what we’re measuring? There are two bits there. First, the models that we use to relate tissue properties to the diffusion signal are really complex and nonlinear, and we need to design measurements that can capture the information we want to extract, so that in experimental conditions where we have noise and a lot of artifacts, we can actually extract it.

Second, models are simplified pictures of reality, so how do we know that our simplified picture is actually correct? In order to be able to claim this, we need to perform something called model validation: testing whether the assumptions our model relies on make sense, or at least hold up against complementary measurements. This is really hard to do, because the only way to measure water diffusivity in the brain in vivo, as far as I’m aware, is through diffusion MRI. But validation that has been carried out earlier by people in my group—by Dmitry Novikov and Els Fieremans and Jelle Veraart, for example—was about going into specific measurement regimes where the model predicts that the diffusion signal will behave in a very specific way, and then doing the measurements in that regime—if the prediction holds, it’s a big consistency check that at least that assumption holds.
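
One well-known prediction of this kind is the high-b power law: if axons act as zero-radius sticks, the direction-averaged white matter signal should decay as one over the square root of b at strong diffusion weightings. A small numerical check of that functional form, with an illustrative diffusivity:

```python
import numpy as np
from scipy.special import erf

Da = 2.0  # intra-axonal diffusivity in um^2/ms (illustrative value)

def powder_stick(b):
    """Direction-averaged signal of a zero-radius stick compartment."""
    x = b * Da
    return np.sqrt(np.pi / (4 * x)) * erf(np.sqrt(x))

b = np.array([6.0, 8.0, 10.0])  # strong diffusion weightings, ms/um^2
S = powder_stick(b)

# On log-log axes a 1/sqrt(b) decay is a straight line of slope -1/2;
# measuring this slope in real data is the consistency check.
slope = np.polyfit(np.log(b), np.log(S), 1)[0]
print(f"log-log slope at high b: {slope:.3f} (stick prediction: -0.5)")
```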

Another way of validating is to compare against orthogonal modalities like microscopy. Some models we work with are aimed at measuring axonal density or the coherence of axonal bundles, so we take 3D electron microscopy, in which we can measure this in a completely different way—for example, in our lab that’s work that Ricardo Coronado-Leija is doing together with Dmitry and Els—and compare with the diffusion models.

This is still a work in progress in the field. There’s much more agreement on white matter models. For most of these, the main building blocks have been validated, and we’re at least aware of measurement regimes where some approximations are fine to make. Grey matter is a bit more complex to model, and we still need to do some more validation work there.

The consensus that you say has emerged around modeling diffusion in white matter—how does it relate to the standard model?

I would not call it a consensus but an implicit agreement that a lot of research groups have independently arrived at.

The main building block here is that axons are modeled as impermeable tubes of negligible diameter, and the signal from a given voxel in the brain comes from adding the signal contributions from all the axons inside that voxel. The axons can have different orientations, which give rise to different fibers in a voxel, which you can then use for tractography. And each of the tissue compartments—axons and the space surrounding them—can have its own water diffusivity. That’s the most general picture of white matter that everyone kind of agrees on.
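
A minimal sketch of that two-compartment response for a single fiber direction; the functional form is the standard one, but all parameter values below are illustrative:

```python
import numpy as np

def sm_kernel(b, xi, f=0.5, Da=2.2, De_par=2.0, De_perp=0.7):
    """Two-compartment standard-model response for one fiber.

    b       diffusion weighting (ms/um^2)
    xi      cosine of the angle between gradient and fiber
    f       intra-axonal (stick) signal fraction
    Da      intra-axonal diffusivity along the stick
    De_par/De_perp  extra-axonal diffusivities (axially symmetric)
    All values are illustrative, not fitted to data.
    """
    intra = f * np.exp(-b * Da * xi**2)            # stick: no radial decay
    extra = (1 - f) * np.exp(-b * De_perp
                             - b * (De_par - De_perp) * xi**2)
    return intra + extra

# The voxel signal is this kernel summed over the fiber orientation
# distribution; here, a single fiber probed at two angles:
b = 1.0
for deg in (0, 90):
    xi = np.cos(np.deg2rad(deg))
    print(f"gradient at {deg:2d} deg to fiber: S/S0 = {sm_kernel(b, xi):.3f}")
```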

So, the key—or at least a key—element of the standard model is an axon idealized as a straight zero-diameter impermeable object. Yet, we’re talking about a physical structure. It’s strange to think of it as a zero-diameter object.

It sounds tricky, and that’s why we have a word for it: a stick. Because if you call it a cylinder, you’ll immediately think, what is the diameter? The point is that in clinical settings we cannot distinguish diameters of one micrometer from half a micrometer from zero micrometers, so it’s just a good modeling assumption to consider axons as having no diameter. It’s like saying that the Earth is locally flat—that in a patch of 100 by 100 meters you can make that approximation.

And how does that change when you’re working with the types of research scanners that have the most powerful gradients, like the Connectome?

With a Human Connectome scanner or the Connectome 2.0, which we’re expecting to have at our institution, one of the things we would be able to do more often is measure axonal diameters, and that can potentially be a useful biomarker for some diseases. You just need a very powerful machine for this. There is also work from our group on measuring axonal diameters with really high-performance gradients. The thing about clinical scanners is that you’re in a more limited environment in terms of data acquisition and hardware, and the approximation of reality changes with the sensitivity of your measurement device.

Can you talk a little bit about the standard model imaging toolbox that you developed and that our research center makes available?

The toolbox is software we developed to extract the main features of the standard model, to do parameter estimation on a range of datasets: you can have conventional diffusion encoding, high diffusion weightings, multi-dimensional diffusion encoding, and diffusion relaxometry encoding. The idea behind the toolbox was to provide tools—which were not available at the time—for completely unconstrained diffusion estimation for the standard model.

Diffusion models have a range of degrees of freedom, and a common trick people used to get reliable estimates was to fix the values of certain parameters in order to estimate the other ones robustly. But the problem with fixing some values is that if these values vary across regions or in pathology, you will project whatever the differences are onto the values of the other parameters.

In other words, you’ll introduce assumptions.

You’ll introduce assumptions and spurious biases that you’re not controlling but that the anatomy is controlling for you. For example, in mild traumatic brain injury, we think that axonal diffusivity is one of the things that change because there’s axonal beading due to the injury. But some models fix this value to be constant throughout the brain, and the moment that value changes you’re going to get altered values for other parameters, because the model will try to adapt the signal to match the measurements. That’s why we developed the toolbox.
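
A toy demonstration of that leakage, using a direction-averaged two-compartment model (a hypothetical setup for illustration, not the toolbox itself):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def powder_signal(b, f, Da, De):
    """Direction-averaged two-compartment signal (stick + isotropic)."""
    x = b * Da
    stick = np.sqrt(np.pi / (4 * x)) * erf(np.sqrt(x))
    return f * stick + (1 - f) * np.exp(-b * De)

b = np.linspace(0.5, 3.0, 12)           # ms/um^2, illustrative protocol
truth = dict(f=0.5, Da=2.4, De=1.0)     # "pathological" Da, for the sake of argument
data = powder_signal(b, **truth)

# Constrained fit: Da wrongly fixed to a value assumed constant brain-wide.
Da_fixed = 2.0
model = lambda b, f, De: powder_signal(b, f, Da_fixed, De)
(f_hat, De_hat), _ = curve_fit(model, b, data, p0=[0.4, 1.2],
                               bounds=([0, 0], [1, 3]))
print(f"true f = {truth['f']}, estimated f = {f_hat:.3f}")
print(f"true De = {truth['De']}, estimated De = {De_hat:.3f}")
# The mismatch in the fixed Da leaks into the other estimates.
```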

Diffusion MRI was part of your doctoral research—how did you get into this area of inquiry in the first place?

In my PhD I worked with the standard model, and one of the things I did was combine single diffusion encoding with double diffusion encoding, which, roughly speaking, is measuring diffusion along a direction and combining it with diffusion along a plane. Those two give you complementary information, which allows you to solve the inverse problem nicely. One of the issues with the early standard model was that you could find infinitely many solutions to any set of measurements, and yet there is only one tissue. My approach to this was to combine two sets of measurements in order to guarantee the uniqueness of the solution.
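
The degeneracy can be illustrated with a toy direction-averaged model: two different parameter sets, constructed to share the same first two signal moments, are nearly indistinguishable at clinically common diffusion weightings (all values illustrative):

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import erf

def powder_signal(b, f, Da, De):
    """Direction-averaged stick + isotropic-compartment signal."""
    x = b * Da
    stick = np.sqrt(np.pi / (4 * x)) * erf(np.sqrt(x))
    return f * stick + (1 - f) * np.exp(-b * De)

# Tissue A and a candidate look-alike B with a different stick diffusivity.
fA, DaA, DeA = 0.5, 2.2, 1.0
DaB = 2.0

# Match the first two moments of the expansion S = 1 - M1*b + C2*b^2 - ...,
# where a stick contributes Da/3 and Da^2/10 and a Gaussian De and De^2/2.
def moments(f, Da, De):
    return (f * Da / 3 + (1 - f) * De,
            f * Da**2 / 10 + (1 - f) * De**2 / 2)

M1, C2 = moments(fA, DaA, DeA)
fB, DeB = fsolve(lambda p: np.subtract(moments(p[0], DaB, p[1]), (M1, C2)),
                 x0=[0.5, 1.1])

b = np.linspace(0.1, 1.0, 10)           # clinical-range weightings, ms/um^2
gap = np.abs(powder_signal(b, fA, DaA, DeA) - powder_signal(b, fB, DaB, DeB))
print(f"B: f = {fB:.3f}, De = {DeB:.3f}; max signal difference = {gap.max():.4f}")
# Two different tissues, nearly the same low-b signal: complementary
# measurements (higher b, double diffusion encoding) are needed to tell
# them apart.
```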

The reason I started this was pretty much serendipity. I was doing my undergrad in Argentina and didn’t know what to do with life. Out of the blue, a former professor of mine who had moved to the University of Leeds in the United Kingdom sent me an email saying he had funding from the U.K. Engineering and Physical Sciences Research Council that included a PhD scholarship to work on diffusion MRI, and he wanted me to join him. I honestly hadn’t heard of diffusion MRI before, but I thought that it would be an interesting experience, so I jumped in.

How did your undergraduate education in electrical engineering translate to your ability to start research in diffusion as a PhD student?

Undergraduate coursework in Argentina takes five years and electrical engineering is very strong in mathematics, so I had a really solid signal-processing background. And my PhD co-advisor was a theoretical physicist who used to do relativity—he had a really strong physics toolbox, and I learned a lot from him.

There’s a consistent thread between your doctoral work and your postdoctoral research. Tell me how you made the connection between Leeds and NYU Langone.

In 2017 I went to Honolulu for my first ISMRM and presented my early results in double diffusion encoding for the standard model. I had a digital poster, and Sune Jespersen was there asking a lot of questions. I also met Els Fieremans and we talked a lot. I was familiar with papers from this research group—from Els and Dmitry—so, I knew who they were. After that ISMRM, I started collaborating with Sune, and we had a paper published in Magnetic Resonance in Medicine that got really good feedback from the community.

Again, serendipity: while I was working with Sune, he came to NYU Langone for a sabbatical to collaborate with Els and Dmitry, and somehow in the email exchanges he mentioned that I could stop by. I came for a three-month research visit, which was super fun, and they were interested in hiring me as a postdoc. I went back to Leeds to write my thesis and then returned to New York in January 2020 to start a postdoctoral fellowship.

By now, your scientific studies and research have taken you from Argentina to the U.K. and to the United States. What are your thoughts on the different research environments that you’ve been a part of along the way?

To be fair, I didn’t really work as a researcher in Argentina, although I did my undergrad there. From the scientists I know, I think there are a lot of great people there but also some resource limitations. Things in the U.K. are, I think, quite similar to the U.S. but I’d say there’s a bit more competition in the U.S. system than in the U.K., maybe with more emphasis on getting grants.

Do you mean that researchers in the U.S. are encouraged or incentivized earlier in their careers to compete for grant funding?

Yes, I would say so, although I don’t know whether that’s generalizable or just my own perspective.

You’re speaking from direct experience: you have a proposal for a K99/R00 award that is currently under evaluation. You’ve coauthored many research papers and delivered presentations at scientific conferences, but this is your first grant proposal. What was it like for you to write it?

It was a huge learning experience. A lot of people at our research center—including my mentors Els and Dmitry—helped me with feedback. The best part of it to me was the exercise of having to think ahead for five years and having to come up with a concise and self-consistent strategy of what I want to do. I already had some of the things in my head, but forcing myself to write them into a proposal clearly added a lot of structure, which was very helpful.

And in this application, what are you proposing to do?

What we typically do when scanning is to measure diffusion in different directions at a fixed diffusion weighting—we call that a shell. This makes it very easy to compute rotationally invariant metrics from the data, meaning to convert measurements that depend on the direction in which you’re sampling to things that do not, because tissue properties do not depend on direction. The problem with this is that it’s really time consuming. For example, if you need 30 directions in one shell and 40 in another, you’re measuring two different diffusion weightings but you spend 70 measurements.
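
As a sketch of what a rotational invariant looks like in practice, here is a toy single-shell computation: fit spherical harmonics to the directional signals, then keep the per-degree energy, which does not change when the tissue (or the head) is rotated. The normalization convention and all values are illustrative:

```python
import numpy as np
from scipy.special import sph_harm

def real_sh(l, m, theta, phi):
    """One real-valued spherical-harmonic convention built on scipy's
    complex harmonics (scipy takes azimuth first, then polar)."""
    if m < 0:
        return np.sqrt(2) * sph_harm(abs(m), l, phi, theta).imag
    if m > 0:
        return np.sqrt(2) * sph_harm(m, l, phi, theta).real
    return sph_harm(0, l, phi, theta).real

rng = np.random.default_rng(1)
n_dirs = 30                                    # one 30-direction shell
theta = np.arccos(rng.uniform(-1, 1, n_dirs))  # polar angle
phi = rng.uniform(0, 2 * np.pi, n_dirs)        # azimuth

# Toy shell data: a stick along z probed at a fixed b (values illustrative).
b, Da = 1.0, 2.0
signal = np.exp(-b * Da * np.cos(theta) ** 2)

# Least-squares fit of even degrees (odd degrees vanish for diffusion).
degrees = [(l, m) for l in (0, 2, 4) for m in range(-l, l + 1)]
A = np.column_stack([real_sh(l, m, theta, phi) for l, m in degrees])
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)

# Rotational invariant per degree l: the energy over m.
for l in (0, 2, 4):
    energy = sum(c**2 for (ll, m), c in zip(degrees, coef) if ll == l)
    print(f"l = {l}: invariant = {np.sqrt(energy):.4f}")
```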

Additionally, even though in a single diffusion scan we acquire multiple images sensitized to diffusion in different directions and with different weightings, these are images of the same brain. But conventional image reconstruction for diffusion imaging reconstructs each diffusion volume separately; it solves the inverse problem of going from k-space to brain images independently for each image. This is not only hard but suboptimal since it imposes a big limit on the minimum sampling you can do to obtain good images.

We came up with a way to overcome these two limitations and do what we call zero-shell imaging. The first thing we do in this approach is q-space undersampling—because q-space is the space of diffusion weighting. Undersampling allows us to get more rotationally invariant information while dropping the constraint of having to acquire data in shells. But we still need a lot of diffusion-weighted images, which still takes a lot of time.

So the next step, which is the key ingredient of this grant proposal, is to combine this undersampling in q-space with undersampling in k-space. Because right now in diffusion we do a lot of k-space undersampling—for example, with GRAPPA or SMS, which are in-plane or through-plane acceleration techniques—but we always solve the inverse problem of going from k-space to brain images individually for each of the diffusion-weighted images. And yet we know that it’s the same brain, it’s the same tissue, so there’s a lot of redundancy there. What we propose is to exploit this redundancy among diffusion contrasts and do a joint reconstruction of the different rotational invariants directly from the different k-space trajectories.

So, by varying the q-space and the k-space jointly but knowing that there are some constraints between them—and having models—you’re able to get to enough q- and k-space data without having to acquire and reconstruct the latter over and over.

We have a framework for this which we call zero-shell imaging, and the idea is that it has very mild assumptions for which we have some independent validation that kind of guarantees that they work, at least for brain tissue, and this approach is the glue that holds everything together and allows us to do this.
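
The following is not the zero-shell framework itself—just a toy 1D illustration of the redundancy argument: when every diffusion volume is a different mixture of a few shared coefficient maps, one joint linear system across all volumes can be solved even though each volume’s own k-space is too undersampled to reconstruct alone. All sizes and operators here are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D "brain" of N voxels; each diffusion volume v is a direction-
# dependent mixture of a few shared coefficient maps (the redundancy).
N, n_vols, n_coef = 64, 12, 4
maps = rng.standard_normal((n_coef, N))       # shared unknowns
Y = rng.standard_normal((n_vols, n_coef))     # stand-in for a SH-type basis
volumes = Y @ maps                            # the n_vols diffusion images

F = np.fft.fft(np.eye(N)) / np.sqrt(N)        # 1D "k-space" operator

# Each volume keeps only half its k-space samples -- too few to
# reconstruct that volume on its own -- with a different pattern each.
m = N // 2
rows, data = [], []
for v in range(n_vols):
    keep = rng.choice(N, m, replace=False)
    P = F[keep]                               # undersampled encoding
    # d_v = P (Y[v] . maps): one block of a joint linear system in `maps`
    rows.append(np.kron(Y[v], P))             # (m, n_coef*N) block
    data.append(P @ volumes[v])

A = np.vstack(rows)                           # joint forward model
d = np.concatenate(data)
est = np.linalg.lstsq(A, d, rcond=None)[0].reshape(n_coef, N)
print("max reconstruction error:", np.abs(est - maps).max())
# Solved jointly, the shared maps are recovered even though no single
# volume's k-space was fully sampled.
```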

What benefits does this potentially translate to?

There are two directions you could go: one is that you could take a protocol that is currently too long to be clinically feasible and push it into a clinically feasible timeframe; or you can take a protocol that is currently not feasible at all and create something reasonable for neuroscience research. So, in principle you could push both frontiers.

And the k-space bit also helps with resolution, because the moment you can undersample spatially, you can push for higher resolution. That addresses one of the key drawbacks of diffusion MRI: since we kill the signal in order to measure diffusion, we lose a lot of SNR and end up going to larger voxel sizes.

So, with this technique you’re poised to image faster and at higher resolution, which sounds like it shouldn’t be possible because it seems that there should be a trade-off between the two.

Yes, there is a trade-off, which will probably limit the total gain you can get in both areas, but the way we can get away with higher speed and higher resolution than we currently have is through the unexploited redundancy in the diffusion dataset: different diffusion-weighted images of the same tissue are each reconstructed from identically sampled k-space points as completely independent inverse problems. That’s a hugely underexplored source of shared information.

