Honor Roll Lab Talk

Ilias Giannakopoulos on Matrices, Electromagnetics, and Role Models

Ilias Giannakopoulos, postdoctoral fellow in MRI, talks about how electromagnetic waves interact with the body, why matrix compression matters, and where he finds inspiration.

Ilias Giannakopoulos, PhD, is a postdoctoral fellow at NYU Grossman School of Medicine. He recently led the development of a new way to compress what is known as the coupling matrix in volume-surface integral equation formulations. The work, published in 2022 in IEEE Transactions on Antennas and Propagation, has been distinguished by the IEEE Antennas and Propagation Society with the 2023 Harold Wheeler Applications Prize Paper Award, presented in July at the society’s 2023 symposium in Portland, Oregon. The advance is a result of a collaboration between scientists at NYU Langone’s Center for Advanced Imaging Innovation and Research and colleagues at MIT’s Research Laboratory of Electronics. The new compression method “is critical to enable numerical MRI simulations at clinical voxel resolutions in a feasible computation time,” write the authors. Our conversation was edited for clarity and length.

How would you describe the focus of your research? 

In general, I’m interested in the application of computational methods and algorithms to problems in biomedical imaging and bioengineering. Inside an MRI, the transmit coil and the receive coil interact electromagnetically with the body, and it’s important to model these interactions to ensure that the coils are safe and produce good anatomical images. My research develops methods to perform these simulations faster, with higher memory efficiency and high accuracy. 

Accuracy is important because you want simulation results to be close to what you get in reality. Low memory, because we have finite computational strength. And time—time is the most important, especially for ultra-high-field MRI where we still have a few limitations. If your simulator has a low time footprint, you can explore alternative, novel coil designs and then build coils that will allow the sort of images that we cannot get yet.

Can you expand on the current limitations of clinical imaging at ultra-high magnetic field?  

With the 7-Tesla MRI, we can look at the brain and body extremities but not at the chest and the abdomen—what radiologists call body imaging. What happens is that the scanner operates at a high frequency due to its high magnetic field, so the body’s dimensions are larger than the wavelength, which means you have strong interactions with the radiofrequency fields.

And the modeling of the interactions between the electromagnetic field and the body becomes more important for ultra-high-field MRI because the shorter wavelengths create a more complicated environment, making it harder to understand the magnetic fields’ distribution throughout the body—is that right? 

Actually it’s important for all frequencies because it can allow you to build coils for all scenarios—since you can also model the signal-to-noise ratio with electromagnetic simulations. But yes, it’s most important for ultra-high field, because there you have all these short wavelengths, and there is an actual need. 

One of your recently published articles, on working with volume-surface integral equations (VSIE), was just distinguished by the IEEE Antennas and Propagation Society. Can you talk about this particular area of your research and the mathematical and computational techniques involved?

In MRI we have a coil—what we call a surface scatterer, a surface that radiates an electromagnetic field. We also have the body, which is a volumetric scatterer. And these two objects create a coupled system of electromagnetic interactions—what we call a volume-surface system. So, how to model that and actually know what’s happening? 

We can solve Maxwell’s equations, which model the electromagnetic field, but for reasons of speed, memory, and accuracy we use an integral equation. The integral-equation forms of Maxwell’s equations are well suited to single-frequency problems—for example, MRI. Because we cannot solve Maxwell’s equations analytically for the complicated geometries of the human body and the coil, we discretize both objects. 

When you say discretize, you mean break the objects down into idealized shapes that you can work with mathematically?

Exactly. For a surface, discretization could be some triangles; and for the body, voxels in three dimensions. Then you form the interactions between all the triangles and all the voxels, and this creates a very big matrix. The number of rows is the number of voxels in the body, and the number of columns is the number of triangles on the coil. Because we’re dealing with a surface, the triangles can be small and many—up to ten thousand or more. The voxels in the body—if you have a hundred or two hundred voxels per Cartesian dimension—can number up to a million. So, the matrix is quite intractable to form.
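As a rough illustration of why this matrix is intractable, the storage needed for one dense complex entry per (voxel, triangle) pair can be estimated in a few lines of Python. The counts below are hypothetical, chosen only to match the orders of magnitude mentioned in the conversation:

```python
# Back-of-envelope size of the dense coupling matrix:
# rows ~ voxels in the body, columns ~ triangles on the coil,
# one complex double-precision entry per pair. Illustrative numbers only.
n_triangles = 10_000                  # basis functions on the coil surface
bytes_per_entry = 16                  # complex128

for voxels_per_dim in (100, 200):     # coarse vs. finer Cartesian grids
    n_voxels = voxels_per_dim ** 3
    gib = n_voxels * n_triangles * bytes_per_entry / 2**30
    print(f"{voxels_per_dim}^3 voxels: {gib:,.0f} GiB")
```

Even at a million voxels the dense matrix runs to hundreds of gigabytes, and halving the voxel size multiplies the row count by eight—which is why finer resolutions quickly reach the terabyte and petabyte ranges mentioned below.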

When we’re talking about voxels—in clinical imaging a voxel can be a millimeter, sometimes less. 

Yes, we use predefined dimensions from five millimeters to one millimeter. We want to go towards the clinical resolution because the simulation becomes more accurate, so ideally you’d prefer to solve everything at one millimeter. But the matrix is a very big object. At five millimeters, maybe you can model it but when you go to two or one, it’s almost impossible. 

The literature uses matrix decomposition approaches in attempts to create a low-rank version of the matrix, basically by reducing the dimensions of the triangles. That can reduce the matrix by ten to fifty times, maybe. But if you still have one million voxels or more, it’s still intractable. You might be able to solve the two-millimeter problem with a high memory footprint, but not the one-millimeter problem.

In other words, these matrices are so large that even reducing them by as much as half still results in a size that is computationally infeasible. 

Yes, with one-millimeter resolution you may still need a terabyte of memory, and if you model the full matrix, you may reach the petabyte range. So, here comes our contribution. 

The body is a three-dimensional object so we saw that we can use three-dimensional tensor decomposition methods that basically create a low-rank approximation of the original three-dimensional tensor. We did that initially and we saw that we can reduce the memory requirements quite a lot, bringing the matrix down to maybe a couple of gigabytes. This was our initial approach and the first result in our paper. But then we realized that you first need to build the matrix in order to compress it, so that would still be intractable. You could do it iteratively—create only a few columns, compress them, create the next few columns, and so on. 

We combined this approach with tensor decomposition and adaptive cross approximation methods. Adaptive cross iteratively creates a low-rank approximation of a matrix without requiring the full matrix—it just requires a function that can build the rows and columns. We apply the cross approximation method and when we’re building the columns we immediately compress them using the Tucker decomposition. So, we can actually assemble the matrix without needing more memory than that required for one of its columns.
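The core idea—building a low-rank factorization from nothing more than a function that returns individual rows and columns—can be sketched as a minimal, partially pivoted adaptive cross approximation. This is a generic illustration, not the code from the paper: the published method additionally Tucker-compresses each column as it is produced, and the real MRI matrices are complex-valued (real here for simplicity):

```python
import numpy as np

def aca(get_row, get_col, tol=1e-8, max_rank=50):
    """Low-rank factorization A ~= U @ V built only from sampled rows and
    columns, via partially pivoted adaptive cross approximation (ACA)."""
    U, V = [], []            # U holds columns, V holds rows of the factorization
    i, used_rows = 0, {0}
    frob2 = 0.0              # running estimate of ||U @ V||_F^2 (cross terms ignored)
    for _ in range(max_rank):
        # residual of row i under the current approximation
        r = get_row(i) - sum(u[i] * v for u, v in zip(U, V))
        j = int(np.argmax(np.abs(r)))
        if abs(r[j]) < 1e-14:
            break            # this row is already well approximated
        v = r / r[j]
        # residual of the pivot column j
        c = get_col(j) - sum(v_[j] * u for u, v_ in zip(U, V))
        U.append(c)
        V.append(v)
        frob2 += np.dot(c, c) * np.dot(v, v)
        if np.linalg.norm(c) * np.linalg.norm(v) < tol * np.sqrt(frob2):
            break            # newest rank-1 term is negligible: converged
        nxt = np.abs(c)
        nxt[list(used_rows)] = -1.0   # choose an unused row as the next pivot
        i = int(np.argmax(nxt))
        used_rows.add(i)
    return np.array(U).T, np.array(V)
```

At no point does the full matrix exist in memory: the peak cost is one row and one column per iteration, which is what makes the assembly tractable at fine resolutions.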

You can assemble a compressed representation of the matrix without needing the original. 

Exactly. 

And what kind of difference does that make in the demands on memory and computational power for the type of simulations that these matrices are used for?

This matrix now requires almost no memory—it’s down to a few megabytes from an original size of petabytes. And what this means is that you can put it in a GPU and solve your simulations there. You can solve them much faster—in addition to being able to do them at all.

Practically, that means that modeling coil-body interactions in a 7 T magnet at one-millimeter resolution, which would have been impossible before, is now possible with this method. 

Yes. Of course, there are other methods that don’t use integral equations that can attempt to solve the problem, like some commercial packages, but it takes a lot of time—up to a few weeks or even a month to solve the problem. But with our method, it would be much faster. And, specifically for 7-Tesla imaging, you need the high resolution because that’s the point of ultra-high-field magnets—that their high signal-to-noise ratio lets you go to a finer voxel size. 

Another area you’re working in is electrical property mapping. Can you talk about how these advances in simulations relate to the mapping of electrical properties of tissue?

In the simulations we start from properties and the body model, and we find the fields. In electrical property mapping we start from the fields, and we want to find the electrical properties. It’s an inverse problem and it doesn’t have a direct solution. We have to look for potential solutions iteratively with different guesses of electrical properties until we find ones that match well enough the fields we measured in the scanner.
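The iterative structure of such an inverse problem can be sketched with a toy example. The linear forward model and plain gradient-descent update below are purely illustrative stand-ins—the real forward model is a full electromagnetic simulation, nonlinear in the tissue properties:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear stand-in for the forward simulation: fields = F @ properties.
F = rng.standard_normal((20, 5))
true_props = rng.standard_normal(5)
measured = F @ true_props            # stands in for fields measured in the scanner

props = np.zeros(5)                  # initial guess of the electrical properties
for _ in range(500):
    residual = F @ props - measured  # mismatch: simulated vs. measured fields
    props -= 0.01 * F.T @ residual   # gradient step on ||F @ props - measured||^2
```

Each iteration runs the forward model with the current property guess, compares the simulated fields with the measured ones, and nudges the guess to reduce the mismatch—exactly the guess-and-compare loop described above.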

Can you talk about the potential benefits of being able to solve these inverse problems and the promise of electrical property mapping in biological tissues? 

An MRI can get you images with different contrasts like T1, T2, and proton density. In the same way, if you were able to measure electrical properties—permittivity and conductivity—they would essentially be new contrasts. Some studies have found a relationship between higher conductivity values and tumors or ischemia, so there is clinical interest in that area and an idea that the properties can help unveil new information about diseases. 

If we had access to electrical property maps, we could also design better coils. We could test different cases, adapt coils to different patients, and so on. Basically, electrical property maps would allow us some freedom to play around.

You have done research on global Maxwell tomography—a way of estimating electrical property maps—in simulations and in phantom experiments with a coil based on those simulations. What is your perspective on the feasibility of in vivo electrical-property measurements?

There are two components: one is to have accurate simulations from properties to magnetic field maps, and the other is to measure these magnetic field maps in the scanner. We have to match the accuracy of both: to make good enough simulations, and to take good enough magnetic field measurements, so that measured and simulated maps correspond with very high accuracy. When we achieve this, it will be possible to do in vivo experiments. We’re still not there yet but slowly, I think, over time, we will get there.

What got you into this area of research in the first place?

When I was 15, I read a novel titled Uncle Petros and Goldbach’s Conjecture, about a mathematician, an academic, who attempts to solve Goldbach’s conjecture—an open problem, still unsolved, in number theory. I was fascinated, and this brought me to mathematics. I wanted to become a researcher myself and to develop new methods to tackle existing problems. Then, I joined electrical engineering, and in my first course on electromagnetics I had a really good professor, Theodoros Tsiboukis (who passed away a couple of years ago). He had a great way of teaching, and even though he was one of the leading electromagnetics scientists in Greece, he still had questions himself and still found the field fascinating. Electromagnetics is physics but Dr. Tsiboukis was teaching it from a mathematical perspective, so I immediately felt the connection.

Your interest in math and electromagnetics has by now taken you on quite a journey—both figurative and literal. Can you walk us through the various stations on your research path?

I was born in Athens and grew up there, so my first journey outside home was to Thessaloniki, where I did my undergrad. I spent five years there specializing in telecommunications and electromagnetics. And then I took my first international trip, to Moscow, to pursue my PhD at Skoltech. After two years, I went to MIT as a visiting student, where I spent a year. While at MIT, I had the opportunity to travel to New York for a few research visits and also to many conferences around the world. Then, unfortunately, Covid came, so I returned to Greece for about a year. And afterward, I came to New York, this time as a postdoc. 

Can you talk a little bit about what attracted you to Skoltech?

There were a few things. First, it was a new university that attracted a lot of big scientists, including from MIT, and there was a collaboration with MIT called “Skoltech-MIT Next Generation Program,” where you had two groups—one at each school—doing collaborative research. There were student and faculty exchanges and so on, so this was very interesting. Also my first PhD advisor, Anastasios Polimeridis, had done his PhD with the advisor I did my master’s thesis with, so I knew about him and that he was doing exciting research and really wanted to work with him. Also, Skoltech offered me a PhD program while other universities in Europe wanted me to do a master’s first. So I preferred to pursue a PhD at Skoltech and see where things go from there.

During the part of your PhD research performed at MIT, you worked with Jacob White and other scientists at the Research Laboratory of Electronics. That collaboration has continued, as is evident in the coauthored papers, including the one on the VSIE matrix compression method we talked about earlier. How did that collaboration originally develop and what has kept it going? 

Jacob is a really big figure. He was one of the professors traveling to Skoltech and supervising students there as part of a collaboration on electromagnetic simulation. He had another collaboration with Riccardo Lattanzi at NYU Langone and a few other professors, including Luca Daniel, on the electrical property problem. These problems are heavily related, so eventually the collaborations merged. And although the exchange between MIT and Skoltech does not exist anymore, the projects continue at NYU Langone and MIT. I remember when—even before I joined MIT as a visiting student—I was there briefly for a conference and was talking to Jacob about how to do the visiting studentship. Riccardo was there and he mentioned that he wanted to work with me on the electrical property reconstruction problem. He had seen my first work on tensor decomposition and was interested. I immediately understood that it could be an interesting collaboration, and that’s what ultimately led me to NYU Langone.

Do you see your postdoctoral research as part of a quest for answers, similar to the one in the novel that originally inspired you?

No, not anymore. Of course I’m still interested in electromagnetics and want to develop new methods, but now I’m more interested in applications. We have all these methods—it’s time to put them together in a software and make it open source so that people can actually use them. This is one of my projects: add more methods, make better software. I also feel that a postdoc is a training position and that it’s a time to learn, so I’m involved in a lot of projects that I see as learning opportunities to try to understand in depth what happens in MRI.

Earlier, I used to read about the big scientists like Einstein or Fourier, and I thought I wanted to be like them. But when I joined academia and met real professors, I thought that I wanted to be more like them—like Riccardo Lattanzi or Dan Sodickson—so they are the bigger inspiration now. But I also feel that the most important inspiration is my family, because my father and my brother have always worked hard. I’m the first engineer, researcher, and PhD in my family. My grandpa was a lawyer, my brother is a lawyer, and my dad was a distinguished judge. My mom was a pharmacist, and she studied in Italy for a few years, so from her I took all this love of travel and international work. And from dad I took the work ethic.

Considering your success in navigating multiple collaborative research projects—some at significant geographic distances—do you have any insight or advice on how to manage simultaneous collaborations? 

For me, the most general advice is that if you really enjoy it, and if you’re excited when you go to work, then that’s a good sign and you should do it. If it feels like it’s bothering you, it’s tedious, then maybe not. The other advice is mostly time management: start early. I feel that there’s no need to stay late at night or overwork yourself. One good thing with my projects is that these simulations take time, so while I wait on one simulation to finish, I can work on something else.