Lab Talk

Eros Montin on Cloud MR, Omics, and Contemplating Free Will

Eros Montin, a research scientist who develops MRI simulation software and investigates radiomics, talks about building a virtual scanner, using orthogonal information, and what he thinks about while walking down the street.

Eros Montin, PhD, is a research scientist at NYU Langone Health and its Center for Advanced Imaging Innovation and Research. Dr. Montin specializes in the development of software tools for MRI research and is the head developer of an MRI simulation framework called Cloud MR, a project supported by the National Institute of Biomedical Imaging and Bioengineering and led by Riccardo Lattanzi, PhD, professor of radiology at NYU Langone. Dr. Montin also conducts research on radiomic image analysis. He holds a doctorate in bioengineering from Politecnico di Milano and completed postdoctoral fellowships at Fondazione IRCCS Istituto Nazionale dei Tumori (a national cancer institute in Italy) and NYU Grossman School of Medicine. Our conversation has been edited for clarity and length.

You’re the lead developer of Cloud MR. Can you tell me a bit about what Cloud MR is?  

Cloud MR was conceived as a cloud-based framework to centralize code contributions in our department and integrate open-source projects, enabling the creation of imaging pipelines. The goal is to virtually evaluate key metrics, like signal-to-noise ratio (SNR) and specific absorption rate, for novel imaging setups, providing insight into the feasibility and quality of a new setup before physically implementing it. I’ve been working on Cloud MR with Riccardo Lattanzi since joining NYU Langone in 2018, and it’s a great idea that Riccardo had.

In short, Cloud MR aims to standardize the knowledge behind every component of an MRI experiment into a well-structured and documented set of APIs. And by sharing this framework, we can enable researchers even in locations without direct access to an MRI scanner to virtually test imaging setups through simulations.
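
As a purely hypothetical sketch of what such a standardized component API could look like in Python (the class and method names below are illustrative, not Cloud MR’s actual interface), each step of an experiment would expose the same contract so that steps can be chained into pipelines:

    # Hypothetical sketch of a standardized pipeline-component interface;
    # names and signatures are illustrative, not Cloud MR's actual API.
    from abc import ABC, abstractmethod
    from typing import Any, Dict


    class PipelineComponent(ABC):
        """One step of an imaging experiment (simulation, reconstruction, analysis)."""

        @abstractmethod
        def run(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
            """Consume a dictionary of named inputs and return named outputs."""


    class Pipeline:
        """Chain components so the outputs of one step feed the next."""

        def __init__(self, steps):
            self.steps = steps

        def run(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
            data = dict(inputs)
            for step in self.steps:
                data.update(step.run(data))
            return data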

So it’s like a virtual scanning environment? Whereas ordinarily, if you were designing a new pulse sequence, you would go to an MRI scanner with the phantom or subject of your choice, scan with the sequence you’re testing, then reconstruct the data, maybe in several ways. Of course all that requires access to a lot of resources, starting with an MRI machine, RF coils, scan time, etcetera.

Exactly. With Cloud MR, you can simulate and evaluate key aspects of an imaging experiment before ever stepping into the scanner room.

Is Cloud MR used currently at our research center?  

Yes, Cloud MR is used at our center. The framework consists of multiple applications, each developed as a standalone tool, but they can also be combined into pipelines for more complex workflows.

For example, we have the SNR module, MR Optimum [pronounced mister optimum], where users can upload k-space data, signal data, and noise, and obtain an evaluation of metrics like SNR, g-factor, coil sensitivities, and the noise covariance matrix. We’ve also developed an application called CAMRIE [short for Cloud Accessible MRI Emulator], which simulates MRI experiments by solving the Bloch equations and generating simulated k-space data from open-source body models. And these applications can be connected to form a pipeline.
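
As a rough illustration of the kind of computation behind such a module (a simplified sketch, not MR Optimum’s actual code), the coil noise covariance matrix and a basic root-sum-of-squares SNR estimate can be written in a few lines of NumPy; the array shapes are assumptions:

    import numpy as np

    def noise_covariance(noise_samples: np.ndarray) -> np.ndarray:
        # noise_samples: complex array of shape (n_coils, n_samples),
        # acquired with no RF excitation (assumed layout).
        n_samples = noise_samples.shape[1]
        return noise_samples @ noise_samples.conj().T / (n_samples - 1)

    def rss_snr_map(coil_images: np.ndarray, psi: np.ndarray) -> np.ndarray:
        # coil_images: complex array of shape (n_coils, ny, nx).
        # Root-sum-of-squares combination divided by the average per-coil
        # noise standard deviation taken from the covariance diagonal.
        signal = np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))
        sigma = np.sqrt(np.mean(np.real(np.diag(psi))))
        return signal / sigma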

Another example is the integration of pulse sequence design using mtrk [a pulse sequence design platform currently in development at our research center]. Once a sequence is created, it can be fed into CAMRIE to simulate its effects, and then linked to our cloud-based image reconstruction module, which utilizes BART, a software developed at UC Berkeley, to reconstruct the images.
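
As a minimal sketch of how the final step of such a pipeline might be driven from Python, assuming BART is installed and on the PATH and that the simulated k-space has already been written in BART’s .cfl/.hdr format under the placeholder names below:

    import subprocess

    # Placeholder base names for files produced by the upstream simulation step.
    kspace, sens, image = "simulated_kspace", "coil_sens", "recon_image"

    # Estimate coil sensitivities with ESPIRiT, then run a regularized
    # parallel-imaging reconstruction with BART's "pics" tool.
    subprocess.run(["bart", "ecalib", kspace, sens], check=True)
    subprocess.run(["bart", "pics", "-l2", "-r", "0.01", kspace, sens, image], check=True)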

So Cloud MR also accommodates some of the popular toolboxes already in use in the MR research community.

That’s exactly the idea—to consolidate knowledge and open-source tools into a unified framework that enables larger, more scalable projects. Right now, even something as fundamental as measuring SNR to evaluate a new idea involves significant manual effort. Researchers need to determine what data to generate, which scans to run, and how to configure the parameters.  

For researchers outside our center who are interested in using the Cloud MR framework, how can they access it?  

We have a website, cloudmrhub.com, where users will be able to register. Right now, registration is temporarily disabled as we finalize testing, but we plan to launch the full Cloud MR application center alongside an upcoming article on MR Optimum.

Once released, Cloud MR will offer three different ways—what we call mods—for researchers to interact with it.  Mod 1 is the web interface, where people can access Cloud MR applications directly through the website and the computations are performed in the cloud, primarily on Amazon Web Services (AWS). Mod 2 is an API integration, so people can request access to an API that replicates the full backend on their own infrastructure. And mod 3 is the ability to run Cloud MR tools entirely on a local machine using Python.

How has your understanding of what’s required to make such a framework a reality evolved over the years? When you were starting out, what were you expecting or imagining and what is your perspective on that now?

When I first started working with Riccardo, the original idea was to develop a MATLAB-based environment, more like a desktop application. But I had some experience in website development and realized that it would be easier to build a web-based interface than to rely on MATLAB. When I presented this idea to Riccardo, he suggested taking it a step further and moving everything to the cloud. But at the time we had no experience with cloud computing, and in 2018 it was still a relatively new technology for us. My expertise was in traditional server architectures, like LAMP [Linux, Apache, MySQL, PHP], where everything was self-contained—you had full control over the backend, the database, and a single application managed all processes.

Shifting to a cloud-native architecture completely changed my perspective. Instead of a monolithic structure, we had to design a distributed, scalable system, where different services interact asynchronously. The biggest shift was realizing that not everything needed to run persistently: some tasks could be executed on demand, which eventually led us to serverless computing and the use of AWS Lambda to reduce costs and improve efficiency.
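
As a minimal sketch of that serverless pattern (not Cloud MR’s actual backend; the bucket name and payload fields are placeholders), an AWS Lambda handler in Python runs only when a job arrives and writes its result to storage:

    import json
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        """Runs on demand when a job is submitted; no server stays up in between."""
        # Assumes an API Gateway proxy event whose body carries the job parameters.
        job = json.loads(event["body"])
        result = {"status": "done", "job_id": job.get("id")}  # computation stub
        # Persist the result where the front end can poll for it
        # (bucket and key are hypothetical placeholders).
        s3.put_object(
            Bucket="example-results-bucket",
            Key=f"results/{job.get('id')}.json",
            Body=json.dumps(result),
        )
        return {"statusCode": 200, "body": json.dumps(result)}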

In developing Cloud MR, you have done a ton of coding and thinking about how to connect various applications and functions, how to organize workflows, handle data, what sort of interfaces make sense, how to reflect system status and deliver results to users. What have you learned through this process about imaging and MRI? Because this is a very different headspace than that of a scientist specializing in pulse sequences, image reconstruction, or image analysis.  

I consider the most challenging part of my job to be the harmonizing of non-adjacent knowledge. If you talk about MRI with an electrical engineer, a pulse sequence specialist, or a deep learning reconstruction expert, each operates with their own language, priorities, and assumptions.

I’ve learned that understanding intent is just as important as understanding the code itself. When I analyze a piece of software, I don’t just look at how it works—I try to grasp what the developer was aiming to achieve, because every expert has a different perspective on what is essential and what needs to be rigorous. I’ve come to appreciate how small details in an MRI workflow can have a cascading effect on the final output. Even minor choices—like how data is preprocessed or how a particular signal is handled—can, when they’re part of a larger pipeline, end up changing the output. So you need to think both specifically and generally. It’s a very multiscale vision I have now.

Apart from engineering software, you also conduct research in radiomics. Can you explain what radiomics is?  

Radiomics is like giving medical images a data-driven makeover—instead of just looking at them, we extract thousands of hidden patterns and features that the human eye can’t see. Imagine two patients with the same type of cancer: one responds well to treatment, while the other doesn’t. On a standard MRI scan, their tumors might look identical, but radiomics can quantify features—like texture, shape, and intensity variations—that can help explain why their outcomes differ.

It’s part of the broader omics family, where the challenge is having way more descriptive features than actual data. Think of it like studying two types of apples: both look red and round, but if you analyze their sugar content, acidity, and molecular composition, you might find that one is sweet, and the other is tart. Similarly, radiomics turns medical images into structured data, helping doctors predict treatment responses, track disease progression, and even uncover genetic patterns.
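
As a toy example of turning an image region into structured data (real radiomics pipelines compute hundreds of standardized features; the image and mask arrays here are assumed inputs), a few first-order intensity features can be computed directly with NumPy:

    import numpy as np

    def first_order_features(image: np.ndarray, roi_mask: np.ndarray) -> dict:
        # Keep only the voxels inside the region of interest drawn by the radiologist.
        voxels = image[roi_mask > 0].astype(float)
        counts, _ = np.histogram(voxels, bins=64)
        p = counts[counts > 0] / counts.sum()
        mean, std = voxels.mean(), voxels.std()
        return {
            "mean": mean,
            "std": std,
            "skewness": np.mean((voxels - mean) ** 3) / std ** 3,
            "entropy": -np.sum(p * np.log2(p)),
        }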

In other words, radiomics is a kind of mathematical view of information in the image. Can you talk about the relationship between such a view and what’s visible to the human eye?

Radiomics is essentially a mathematical interpretation of the information in a cohort of subjects, but it doesn’t replace human expertise—it complements it. There’s no radiomics without the radiologist, because the first step is to define a region of interest.

Last year, you were a guest editor of a research topic at Frontiers in Radiology called “radiomics and AI for clinical and translational medicine.” Can you talk about the difference between radiomics and AI?

They’re deeply interconnected. Traditionally, radiomics relied on manual segmentation, where a human had to actively draw the region of interest on an image, and this limited the reach of radiomics. Now we have powerful segmentation models, which can automatically delineate anatomical structures, and open-source imaging datasets. This has made large-scale radiomics analysis feasible in ways that weren’t possible before. That was the core idea behind the editorial: radiomics has evolved, and now the question is how do we integrate all these advancements into clinical practice?

One of the most common uses of AI in the context of radiologic images is classification. How is radiomics different from that—even if its ultimate aims are to inform diagnosis?

The key difference between radiomics and deep learning classification is explainability. When you use a deep learning model, it’s often a black box. And when your life depends on something, you really want to know everything. If a model is determining whether a patient should undergo surgery, you need to understand why.

With radiomics, explanation is built into the analysis. If a radiomics model suggests that a tumor has a poor prognosis based on specific phenotypic patterns, you can actually see those features on the image, interpret them, and integrate them with the patient’s medical history—and that’s part of something called exposomics.

Exposomics?

From the word “exposure.” Your ethnicity, gender, age, diet, UV exposure, air quality, water intake, and countless other environmental factors all contribute to your overall health and disease risk. The ultimate goal is to perform a multivariate analysis that integrates all relevant data sources—not just imaging, but also radiomics, deep learning predictions, genomics, transcriptomics, proteomics, clinical reports, and patient history in a way that gives you meaningful information.

In an editorial in Frontiers in Radiology you and coauthors write that the long-term promise of radiomics is to enable personal precision medicine. It sounds like that’s what you’re describing.

Yes, that’s exactly it. My view is similar to what Daniel Sodickson often says—that we’ve always created medical images because a person was going to interpret them. But that might be changing.

In information theory, some types of information are perpendicular and add a new axis to your knowledge, while others are correlated and don’t give you much new information. A good example is principal component analysis of MRI data: if you scan with 10 different sequences, you might find that one and a half of them provide almost all the information you need. But those other sequences are often necessary for the radiologist to confidently determine that a tumor is, in fact, a tumor.

It’s relatively easy to become 95 percent confident, but it’s very difficult to become 99.9 percent confident.  

And adding perpendicular information helps you get there.  
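
A minimal sketch of the principal component analysis intuition above, using random placeholder data in which 10 correlated “sequences” are generated from two underlying sources of contrast:

    import numpy as np

    rng = np.random.default_rng(0)
    sources = rng.normal(size=(5000, 2))                   # two underlying contrasts
    mixing = rng.normal(size=(2, 10))
    data = sources @ mixing + 0.05 * rng.normal(size=(5000, 10))  # 10 correlated "sequences"

    # PCA via the singular values of the mean-centered data.
    centered = data - data.mean(axis=0)
    singular_values = np.linalg.svd(centered, compute_uv=False)
    explained = singular_values ** 2 / np.sum(singular_values ** 2)
    print(np.round(explained, 3))  # most of the variance sits in the first component or two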

You did your academic training in Italy, progressing directly from undergraduate studies to graduate, doctoral, and postdoctoral work in biomedical engineering. But before any of that, you had worked for a couple of years at a logistics company. Tell me about that.

I studied economics and accounting in high school, but the only subjects I really enjoyed were math and physics—especially physics, which just felt intuitive to me. But at that time, I wasn’t particularly focused, and I didn’t have a clear sense of direction. My family didn’t have a history of higher education; we come from a humble background, so I didn’t have many examples of what my future could be beyond just going with the flow.

After high school, I ended up in a train-to-hire program in logistics, a collaboration between Bocconi University and my high school, and that led me to my first job.

One day, a colleague casually mentioned that he had developed a software calculator. I had always considered myself pretty good with computer hardware, but I realized at that moment that I had no idea how to write software. So I asked him, “How do you even start building a calculator?” And he said, “Visual Basic.” And I was like, wow.

This is around 2001.  

Yes, and luckily I was one of the first people in my area to get a fast internet connection because they were testing DSL there. That gave me a huge advantage: I could search online, download books, watch video tutorials, and access resources that my peers, who were still using dial-up, couldn’t. That’s what really opened up the world of programming to me.

I quit the logistics job and at first enrolled in the telecommunications engineering program at Politecnico di Milano. But after my first year, I realized that I wasn’t interested in the speed of cable—I was interested in the human body. I changed my major to biomedical engineering, bringing my software background with me. And from then on, I focused on AI, machine learning, big data, cognitive systems, and neurodegenerative diseases—anything that combined technology with human health. That has shaped everything I’ve worked on since.

How did you come to the realization that you’re interested in the human body? Physics relies on conventions of abstraction. An interest in the body seems much more, well, embodied.

Physics tells us that the body is governed by fundamental laws—gravity, electromagnetism, kinematics—but what has always fascinated me is that, despite these constraints, we move as we want. What is the force that allows me to move differently than if I were simply following deterministic physical laws?

If we lived in a completely deterministic system, every action should be 100 percent predictable, but it’s not. That contradiction between the predictability of physics and the complexity of human behavior was what drew me to the human body. Understanding why we move, think, and react the way we do is what makes studying biomedical engineering, AI, and neuroscience so fascinating to me.

Would it be an exaggeration to say that what has led you to biomedical engineering was, at least in part, a preoccupation with free will?  

I’ve never framed it that way, but yes, I think there’s always been an underlying curiosity about free will and the nature of existence there. Studying neuroimaging and the human body, which is the only direct representation of reality that we can perceive, is a good way to understand more. I’m still very intrigued by these kinds of questions.

In your current research and software engineering work, do you feel like you’re still mining some aspects of these interests?  

Absolutely, those interests are still very much a part of my thinking, but I’ve also started reflecting on new dimensions, particularly about what my future will look like, where I’ll be in ten years, what I’ll be working on. I think that’s something that comes with age, but the fundamental questions are still present. Sometimes when I’m walking, I try to understand exactly what forces are acting on my bones—it’s a level of awareness that I love. It might sound cheesy coming from a biomedical engineer, but I find it really interesting how people change direction, react to eye contact, or how one person’s behavior or emotional state affects another. I think this is important to pay attention to and an interesting way of trying to understand reality.