TR&D Projects

The research activities of CAI²R span three areas of novel, high-impact imaging technology development, which are grouped into individual Technology Research and Development (TR&D) projects. These core technology developments are complemented by Collaborative Projects (CPs) and Service Projects (SPs), which focus on three general areas of high public health impact: cancer, musculoskeletal disease, and neurologic disease. We have also proposed a fourth TR&D project, to begin in 2019.

TR&D Project 1

Reimagining the Future of Scanning: From snapshots to streaming, from imitating the eye to emulating the brain
Principal Investigators: Florian Knoll, PhD; Daniel K. Sodickson, MD, PhD

The goal of this project is to replace traditional complex and inefficient imaging protocols with simple, comprehensive acquisitions that also yield quantitative parameters sensitive to specific disease processes.

Our project team led the way in establishing rapid, continuous, comprehensive imaging methods, which are now available on a growing number of commercial MR scanners worldwide. We also helped to establish a new paradigm of streaming MR acquisition and reconstruction, which is the subject of extensive study at numerous research centers around the world.

Having made substantial progress in the development of rapid, continuous, comprehensive imaging methods, we are now working to advance further from the old imaging paradigm of carefully calibrated snapshots to a new paradigm of information-rich streaming.

Beginning in 2016, TR&D 1 investigators were among the first to demonstrate the use of deep learning methods for rapid, robust reconstruction of undersampled MR data acquisitions. So far, our preliminary work in this area has focused on the reconstruction of individual image frames, but the next logical step is to extend learned reconstructions to continuously acquired comprehensive data streams. Here we see a powerful analogy to the way in which the human brain processes incoming streams of sensory data and converts them rapidly into actionable information. In this sense, our research plan represents a move for medical imaging technology from imitating the eye to emulating the brain.
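
Learned reconstructions of undersampled data typically alternate between a regularization (prior) step and a data-consistency step that re-enforces agreement with the acquired k-space samples. The sketch below is a minimal illustration of this alternating structure in plain NumPy; a simple neighbor-averaging smoother stands in for the trained network, and all function names are ours, not those of any released code.

```python
import numpy as np

def data_consistency(x, y, mask):
    """Overwrite k-space entries that were actually acquired."""
    k = np.fft.fft2(x)
    k[mask] = y[mask]                 # trust measured data where available
    return np.fft.ifft2(k)

def smooth(x):
    """Placeholder for a learned CNN regularizer: neighbor averaging."""
    return 0.25 * (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                   np.roll(x, 1, 1) + np.roll(x, -1, 1))

def unrolled_recon(y, mask, n_iter=10, lam=0.5):
    """Alternate prior and data-consistency steps, as in an unrolled network.

    y:    undersampled k-space data (zeros at unsampled locations)
    mask: boolean sampling pattern of the same shape
    """
    x = np.fft.ifft2(y * mask)        # zero-filled starting image
    for _ in range(n_iter):
        x = (1 - lam) * x + lam * smooth(x)   # prior ("learned") step
        x = data_consistency(x, y, mask)      # data-consistency step
    return x
```

In a genuine learned reconstruction, `smooth` would be a trained network and `lam` a learned weight; the data-consistency step, however, is applied in essentially this form.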

We aim to enrich imaging data streams with new quantitative information, to advance the extraction of actionable information from those data streams, and to feed the resulting information back into the design of our acquisition software and hardware and/or into the guidance of therapeutic procedures.

Recent developments put us in a position to question long-established assumptions about scanner design, originating from the classical imaging pipeline of human radiologists interpreting multiple series of qualitative images.

The new ability to question such assumptions—coupled with our core expertise in pulse-sequence design, parallel imaging, compressed sensing, model-based image reconstruction, and machine learning—creates an opportunity to reimagine the process of MR scanning.

Note: At the time of founding of our Center, in October 2014, this project was titled “Toward Rapid Continuous Comprehensive MR Imaging: New Methods, New Paradigms, and New Applications.”

TR&D Project 2

Unshackling the Scanners of the Future: From rigid control to flexible sensor-rich navigation
Principal Investigators: Christopher M. Collins, PhD; Ryan Brown, PhD

To date, we have designed and deployed numerous new tools, including innovative and influential RF coil design concepts, workhorse transmitter and detector coil arrays for high-performance MRI in a wide range of application areas, and new approaches to ensure RF safety.

When it comes to RF control, however, we have gravitated increasingly toward a new philosophy, which arose directly from our experience in this project and which now motivates our work. The evolution of our thinking is illustrated by two recent papers from our team in Nature-family journals. These papers exemplify a transition from rigid to flexible hardware designs, and from controlling inhomogeneities in the scanner environment to embracing them:

  1. Our 2016 Nature Communications article described a new method for multiparametric imaging with heterogeneous radiofrequency fields, taking advantage of principles derived from MR Fingerprinting, which has also been a subject of extensive study by our TR&D 1 team. This work may be viewed as a highly flexible counterpoint to traditional rigid and workflow-intensive parallel RF transmission techniques.
    Rather than arranging for carefully calibrated superpositions of RF fields from distinct transmit elements, we proposed to leverage distinct transmit field distributions from diverse transmit elements to ensure that no area of the target field of view goes unexcited, and to map the inhomogeneous field distributions along with other MRF parameters of interest. Siemens now offers a works-in-progress package using this approach.
  2. Our article on a flexible RF detector coil array stitched into a glove was featured on the cover of Nature Biomedical Engineering in August 2018. The approach we used to design this wearable coil array was a significant departure from the way we had designed a rigid many-element bore-lining array a few years earlier, though the goal of enabling robust continuous acquisitions remained the same. Rather than trying to arrange precise cancellation of couplings between meticulously arranged rigid elements, as in the bore-lining array, we used novel flexible elements that are immune to such couplings by virtue of their inherent high impedance.
    This development represents an evolution from rigid to flexible hardware, and from fighting subject-specific variations to embracing them.
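
The MR Fingerprinting principle invoked above rests on a dictionary match between measured signal evolutions and precomputed simulated ones. The following sketch is our own illustration of that matching step, not the published implementation; the exponential-decay toy dictionary and all names are assumptions made for brevity.

```python
import numpy as np

def mrf_match(signals, dictionary, params):
    """Match measured fingerprints to a precomputed dictionary.

    signals:    (V, T) measured signal evolutions, one row per voxel
    dictionary: (D, T) simulated evolutions for D parameter combinations
    params:     (D, P) tissue/field parameters (e.g. T1, T2, B1+) per entry
    Returns the best-matching parameter set for each voxel.
    """
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    idx = np.argmax(np.abs(s @ d.T), axis=1)  # maximal normalized inner product
    return params[idx]
```

Because the match uses normalized inner products, inhomogeneous scaling of the received signal (for example, from a spatially varying transmit field) does not spoil the parameter estimate, which is one reason the fingerprinting framework tolerates heterogeneous RF fields.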

Our research is now moving further in the direction of flexibility, aiming to navigate rather than control complex scanner environments.

We are developing new flexible RF coil arrays, and also exploring what information can be gleaned from new types of sensors. Just as self-driving cars, continuously probing their environment with LIDAR and other sensors, have captured the public imagination, an analogous model of “self-driving scanners,” outfitted with a sufficient number and variety of sensors to be able to navigate through substantial inhomogeneities and dynamic variations, warrants attention.

One such sensor is the PilotTone motion-detection system developed in TR&D 3. Other sensors may include ultrasound transducers and 3D cameras.

To make sense of the resulting sensor data stream, we plan to use the methods developed in TR&D 1 as well as denoising techniques developed in TR&D 4. We also aim to close the loop by providing practical feedback during therapies such as MR-guided focused ultrasound.

Note: At the time of founding of our Center, in October 2014, this project was titled “Radiofrequency Field Interactions with Tissue: New Tools for RF Design, Safety, and Control.”


TR&D Project 3

Enriching the Data Stream: MR and PET in concert
Principal Investigators: Fernando Boada, PhD; Giuseppe Carlucci, PhD

Since the launch of our Center in 2014, we have made notable progress in joint reconstruction of MR and PET datasets, as well as in joint physiologic modeling. We have also built a state-of-the-art radiochemistry facility at our headquarters, with both a research lab (operated by TR&D 3 staff) and a clinical production lab (operated by PETNET industry partners) supplied by our cyclotron, and in close proximity to our PET-MR scanner.

We have produced a number of custom ¹¹C and ¹⁸F radiotracers in response to the needs of our Collaborative and Service Projects. Our use of standard tracers to add value to clinical evaluation of epilepsy and brain tumors has resulted in changes to referral patterns among NYU Langone neurologists and neurosurgeons.

With these results and resources in hand, we aim to further strengthen the synergy between MR and PET.

In collaboration with Siemens and the TR&D 2 team, we have demonstrated the value of new sensors for motion correction, a technology we continue to develop and disseminate. Together with the TR&D 1 team, we have begun to bring the power of machine learning to bear on the problem of joint image reconstruction, leveraging both mutual and complementary information between MR and PET acquisitions. As we continue this work, we are also extending the use of our machine learning algorithms for reduction of partial voluming artifacts in PET images and for reconstruction of list mode data from low-specific-activity radiopharmaceuticals.

We are also developing tracers that specifically leverage simultaneous acquisition and joint reconstruction, and that enable validation of integrated physiological monitoring.

Our broader goal is to enrich the imaging data stream. If we think of medical imaging as an extension of human sight, we may say that combining MR and PET acquisitions is analogous to using multiple senses, rather than a single sense, to perceive, understand, and guide decision-making about the world. In daily life, one is treated to a symphony of sensory inputs: auditory, visual, tactile, olfactory, and gustatory. All of these inputs arrive simultaneously, and all contribute to one’s overall impression of any event. In a sense, then, like TR&D 2, our TR&D 3 project aims at a multisensory scanner, generating multiple complementary data streams to be interpreted with the aid of artificial intelligence.


TR&D Project 4 (proposed)

Revealing Microstructure: Biophysical modeling and validation for discovery and clinical care
Principal Investigators: Dmitry Novikov, PhD; Els Fieremans, PhD

Starting in 2019, we propose to add a new TR&D project to our BTRC portfolio.

With TR&D projects 1-3 focused on generating rich, quantitative data streams, a key question remains: What do these data streams really mean at the level of tissue function?

Answering this question requires bridging spatial scales from the macroscopic (millimeter) dimensions of voxels to the mesoscopic (micrometer) dimensions of cells, where many disease processes originate. Our proposed TR&D 4 team (who led one of our most productive Collaborative Projects in the previous funding period, leading our Advisory Committee to recommend a new TR&D) has published seminal work elucidating the mesoscale origins of the diffusion-weighted MR signal, and identifying which degrees of freedom can, and which cannot, be derived reliably from signals averaged over cellular ensembles and coarse-grained by time-dependent diffusion.

In the proposed research plan for this project, the team will build upon that work, developing and sharing a range of practical acquisition, modeling, and validation tools, and translating them to the clinic. They will borrow methodology from fundamental physics to derive new functional forms that capture the key information content of MR signals.
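
One example of such a functional form, stated here purely for illustration, is the power-law approach of the time-dependent diffusion coefficient to its macroscopic limit, a standard result of the coarse-graining framework this team helped establish:

```latex
D(t) \simeq D_\infty + \mathrm{const}\cdot t^{-\vartheta},
\qquad \vartheta = \frac{p + d}{2},
```

where $d$ is the spatial dimensionality and the exponent $p$ characterizes long-range structural correlations of the tissue. Measuring $\vartheta$ therefore identifies the class of microstructural disorder without having to resolve individual cells.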

Since this information content is always at risk of being obscured by noise, the team will also make use of new physics-based denoising methods they have recently introduced, based on random matrix theory as applied to complementary data acquisitions, such as distinct diffusion encodings. Complementary acquisitions, of course, are a cross-cutting theme of our Center, and we plan to implement these new denoising methods for multi-coil reconstruction, improved artifact correction, and motion detection in other TR&D projects.

In addition to our TR&D projects, a wide range of Collaborative Projects and Service Projects will benefit concretely from the tools developed in TR&D 4.

TR&D 4 will also add new scientific rigor to the research enterprise of our Biomedical Technology Resource Center via a careful program of model validation: not only testing key model assumptions against the predicted functional behavior of the signal, but also using various forms of microscopy to inspect microstructural features in test systems.

We propose to develop and validate tissue- and disease-specific models for neurodegeneration, muscular disorders, and cancer. Our models, moreover, will provide a physical and biological groundwork for the more data-driven models explored in other projects. In fact, we will be able to juxtapose the predictions of biophysical models with those of the learned models in TR&D 1, and perhaps even to build functional forms derived from our microstructural work into integrated reconstruction and modeling networks, constraining AI with a form of physical intuition.

At the same time, we plan to take advantage of our embedded collaborative and translational framework to bring microstructure into the clinic, a process which has until now been hampered not only by a lack of validation but also by long scan times, cumbersome workflows, and, critically, a lack of day-to-day interaction between basic microstructure researchers and clinicians.

We will explore several test cases in which microstructure may be used to inform diagnosis and therapy, from reducing overtreatment of prostate cancer to improving neurosurgical planning.

The proposed TR&D 4 will enrich our data streams with new microstructural information. This new project will leverage the strengths of our celebrated biophysics team, who have been at the forefront of a recent revolution in the understanding of tissue microstructure based on macroscopic MR measurements. We will aim to integrate microstructural mapping into the comprehensive image data acquisitions developed in other TR&D projects. Conversely, we will also use the developments from other projects to improve microstructural mapping, drawing on results from TR&D 1, 2, & 3 for efficient targeted acquisitions and optimized hardware and reconstruction algorithms, in order to go all the way from k-space data to maps of microstructural parameters.

Areas of application for the methods developed in the proposed TR&D 4 will span all of the key focal areas of Collaborative and Service Projects: neuroimaging, musculoskeletal imaging, and oncologic imaging.




Philanthropic Support

We gratefully acknowledge generous support for radiology research at NYU Langone Health from:
• The Big George Foundation
• Raymond and Beverly Sackler
• Bernard and Irene Schwartz
