Data Fusion Using Independent Vector Analysis: Focus on Model Match, Interpretability, and Reproducibility
Tülay Adali
University of Maryland Baltimore County
abstract
In many fields today, such as neuroscience, remote sensing, computational social science, and the physical sciences, multiple sets of data are readily available. Matrix and tensor factorizations enable joint analysis, i.e., fusion, of these multiple datasets such that they can fully interact and inform each other while minimizing the assumptions placed on their inherent relationships. A key advantage of these methods is the direct interpretability of their results. This talk presents an overview of models based on independent component analysis (ICA) and its generalization to multiple datasets, independent vector analysis (IVA), with examples in the fusion and analysis of neuroimaging data. The relationship of IVA to other methods, such as multiset canonical correlation analysis (MCCA), is discussed, highlighting a number of new research directions. The importance of computational reproducibility is also addressed, with a focus on its relationship to model match and interpretability.
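As a rough, self-contained illustration of the ICA building block that IVA generalizes (a generic kurtosis-based FastICA sketch, not Prof. Adali's specific algorithms; all data, dimensions, and variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent, non-Gaussian sources: super-Gaussian (Laplacian)
# and sub-Gaussian (uniform), n samples each.
n = 20000
S = np.vstack([rng.laplace(size=n), rng.uniform(-1.0, 1.0, size=n)])

# Observed mixtures: x = A s, with an arbitrary invertible mixing matrix.
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whitening: center, then decorrelate and normalize the mixtures.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# Symmetric FastICA with the cubic (kurtosis-based) nonlinearity.
W = rng.standard_normal((2, 2))
for _ in range(100):
    WZ = W @ Z
    W_new = (WZ ** 3) @ Z.T / n - 3.0 * np.diag((WZ ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt  # symmetric decorrelation: (W W^T)^(-1/2) W

# Recovered sources match the true ones up to permutation and sign:
Y = W @ Z
C = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(np.round(C, 2))
```

Each row of `C` then has a single entry close to 1, reflecting recovery up to the permutation and sign ambiguities inherent to ICA; IVA extends this idea by exploiting the statistical dependence of corresponding sources across multiple datasets.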
bio
Tülay Adali is a Distinguished University Professor at the University of Maryland Baltimore County (UMBC), Baltimore, MD. Prof. Adali has been active within the IEEE; her recent roles include Chair of the IEEE Brain Technical Community (2023) and Vice President for Technical Directions of the Signal Processing Society (2019-2022). She is currently the Editor-in-Chief of the IEEE Signal Processing Magazine. She is a Fellow of the IEEE, AIMBE, and AAIA, a Fulbright Scholar, an IEEE SPS Distinguished Lecturer, and the UMBC Presidential Research Professor for 2024-2027. She is the recipient of a number of awards, including the SPS Meritorious Service Award, the Humboldt Research Award, an IEEE SPS Best Paper Award, and the NSF CAREER Award. Her current research interests are in statistical signal processing and machine learning, with applications to neuroimaging data analysis.
Katie Bouman
Dr. Katherine L. (Katie) Bouman is an assistant professor in the Computing and Mathematical Sciences, Electrical Engineering, and Astronomy Departments at the California Institute of Technology. Her work combines ideas from signal processing, computer vision, machine learning, and physics to find and exploit hidden signals for scientific discovery. Before joining Caltech, she was a postdoctoral fellow in the Harvard-Smithsonian Center for Astrophysics. She received her Ph.D. in EECS from MIT, working in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and her bachelor's degree in Electrical Engineering from the University of Michigan. She is a Rosenberg Scholar, Heritage Medical Research Institute Investigator, recipient of the Royal Photographic Society Progress Medal, Electronic Imaging Scientist of the Year Award, University of Michigan Outstanding Recent Alumni Award, and co-recipient of the Breakthrough Prize in Fundamental Physics. As part of the Event Horizon Telescope Collaboration, she is co-lead of the Imaging Working Group and acted as coordinator for papers concerning the first imaging of the M87* and Sagittarius A* black holes.
Yuejie Chi
Dr. Yuejie Chi is the Sense of Wonder Group Endowed Professor of Electrical and Computer Engineering in AI Systems at Carnegie Mellon University, with courtesy appointments in the Machine Learning department and CyLab. She received her Ph.D. and M.A. from Princeton University, and B. Eng. (Hon.) from Tsinghua University, all in Electrical Engineering. Her research interests lie in the theoretical and algorithmic foundations of data science, signal processing, machine learning and inverse problems, with applications in sensing, imaging, decision making, and AI systems. Among others, Dr. Chi received the Presidential Early Career Award for Scientists and Engineers (PECASE), SIAM Activity Group on Imaging Science Best Paper Prize, IEEE Signal Processing Society Young Author Best Paper Award, and the inaugural IEEE Signal Processing Society Early Career Technical Achievement Award for contributions to high-dimensional structured signal processing. She is an IEEE Fellow (Class of 2023) for contributions to statistical signal processing with low-dimensional structures.
Computational Time-of-Flight 3D Imaging
Miguel Heredia Conde
Institute for High-Frequency and Communication Technology,
University of Wuppertal, Germany
abstract
Time-of-Flight (ToF) imaging estimates distances from the delay experienced by modulated light when traveling from the source to a target and from the latter to a detector array. Consequently, like any other active ranging technique, ToF imaging faces performance bottlenecks due to the limited emitted power. More specifically, ToF imaging employs dense illumination over a wide field of view, as opposed to lidar, which relies on pencil beams. This severely reduces the power density and constrains the attainable accuracy and range. At the detector end, high dynamic range and low quantization noise are required. Additionally, model mismatches regarding the sensing process and the scene further degrade the accuracy of the depth estimate. Fortunately, the correlation sampling scheme typically implemented by ToF pixels offers outstanding flexibility and multiple degrees of freedom that can be leveraged from a computational sensing perspective.
In this talk, we will show how tailored computational imaging methods can help overcome fundamental limitations in ToF 3D imaging. More specifically, we will focus on the high data rate of ToF cameras, which prevents high image resolution; their high power consumption compared to conventional cameras; the limited range at which depth can be accurately estimated; and the measurement distortions produced by harmonic content in the modulation/demodulation waveforms and by multi-path interference. In-pixel adaptive quantization strategies implementing noise shaping will be presented that enable accurate depth imaging from streams of one-bit data, hence avoiding exorbitant data rates for high-resolution arrays. In pursuit of reduced power consumption, we will present a novel approach that enables passive ToF imaging by exploiting opportunistic sources of modulated light, such as LiFi or Visible Light Communications (VLC) modules, in the manner of an optical bistatic radar. Furthermore, gains in operating range and resolution can be obtained by means of ultrashort pulse shaping combined with low-density coded demodulation, in the spirit of compressive sensing. Finally, a single-shot Fourier-sampling camera is introduced that attains minimal harmonic distortion by inducing custom resonant effects in the ToF pixels. Hardware prototypes and initial evaluation results will also be shown that demonstrate the potential of these computational 3D imaging approaches to bypass the aforementioned limitations.
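To make the correlation-sampling principle behind continuous-wave ToF pixels concrete, here is a minimal sketch of the classic four-phase depth estimator (textbook material, not the speaker's hardware pipeline); the modulation frequency, amplitude, and offset are illustrative:

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s
F_MOD = 20e6             # illustrative modulation frequency, Hz

def correlation_samples(depth_m, amplitude=1.0, offset=2.0):
    """Ideal correlation samples C_k at phase offsets k*pi/2
    for a target at depth_m (round-trip phase shift phi)."""
    phi = 4 * math.pi * F_MOD * depth_m / C_LIGHT
    return [offset + amplitude * math.cos(phi - k * math.pi / 2)
            for k in range(4)]

def depth_from_samples(c0, c1, c2, c3):
    """Recover depth from the four samples; the constant offset
    cancels in the differences, and atan2 removes the amplitude."""
    phi = math.atan2(c1 - c3, c0 - c2) % (2 * math.pi)
    return C_LIGHT * phi / (4 * math.pi * F_MOD)

d_true = 3.25  # metres, within the ~7.5 m unambiguous range at 20 MHz
d_est = depth_from_samples(*correlation_samples(d_true))
print(round(d_est, 3))  # → 3.25
```

The same structure also exposes the degrees of freedom the abstract refers to: the four sampled correlations can be replaced by coded or adaptively quantized measurements while keeping the phase-to-depth relation intact.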
bio
Miguel Heredia Conde received the Dr.Eng. degree in sensor signal processing from the University of Siegen, Siegen, Germany, in 2016 and the Habilitation degree from the same university in 2022. In 2013, he joined the Center for Sensor Systems (ZESS), University of Siegen. Since then, he has also been a member of the DFG Research Training Group Graduiertenkolleg (GRK) 1564 “Imaging New Modalities.” Since 2016, he has been the Leader of the research group “Compressive Sensing for the Photonic Mixer Device” at ZESS, and since 2020, he has also been the General Manager of the H2020-Marie Skłodowska-Curie Innovative Training Network (MSCA-ITN) MENELAOSNT. In 2023 he joined the Institute for High-Frequency and Communication Technology, University of Wuppertal, where he is the Head of the research group on "Computational 3D Imaging". His current research interests include time-of-flight imaging systems, such as those based on the photonic mixer device (PMD), Terahertz imaging, compressive sensing, computational imaging, and unconventional sensing.
He was responsible for two lecture courses focused on Compressive Sensing (CS) at the University of Siegen from 2017 to 2023 and, since 2024, for another two at the University of Wuppertal, focused on CS and on optical imaging and sensing, respectively. Dr. Heredia Conde is a member of the IEEE Signal Processing Society (SPS) and the IEEE Standards Association (SA) and a regular reviewer for top-level conferences (ICASSP, etc.), for multiple Elsevier and IEEE Transactions, Letters, and Journals, and for the DFG (German Research Foundation). He has been a visiting researcher at CiTIUS (Area of Artificial Vision), University of Santiago de Compostela; at the Faculty of Physics (Division of Information Optics), University of Warsaw; and at the Department of Electrical and Electronic Engineering, Imperial College London. In 2020 he was a visiting lecturer at the Department of Applied Mathematics, University of Vigo. He has also been an invited speaker at multiple conferences and seminars. Dr. Heredia Conde is the Chair of the P3382 Performance Metrics for Magnetic Resonance Image (MRI) Reconstruction Working Group of the IEEE Synthetic Aperture Standards Committee.
Dr. Heredia Conde was one of the recipients of the 2006 Academic Excellence Prizes, awarded by the Government of Galicia, Spain. In 2017, he received the University of Siegen Prize for International Young Academics. In 2020 his collaborative work with Prof. Bhandari (ICL) was awarded the Best Paper Award at ICCP. In 2024 his group’s work on passive ToF imaging was awarded the Best Demo Award at CoSeRa.
Image quality in low-field MRI
Sairam Geethanath
Accessible Magnetic Resonance Laboratory, Division of Cancer Imaging Research,
Dept. of Radiology and Radiological Sciences, Johns Hopkins University School of Medicine
abstract
The resurgence of portable, low-field MRI systems aims to improve accessibility, but these systems face image quality challenges compared to clinical scanners. They are more portable, cost-effective, and safer due to lower RF power deposition, and they are easier to maintain because they do not require cooling agents such as liquid helium. However, temperature variations, lower magnetic field strength, and field inhomogeneity can lead to lower signal-to-noise ratios (SNR) and more significant geometric distortions. Understanding these variations is essential for guiding future in vivo studies and system deployments. In this presentation, I will discuss the repeatability of image quality metrics—specifically SNR and geometric distortion—at a magnetic field strength of 0.05 T over 10 days, with three sessions daily, and compare this data with a 3T MRI dataset. Next, we will examine the impact of electromagnetic interference (EMI) on image quality at such magnetic field strengths. Finally, I will provide resources related to low-field MR image quality, including datasets and tools, for the broader community to use and contribute to.
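As a rough sketch of how SNR repeatability might be quantified from repeated phantom scans, the following uses the standard two-acquisition difference method (signal from the mean of two scans, noise from the standard deviation of their difference); it is not necessarily the exact pipeline used in the talk, and all numbers, array sizes, and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def snr_difference_method(img_a, img_b, roi):
    """Two-acquisition SNR: signal from the mean of two repeated scans
    in an ROI, noise from the std of their difference divided by sqrt(2)."""
    a, b = img_a[roi], img_b[roi]
    signal = 0.5 * (a.mean() + b.mean())
    noise = (a - b).std(ddof=1) / np.sqrt(2)
    return signal / noise

# Simulate 10 days x 3 sessions of paired phantom scans.
roi = (slice(24, 40), slice(24, 40))
snrs = []
for _ in range(30):
    base = np.full((64, 64), 100.0)                # uniform phantom signal
    scans = base + rng.normal(0, 5, (2, 64, 64))   # two repeats, sigma = 5
    snrs.append(snr_difference_method(scans[0], scans[1], roi))

snrs = np.array(snrs)
# Summarize repeatability by the coefficient of variation across sessions.
cv = snrs.std(ddof=1) / snrs.mean()
print(f"mean SNR ~ {snrs.mean():.1f}, CV ~ {100 * cv:.1f}%")
```

With these simulated numbers the per-session SNR clusters around 20 (signal 100, noise 5), and the session-to-session spread plays the role of the repeatability metric tracked over the 10-day study.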
bio
Dr. Geethanath is an Assistant Professor at the Johns Hopkins University School of Medicine with a background in Magnetic Resonance (MR) technology development and clinical translation. His research interests include developing novel image acquisition and reconstruction methods aimed at delivering accessible MRI solutions to the world's underserved populations that do not have access to MRI. To this end, he was part of a team of five investigators in India that built India's first indigenous MRI scanner. Currently, he and his team are developing methods related to autonomous MRI, new methods of spatial encoding, and accelerated quantitative imaging to build hardware-cognizant methods. These developments are focused on quantifying and improving image quality in accessible MR systems such as low-field MRI. In the past, he has focused on accelerated MRI acquisition and reconstruction methods to overcome challenges related to spatio-temporal resolution. He has trained at Imperial College London, the University of Texas Southwestern Medical Center, and the University of Texas at Arlington.
Anne Gelb
Dr. Anne Gelb is the John G. Kemeny Parents Professor of Mathematics at Dartmouth College. Her work focuses on high order methods for signal and image restoration, classification, and change detection for real and complex signals using temporal sequences of collected data. There are a wide variety of applications for her research including speech recognition, medical monitoring, credit card fraud detection, automated target recognition, and video surveillance. A common assumption made in these applications is that the underlying signal or image is sparse in some domain. While detecting such changes from direct data (e.g. images already formed) has been well studied, Professor Gelb’s focus is on applications such as magnetic resonance imaging (MRI), ultrasound, and synthetic aperture radar (SAR), where the temporal sequence of data are acquired indirectly. In particular, Professor Gelb develops algorithms that retain critical information for identification, such as edges, that is stored in the indirect data. Professor Gelb is currently investigating how to use these techniques in a Bayesian setting so that the uncertainty of the solutions may also be quantified, and is interested in applying these techniques for purposes of sensing, modeling, and data assimilation for sea ice prediction. Her research is funded in part by the Air Force Office of Scientific Research, the Office of Naval Research, the National Science Foundation, and the National Institutes of Health, and she regularly collaborates with scientists at the Wright-Patterson Air Force Research Lab and the Cold Regions Research and Engineering Laboratory (CRREL).
Development activities in the IEEE 802 LAN/MAN Standards Committee
James Gilb
Chair, IEEE 802 LAN/MAN Standards Committee (LMSC)
abstract
The IEEE 802 LMSC is a leading consensus-based open standards development committee for networking standards that are used by industry globally. It produces standards for networking devices, including wired and wireless local area networks (“LANs” and “WLANs”), wireless specialty networks (“WSNs”), wireless metropolitan area networks (“Wireless MANs”), and wireless regional area networks (“WRANs”). Technologies produced by implementers of our standards are a critical element for all networked applications today. Essentially every packet sent on the Internet is conveyed by IEEE 802 technology at some point in its trip.
IEEE 802 has dozens of active projects developing standards focused on improving networking, wired and wireless, at rates from 100 kb/s to 1.2 Tb/s. This talk will provide an overview of some of the current work in IEEE 802, including:
• Advances in Time Sensitive Networking
• Next speeds in Ethernet – into Tb/s
• Single pair Ethernet for automotive, industrial and aerospace
• Looking forward to the next generation of WLAN: Wi-Fi 7
• WLAN sensing, ambient power operation, AI/ML and ranging
• Ultra-Wide Band (UWB) for low-latency communications, sensing and ultra-low energy implementations
• UWB in access points and for body area networks (BAN)
• Terahertz communications
• Smart Cities, Distributed Energy Resources (DER)
• Optical wireless communications
bio
James P. K. Gilb has over 30 years of experience in a variety of areas, including radar-absorbing materials, RFIC design, radio systems architecture, and MAC protocols. He joined General Atomics Aeronautical Systems (GA-ASI) in 2016 as Principal RF Systems Engineer, working on sensors and communication systems. He is currently the IEEE 802 LAN/MAN Standards Committee (LMSC) Chair. The IEEE 802 LMSC is the leading consensus-based open standards development committee for networking standards that are used by industry globally. He has nine issued patents and many papers in refereed journals. He has been the technical editor of ten standards and is the author of three books. He received BSEE, MSEE, and Ph.D. degrees in Electrical Engineering from Arizona State University.
Snapshot tomographic imaging and volumetric video with multi-camera array microscopes
Roarke Horstmeyer
Duke University
abstract
This talk describes a new type of computational microscope that uses an array of compact cameras to form high-resolution, high-speed synchronized 3D videos. We utilize an architecture termed a “multi-camera array microscope” (MCAM), which contains up to 96 synchronized digital image sensors and associated lenses to capture gigapixel-scale snapshot measurements. We have developed several unique optical configurations that use the MCAM along with a primary lens and/or mirror to capture many unique angular perspectives of objects of interest in a single snapshot. Co-designed 3D surface and tomographic volume formation software then allows us to produce high-resolution 3D results. When operated at video rates (or faster), these new MCAM platforms allow us to create fully 3D videos of highly dynamic specimens across large areas at near-cellular resolution, such as freely moving model organisms and tissue surfaces during surgical operations, with minimal motion artifacts. Future efforts aim to jointly improve volumetric resolution via synthetic aperture techniques.
bio
Roarke Horstmeyer is an assistant professor of Biomedical Engineering and Electrical and Computer Engineering at Duke University. He is also the Scientific Director at Ramona Optics. He develops microscopes, cameras and computer algorithms for a wide range of applications, from forming large-area, high-resolution 3D videos of freely moving organisms to detecting blood flow and brain activity deep within tissue. Dr. Horstmeyer’s lab currently performs research within the fields of ptychography, high-content microscopic imaging, physics informed machine learning algorithms, and biophotonic measurement systems. Before joining Duke in 2018, Dr. Horstmeyer was a visiting professor at the University of Erlangen in Germany and an Einstein International Postdoctoral Fellow at Charité Medical School in Berlin. Prior to his time in Germany, Dr. Horstmeyer earned a PhD from Caltech’s Electrical Engineering department (2016), an MS from the MIT Media Lab (2011), and bachelor’s degrees in Physics and Japanese from Duke in 2006.
Computational Imaging across the Scales
Ivo Ihrke
University of Siegen, Germany
abstract
The Computational Imaging methodology and community bridge gaps across application domains, enabling interesting connections to be drawn between seemingly disparate problems.
Traditionally, imaging modalities have been developed by separate communities, often reinventing similar concepts, but also developing different ideas when faced with common problems. The underlying physical concepts and mathematical models are, however, sufficiently similar to enable a transfer of insights from one domain to the next. In my talk I will therefore discuss imaging modalities at different scales, from synthetic aperture radar through optical imaging to electron microscopy, and point out similarities and differences in the underlying acquisition problems. In doing so, I aim to reinforce the spirit of our community: that, essentially, most imaging problems are similar and can be understood and dealt with on a common theoretical basis. This, in turn, enables us to serve as "translators" between different application domains and aids knowledge transfer and homogenization.
bio
Ivo Ihrke is professor of Computational Sensing at University of
Siegen, Germany, a member of the university's ZESS (center for sensor
systems) as well as affiliated with the Fraunhofer Institute for High
Frequency Physics and Radar Techniques. Prior to joining Siegen, he
was a staff scientist at the Carl Zeiss research department, which he
joined on-leave from Inria Bordeaux Sud-Ouest, where he was a
permanent researcher. At Inria he led the research project
"Generalized Image Acquisition and Analysis" which was supported by an Emmy-Noether fellowship of the German Research Foundation (DFG). Prior to that he was heading a research group within the Cluster of Excellence "Multimodal Computing and Interaction" at Saarland
University. He was an Associate Senior Researcher at the MPI
Informatik, and associated with the Max-Planck Center for Visual
Computing and Communications. Before joining Saarland University he
was a postdoctoral research fellow at the University of British
Columbia, Vancouver, Canada, supported by the Alexander von
Humboldt-Foundation. He received a MS degree in Scientific Computing
from the Royal Institute of Technology (KTH), Stockholm, Sweden and a
PhD (summa cum laude) in Computer Science from Saarland University.
Ivo has organized several Computational Imaging events, among them the CVPR Workshop PROCAMS in 2012, the GCPR Workshop on Imaging New Modalities in 2013, a Dagstuhl seminar on Computational Imaging in 2015, the ZEISS Symposium on Computational Imaging in 2016, and a Heraeus Seminar on Computational Optical Microscopy in 2026. He was program chair of the German Conference on Pattern Recognition (GCPR) in 2022 and will be general chair of Vision, Modeling and Visualization (VMV) in 2026.
He holds 20+ patents and is a cofounder of K|Lens GmbH, a company
specializing in plenoptic imaging for industrial quality control and
artificial intelligence. He is interested in all aspects of Computational
Imaging, including theory, mathematical modeling, algorithm design and their efficient implementation, as well as hardware concepts and their experimental realization and characterization.
Inverse Rendering from Propagating Light
David Lindell
University of Toronto, Canada
abstract
Inverse rendering techniques aim to recover scene properties—such as geometry, reflectance, and lighting—from images. However, conventional images only capture steady-state light transport effects, which are far less informative than observations of time-resolved light transport. In this talk, I describe a new class of inverse rendering techniques based on ultrafast (picosecond-resolution) measurements of propagating light, captured using single-photon detectors. These techniques enable rendering videos of complex, time-resolved light transport effects from novel viewpoints (e.g., multiple scattering, refraction, and diffraction), recovery of material properties, and 3D reconstruction from multiply scattered light. Finally, I discuss a first-of-its-kind approach for visualizing asynchronously propagating light pulses emitted from multiple laser sources, which we leverage to passively reconstruct millimeter-scale 3D geometry over room-scale distances.
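As a toy illustration of the time-resolved measurements involved (the basic pulsed time-of-flight relation, not the speaker's inverse rendering methods; the bin width is illustrative), a single-photon detector that histograms photon arrivals at picosecond resolution implies a depth for each time bin:

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s
BIN_PS = 4.0             # illustrative histogram bin width, picoseconds

def depth_from_bin(bin_index):
    """Depth implied by a photon arriving in a given time bin,
    assuming a direct round trip: d = c * t / 2."""
    t = bin_index * BIN_PS * 1e-12
    return C_LIGHT * t / 2

# A return in bin 5000 (a ~20 ns round trip) corresponds to ~3 m.
print(round(depth_from_bin(5000), 3))  # → 2.998
```

Transient measurements record an entire such histogram per pixel; the techniques in the talk exploit the full time-resolved signal, including multiply scattered light that arrives in later bins, rather than just the first return.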
bio
David Lindell is an Assistant Professor in the Department of Computer Science at the University of Toronto. Prior to joining the University of Toronto, he received his Ph.D. from Stanford University. His work combines emerging sensors, machine learning, and physics-based models to enable new capabilities in visual computing. He is a recipient of the ACM SIGGRAPH Outstanding Dissertation Award Honorable Mention, a Google Research Scholar award, a Sony Faculty Innovation Award, and the 2023 Marr Prize.
Synthetic Aperture and Inverse Multiple Scattering in Optical Diffraction Tomography
Lei Tian
Computational Imaging Systems Lab,
Boston University
abstract
In this talk, I will present our advancements in leveraging the synthetic aperture principle and multiple scattering information for high-resolution 3D reconstruction in optical diffraction tomography. I will discuss our methods in both transmission and reflection geometries, highlighting key algorithmic developments, experimental validation, and applications in biomedical and industrial imaging.
bio
Lei Tian is an Associate Professor in the Department of Electrical and Computer Engineering and Biomedical Engineering, and directs the Computational Imaging Systems Lab at Boston University (BU). He received his Ph.D. from MIT and was a postdoctoral associate at UC Berkeley. His research focuses on developing computational imaging techniques for biomedical and semiconductor applications. Dr. Tian is a Scialog Fellow in Advancing BioImaging and was awarded the NSF CAREER Award and the BU College of Engineering Early Career Excellence in Research Award.