Imaging and Sensing Session
2:00 pm to 5:00 pm, February 23 on Zoom
Imaging and sensing technologies are crucial for making sense of the physical world around us. This interdisciplinary session focuses on innovative computational and hardware approaches to forming images and obtaining information from sensors. The session will host presentations on a wide variety of imaging modalities, such as electromagnetic, optical, and acoustic imaging. Although these modalities target very different applications, the session aims to bring out the commonalities they share in the physics of imaging, the design of imaging instruments, and the design of image-formation algorithms. It will also cover topics such as sensor design, sensor networks, and IoT. The Imaging and Sensing session will feature a keynote speech by Prof. Vivek Goyal, a prominent researcher in the area.
“One Click at a Time: Photon- and Electron-Level Modeling in Computational Imaging”
Time: 2:00 pm to 3:00 pm, February 23
Talk Abstract: Detectors that are capable of sensing a single photon are no longer rare. They are used for 3D imaging on the iPad Pro and in many autonomous vehicles and mobile devices. Similarly, direct electron detection is used in particle-beam microscopy. This talk will focus on how modeling at the level of individual detected particles leads to unconventional processing methods with surprising capabilities.
In single-photon lidar, when detector dead times are insignificant, Poisson process models can be used directly and lead to accurate depth and reflectivity imaging with as few as one detected photon per pixel. Under high ambient light or with high dynamic range of intensity, dead times are significant and create statistical dependencies that invalidate a Poisson process model. In this case, Markov chain modeling can mitigate the bias of conventional methods. In focused ion beam microscopy, modeling at the level of individual incident particles and emitted secondary electrons inspires a new way to acquire and interpret the data. In both families of applications, principled statistical models and estimation methods lead to significant imaging improvements.
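To make the low-dead-time regime concrete, here is a minimal sketch of maximum-likelihood depth estimation under a Poisson process model, from just a few photon timestamps. The pulse shape, rates, and grid search are invented for illustration and are not from the talk:

```python
import numpy as np

# Hypothetical parameters (illustrative only): a Gaussian laser pulse and a
# uniform ambient-light background over one repetition period.
C = 3e8            # speed of light, m/s
SIGMA = 0.5e-9     # pulse RMS width, s
T_REP = 100e-9     # repetition period, s

def log_likelihood(timestamps, delay, signal_rate=1.0, bg_rate=0.1):
    """Poisson-process log-likelihood of photon arrival times for a
    candidate round-trip delay (constant terms dropped)."""
    pulse = np.exp(-0.5 * ((timestamps - delay) / SIGMA) ** 2)
    rate = signal_rate * pulse / (SIGMA * np.sqrt(2 * np.pi)) + bg_rate / T_REP
    return np.log(rate).sum()

def ml_depth(timestamps, grid=None):
    """Grid-search maximum-likelihood depth from a handful of detections."""
    if grid is None:
        grid = np.linspace(0, T_REP, 2000)
    scores = [log_likelihood(timestamps, d) for d in grid]
    return C * grid[int(np.argmax(scores))] / 2  # round trip -> one way

# Even a few detections around a 20 ns round-trip delay localize the surface:
rng = np.random.default_rng(0)
ts = 20e-9 + SIGMA * rng.standard_normal(3)
print(ml_depth(ts))  # close to c * 20e-9 / 2 = 3.0 m
```

This is the "insignificant dead time" case; with significant dead times, the detections are no longer a Poisson process and the Markov chain modeling mentioned above is needed instead.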
Biography: Vivek Goyal received his doctoral degree in electrical engineering from the University of California, Berkeley. He was a Member of Technical Staff at Bell Laboratories, a Senior Research Engineer for Digital Fountain, and the Esther and Harold E. Edgerton Associate Professor of Electrical Engineering at MIT. He was an adviser to 3dim Tech, winner of the 2013 MIT $100K Entrepreneurship Competition Launch Contest Grand Prize, and consequently with Google/Alphabet Nest Labs 2014-2016. He is now a Professor and Associate Chair of Doctoral Programs in the Department of Electrical and Computer Engineering at Boston University. Dr. Goyal is a Fellow of the IEEE and of Optica (formerly OSA), and he and his students have been awarded ten IEEE paper awards and eight thesis awards. He is a co-author of Foundations of Signal Processing (Cambridge University Press, 2014).
“Designing Personal Health Sensing Devices with Electrical Impedance Tomography”
Time: 3:00 pm to 3:30 pm, February 23
Talk Abstract: Electrical Impedance Tomography (EIT) is an imaging technique that measures conductivity, permittivity, and impedance of a subject. It works by attaching electrodes to the surface of the subject, and then using the electrodes to either inject current or measure the resulting voltages. Interpolating the raw signals then results in an image of the subject’s internal conductivity. In this talk, I will introduce EIT-kit, an electrical impedance tomography toolkit for designing and fabricating health and motion sensing devices. EIT-kit supports users across different stages of personal EIT device development. EIT-kit contains (1) an extension to a 3D editor for personalizing the form factor of electrode arrays and electrode distribution, (2) a customized EIT sensing motherboard for performing the measurements, (3) a microcontroller library that automates signal calibration and facilitates data collection, and (4) an image reconstruction library for mobile devices for interpolating and visualizing the measured data. Together, these EIT-kit components allow for applications that require 2- or 4-terminal setups, up to 64 electrodes, and single or multiple (up to four) electrode arrays simultaneously.
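To make the inject-and-measure cycle concrete, the sketch below enumerates drive/measure electrode pairs for the classic adjacent ("neighboring") EIT protocol. This is a generic textbook pattern, not necessarily the scheme EIT-kit implements:

```python
def adjacent_pattern(n_electrodes=16):
    """Enumerate (drive pair, measure pair) combinations for the adjacent
    EIT protocol: inject current between each adjacent electrode pair, then
    measure voltage across every other adjacent pair that does not share an
    electrode with the injection."""
    frames = []
    for d in range(n_electrodes):
        drive = (d, (d + 1) % n_electrodes)
        for m in range(n_electrodes):
            meas = (m, (m + 1) % n_electrodes)
            if set(drive) & set(meas):
                continue  # skip pairs overlapping the drive electrodes
            frames.append((drive, meas))
    return frames

frames = adjacent_pattern(16)
print(len(frames))  # 16 drive pairs x 13 valid measurement pairs = 208
```

Each resulting voltage frame feeds the interpolation-based image reconstruction described above.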
Biography: Junyi Zhu is a PhD candidate in the HCIE Group at MIT CSAIL, working with Professor Stefanie Mueller. He works at the intersection of personal fabrication and health sensing. His recent research focuses on creating personal health sensing devices with rapid function prototyping techniques and novel sensing technologies. Prior to MIT, he received his Bachelor's degree from the University of Washington. He is a 2017 Seneff-Zue Fellow and a 2021 Thomas G. Stockham, Jr. (1955) and Bernard (Ben) Gold Fellow.
Janet E. Sorrells
“Computational photon counting for fast fluorescence lifetime imaging microscopy using single- and multi-photon peak event detection”
Time: 3:30 pm to 3:50 pm, February 23
Talk Abstract: Multiphoton fluorescence lifetime imaging microscopy (FLIM) is often limited by slow acquisition due to the low bandwidth of photon-counting and time-tagging analog electronics. Here, we present a solution for faster imaging and lower dead time by directly digitizing the amplified detector output and computationally determining photon counts via GPU-accelerated processing using our custom Single- and multi-photon PEak Event Detection (SPEED) algorithm. The SPEED algorithm maintains the fast acquisition of direct pulse sampling but recovers single-photon resolution, using thresholded local-maxima detection to temporally localize photon counts in the directly digitized data. Furthermore, using a hybrid photodetector (HPD) makes it possible to resolve peaks resulting from multiple photons arriving at the detector simultaneously, greatly increasing the maximum acceptable photon rate of the system. SPEED enables faster, more efficient imaging, opening up new opportunities for using FLIM to characterize rapid dynamics and enable high-throughput and high-dynamic-range imaging.
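The core thresholded local-maxima idea can be sketched as follows. This is a toy stand-in on a synthetic trace; the actual SPEED algorithm and its GPU implementation differ in detail:

```python
import numpy as np

def count_photons(trace, threshold):
    """Count peak events in a directly digitized detector trace: a sample
    is a peak event if it exceeds the threshold and both of its neighbors
    (thresholded local-maximum detection)."""
    above = trace > threshold
    peak = (trace[1:-1] > trace[:-2]) & (trace[1:-1] >= trace[2:])
    return int(np.sum(above[1:-1] & peak))

# Two well-separated single-photon pulses riding on small baseline ripple:
t = np.arange(200)
trace = (np.exp(-0.5 * ((t - 60) / 2.0) ** 2)
         + np.exp(-0.5 * ((t - 140) / 2.0) ** 2)
         + 0.01 * np.sin(0.3 * t))
print(count_photons(trace, threshold=0.5))  # -> 2
```

In the multi-photon setting described above, an HPD's quantized pulse amplitudes additionally let each detected peak be assigned an integer photon number rather than a single count.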
Biography: Janet Sorrells is a fourth-year student in the Bioengineering PhD program at the University of Illinois at Urbana-Champaign. Her research focuses on using novel methods in nonlinear optical microscopy for label-free imaging of live cells and organisms.
“Phase retrieval problems in optical machine learning and imaging”
Time: 3:50 pm to 4:10 pm, February 23
Talk Abstract: We encounter phase retrieval problems in a variety of applications such as optical machine learning and scientific imaging. In these problems we can only measure the magnitude of complex-valued measurements and we lose important phase information. For example, in optical computing, a known input signal vector is shined through a scattering medium which acts as a random matrix, and we can only measure the magnitude of this matrix-vector product. Recovering the measurement phase would give us the matrix-vector product and enable rapid machine learning computations. On the other hand, in image reconstruction, the input signal is unknown and we aim to recover it from experimentally measured magnitude measurements. This is further challenging because the optical scattering medium may be unknown. In this talk we show how we can recover the measurement phase to enable optically computed machine learning. Next, with the recovered measurement phase, we can rapidly calibrate the scattering medium. Lastly, we reconstruct images by developing a method to account for errors in the calibrated medium. We demonstrate the performance of our methods by performing experiments on real optical hardware.
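For the measurement model y = |Ax| described above, a generic baseline can be sketched as follows: spectral initialization followed by Gerchberg-Saxton-style alternating projections. This is a textbook approach for simulated random media, not the speaker's method:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 32, 256                       # signal length, 8x oversampling
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = np.abs(A @ x_true)               # magnitude-only measurements

# Spectral initialization: top eigenvector of A^H diag(y^2) A.
B = (A.conj().T * y**2) @ A / m
x_hat = np.linalg.eigh(B)[1][:, -1]

# Alternating projections: restore the measured magnitudes, then enforce
# least-squares consistency with the forward operator A.
A_pinv = np.linalg.pinv(A)
for _ in range(300):
    z = y * np.exp(1j * np.angle(A @ x_hat))  # impose magnitudes
    x_hat = A_pinv @ z                        # project onto range of A

# Recovery is only possible up to a global phase, so align before comparing:
c = np.vdot(x_hat, x_true)
err = np.linalg.norm(x_true - (c / abs(c)) * x_hat) / np.linalg.norm(x_true)
print(f"relative error: {err:.2e}")
```

In the talk's setting the extra difficulty is that A itself (the scattering medium) must be calibrated from data rather than known exactly as it is here.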
Biography: Sidharth Gupta is pursuing a PhD in electrical and computer engineering at the University of Illinois at Urbana-Champaign, USA, where he works on signal processing and machine learning approaches for image reconstruction and phase retrieval problems. He received BA and MEng degrees from the University of Cambridge, UK. Between Cambridge and Illinois, he worked at Samsung Electronics in Seoul, South Korea. During his PhD he has been a Research Intern at IBM and at Microsoft.
“An Adversarial Learning Based Approach for 2D Unknown View Tomography”
Time: 4:10 pm to 4:30 pm, February 23
Talk Abstract: The goal of 2D tomography is to recover an image given its projections from various views. It is often presumed that the viewing angles associated with the projections are known in advance. In certain situations, however, these angles are known only approximately or are completely unknown. It then becomes more challenging to reconstruct the image from a collection of random projections with unknown viewing directions. We propose an adversarial learning based approach to recover the image and the viewing angle distribution by matching the empirical distribution of the measurements with the generated data. Fitting the distributions is achieved by solving a min-max game between a generator and a critic based on the Wasserstein generative adversarial network structure. To accommodate the update of the viewing angle distribution through gradient backpropagation, we approximate the loss using the Gumbel-Softmax reparameterization of samples from discrete distributions. Our theoretical analysis verifies the unique recovery of the image and the projection distribution, up to a rotation and reflection, upon convergence. Our extensive numerical experiments showcase the potential of our method to accurately recover the image and the viewing angle distribution under noise contamination.
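The Gumbel-Softmax reparameterization mentioned above can be sketched in isolation. This is the standard construction; the temperature and the example view-angle distribution below are illustrative:

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Draw a relaxed (differentiable) sample from a discrete distribution:
    add Gumbel(0, 1) noise to the logits and apply a temperature-tau
    softmax. As tau -> 0 the sample approaches a one-hot vector."""
    g = -np.log(-np.log(rng.random(logits.shape)))  # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z = z - z.max()                                  # numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
logits = np.log(np.array([0.2, 0.5, 0.3]))  # example view-angle probabilities
sample = gumbel_softmax(logits, tau=0.1, rng=rng)
print(sample.round(3))  # nearly one-hot; argmax follows the distribution
```

Because the sample is a smooth function of the logits, gradients can flow back into the viewing angle distribution during the min-max training described above.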
Biography: Mona Zehni is a PhD student at CSL and ECE department in UIUC, working with Prof. Zhizhen Zhao and Prof. Minh Do. Her research focus spans the areas of computational imaging, machine learning and computer vision.
“Neural Fields for Dynamic Imaging”
Time: 4:30 pm to 4:50 pm, February 23
Talk Abstract: Dynamic imaging is essential for analyzing various biological systems but faces two challenges: data incompleteness and computational burden. High frame rates require severe undersampling, leading to data incompleteness: multiple images may be compatible with the data, requiring regularization to ensure a unique reconstruction. Computational and memory requirements are particularly burdensome for three-dimensional dynamic imaging at high resolution. Exploiting redundancies in the object's spatiotemporal features is key to addressing both challenges. This contribution investigates neural fields to model the sought-after dynamic object. Neural fields are a particular class of neural networks that represent the dynamic object as a continuous function, avoiding the burden of storing a full-resolution image at each time frame. The neural field representation reduces image reconstruction to estimating network parameters via a nonlinear optimization problem. The neural field can then be evaluated at arbitrary locations in space and time, allowing for high-resolution rendering of the object. Key advantages of the proposed approach are that neural fields automatically learn redundancies in the object to regularize the reconstruction and significantly reduce memory requirements. The feasibility of the proposed framework is illustrated with an application to dynamic image reconstruction from severely undersampled circular Radon transform data.
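The representation itself can be sketched with a toy coordinate network. The architecture and sizes here are invented; practical neural fields use positional encodings and are trained through the imaging operator rather than left at random initialization:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny coordinate MLP mapping (x, y, t) -> intensity. The whole dynamic
# object is stored as these few weight arrays, not as a stack of frames.
W1 = rng.standard_normal((3, 64)) * 0.5
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 1)) * 0.5
b2 = np.zeros(1)

def neural_field(coords):
    """Evaluate the field at arbitrary (x, y, t) coordinates, shape (N, 3)."""
    h = np.tanh(coords @ W1 + b1)
    return (h @ W2 + b2).ravel()

# The same compact parameter set renders any resolution at any time point:
xs, ys = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
t = 0.25
coords = np.stack([xs.ravel(), ys.ravel(), np.full(xs.size, t)], axis=1)
frame = neural_field(coords).reshape(128, 128)
print(frame.shape)  # a 128 x 128 frame rendered from ~320 parameters
```

Reconstruction then amounts to fitting W1, b1, W2, b2 so that the field, pushed through the (e.g. circular Radon) forward model, matches the undersampled measurements.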
Biography: Luke Lozenski (Student Member, SPIE) received his B.S. and M.S. degrees in Systems Engineering from Washington University in St. Louis, MO, USA in 2020. He is currently pursuing a Ph.D. degree in Systems Science and Mathematics at Washington University in St. Louis. His research focuses on solving computational imaging problems with complex physics-based models, including multispectral and dynamic photoacoustic tomography. This includes the development of scientific machine learning frameworks for solving inverse and image reconstruction problems. He is advised by Prof. Umberto Villa.
For more information, please contact the session chair, Varun Kelkar.