Stephen McLaughlin
Heriot-Watt University
Professor and Head of the School of Engineering and Physical Sciences
Talk title: Challenges in imaging and sensing in photon-starved regimes
How many photons per pixel do we need to construct an image? This apparently simple question is surprisingly difficult to answer, as it depends on what you want to use the image for. Computational imaging and sensing combines measurement and computational methods, often under challenging conditions in which the measurements are few in number, the information of interest is only indirectly observed, or the observation environment is otherwise adverse. The recent surge in sensor development, together with a new wave of algorithms enabling on-chip, scalable and robust data processing, has spurred a burst of activity with notable results in low-flux imaging and sensing.
In this talk, I will provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for a range of applications in extreme conditions. The applications considered range from the identification of radionuclide signatures from weak sources in the presence of a high radiation background to single-photon lidar 3D imaging of complex outdoor scenes in broad daylight at distances of up to 320 metres.
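As a toy illustration of why daylight single-photon lidar is hard, the sketch below (my own minimal example, not the speaker's actual pipeline) simulates one pixel of a time-correlated single-photon-counting measurement: a weak return pulse buried in a flat ambient background, Poisson photon-counting noise, and matched filtering to recover the time of flight. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

C = 3e8                    # speed of light, m/s
bin_width = 1e-9           # 1 ns timing bins
n_bins = 2200              # covers roughly 330 m of round-trip range
true_depth = 320.0         # metres, as in the abstract's outdoor example
true_bin = int(2 * true_depth / C / bin_width)   # round-trip time -> bin index

# Expected counts per bin: a weak Gaussian return pulse plus a flat
# daylight background that dominates the total photon budget.
signal = 5.0 * np.exp(-0.5 * ((np.arange(n_bins) - true_bin) / 2.0) ** 2)
background = 0.5 * np.ones(n_bins)
hist = rng.poisson(signal + background)          # photon-counting (Poisson) noise

# Matched filter against the known pulse shape, then pick the peak.
pulse = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
score = np.convolve(hist - hist.mean(), pulse[::-1], mode="same")
est_bin = int(np.argmax(score))
est_depth = est_bin * bin_width * C / 2
print(f"estimated depth: {est_depth:.1f} m")     # close to 320 m
```

Even with only a handful of signal photons against a strong background, the matched filter localizes the return to within a bin or two; at a 1 ns bin width this corresponds to 15 cm range resolution.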
Stephen McLaughlin received the B.Sc. degree in Electronics and Electrical Engineering from the University of Glasgow in 1981 and the Ph.D. degree from the University of Edinburgh in 1990. From 1981 to 1986 he was a Development Engineer in industry.
In 1986 he joined the Dept. of Electronics and Electrical Engineering at the University of Edinburgh as a research fellow where he studied the performance of linear adaptive algorithms in high noise and nonstationary environments.
In 1988 he joined the academic staff at Edinburgh, and from 1991 until 2001 he held a Royal Society University Research Fellowship to study nonlinear signal processing techniques. In 2002 he was awarded a personal Chair in Electronic Communication Systems at the University of Edinburgh. In October 2011 he joined Heriot-Watt University as a Professor of Signal Processing and Head of the School of Engineering and Physical Sciences. His research interests are in statistical signal processing theory and its applications to biomedical, energy, imaging and communication systems. Prof. McLaughlin is a Fellow of the Royal Academy of Engineering, the Royal Society of Edinburgh, the Institution of Engineering and Technology and the IEEE, and is a EURASIP Fellow.
Michael Unser
EPFL
Professor and Director of EPFL's Biomedical Imaging Group
We present a unifying functional framework for the implementation and training of deep neural networks (DNNs) with free-form activation functions. To make the problem well posed, we constrain the shape of the trainable activations (neurons) by penalizing their second-order total variation. We prove that the optimal activations are adaptive piecewise-linear splines, which allows us to recast the problem as a parametric optimization.
We then specify some corresponding trainable B-spline-based activation units. These modules can be inserted in deep neural architectures and optimized efficiently using standard tools. We provide experimental results that demonstrate the benefit of our approach.
Joint work with Pakshal Bohra, Joaquim Campos, Harshit Gupta, Shayan Aziznejad.
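The construction described in the abstract can be sketched in a few lines of NumPy (my own illustrative code, not the authors' implementation): a free-form activation represented by its values on a uniform knot grid is exactly a linear-B-spline expansion, and its second-order total variation reduces to the l1 norm of the second finite differences of the coefficients, i.e. the sum of absolute slope changes at the knots.

```python
import numpy as np

knots = np.linspace(-3.0, 3.0, 13)      # uniform knot grid, step h = 0.5
c = np.maximum(knots, 0.0)              # initialize coefficients to ReLU values

def spline_activation(x, knots, c):
    """Evaluate the continuous piecewise-linear spline with values c at knots.

    Linear interpolation between knot values is equivalent to a linear
    B-spline expansion; inputs outside the grid are clamped to the end knots.
    """
    return np.interp(x, knots, c)

def tv2_penalty(c, h):
    """Second-order total variation: l1 norm of second finite differences.

    For a piecewise-linear spline this equals the sum of absolute slope
    changes at the knots, which is what promotes sparse (few-knot) activations.
    """
    return np.abs(np.diff(c, 2)).sum() / h

x = np.array([-1.0, 0.25, 2.0])
print(spline_activation(x, knots, c))   # matches ReLU: 0, 0.25, 2
print(tv2_penalty(c, h=0.5))            # ReLU has one unit slope change -> 1.0
```

In training, the coefficients c of each neuron would be optimized jointly with the network weights, with the TV(2) term added to the loss; here the ReLU initialization simply verifies that the representation reproduces a familiar activation.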
Michael Unser is professor and director of EPFL’s Biomedical Imaging Group, Lausanne, Switzerland. His primary area of investigation is biomedical image processing. He is internationally recognized for his research contributions to sampling theory, wavelets, the use of splines for image processing, stochastic processes, and computational bioimaging. He has published over 350 journal papers on those topics. He is the author, with P. Tafti, of the book “An Introduction to Sparse Stochastic Processes” (Cambridge University Press, 2014).
From 1985 to 1997, he was with the Biomedical Engineering and Instrumentation Program, National Institutes of Health, Bethesda USA, conducting research on bioimaging.
Dr. Unser has served on the editorial boards of most of the primary journals in his field, including the IEEE Transactions on Medical Imaging (Associate Editor-in-Chief, 2003-2005), IEEE Transactions on Image Processing, Proceedings of the IEEE, and the SIAM Journal on Imaging Sciences. He is the founding chair of the technical committee on Bio Imaging and Signal Processing (BISP) of the IEEE Signal Processing Society.
Rebecca Willett
University of Chicago
Professor of Statistics and Computer Science
Many challenging image processing tasks can be described by an ill-posed linear inverse problem: deblurring, deconvolution, inpainting, compressed sensing, and superresolution all lie in this framework. Traditional inverse problem solvers minimize a cost function consisting of a data-fit term, which measures how well an image matches the observations, and a regularizer, which reflects prior knowledge and promotes images with desirable properties like smoothness. Recent advances in machine learning and image processing have illustrated that it is often possible to learn a regularizer from training data that can outperform more traditional regularizers. In this talk, I describe the central prevailing themes of this emerging area and present a taxonomy that can be used to categorize different problems and reconstruction methods. We will also explore the lack of robustness of such methods to misspecification of the forward model: if at test time the forward model varies (even slightly) from the one the network was trained on, the network performance can degrade substantially. I will describe novel retraining procedures that adapt the network to reconstruct measurements from a perturbed forward model, even without full knowledge of the perturbation.
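The variational template described above can be made concrete with a classical, non-learned regularizer. The NumPy sketch below (illustrative only; all values invented) sets up an ill-conditioned 1D deblurring problem and solves min_x ||Ax - y||^2 + lam ||Dx||^2 in closed form, with a first-difference smoothness penalty standing in for the learned regularizers discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Forward model A: blur with a short moving-average kernel (ill-conditioned).
kernel = np.ones(7) / 7.0
A = np.array([np.convolve(row, kernel, mode="same") for row in np.eye(n)])

x_true = np.zeros(n)
x_true[30:60] = 1.0                     # piecewise-constant test signal
y = A @ x_true + 0.01 * rng.standard_normal(n)   # noisy blurred observation

# Regularizer: D is the first-difference operator; lam trades data fit
# against smoothness. The minimizer satisfies the normal equations below.
D = np.diff(np.eye(n), axis=0)
lam = 0.05
x_hat = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

A learned regularizer would replace the quadratic `lam * ||Dx||^2` term with a trained network or prior; the robustness question raised in the abstract then amounts to asking what happens when the `A` used at test time differs from the one assumed during training.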
She received the National Science Foundation CAREER Award in 2007, was a member of the DARPA Computer Science Study Group, and received an Air Force Office of Scientific Research Young Investigator Program award in 2010. She directs the Air Force Research Lab University Center of Excellence on Machine Learning, and serves on the Scientific Advisory Committee for the National Science Foundation’s Institute for Mathematical and Statistical Innovation and on the AI for Science committee for the US Department of Energy’s Advanced Scientific Computing Research program. She completed her PhD in Electrical and Computer Engineering at Rice University in 2005 and was an Assistant and then tenured Associate Professor of Electrical and Computer Engineering at Duke University from 2005 to 2013. She was an Associate Professor of Electrical and Computer Engineering, Harvey D. Spangler Faculty Scholar, and Fellow of the Wisconsin Institutes for Discovery at the University of Wisconsin-Madison from 2013 to 2018.
Robert W. Heath Jr.
North Carolina State University
In the last 20 years, MIMO wireless communication has gone from concept to commercial deployments in millions of devices. Two flavours of MIMO — massive and mmWave — are key components of 5G. In this talk, I will examine aspects of MIMO communication that may influence the next decade of wireless communications. I will start by highlighting, from a signal processing perspective, what was interesting about taking MIMO to higher carrier frequencies at mmWave. Then I will speculate about forthcoming directions for MIMO communication research. I will discuss the implications of pushing beyond mmWave, from around 100 GHz up to terahertz frequencies, including the implications for channel assumptions and array architectures. I will make the case that it may be relevant to revisit signals from a circuits perspective, to build physically consistent MIMO models that work with large bandwidths. Finally, I will talk about how other advancements in circuits, antennas, and materials may change the models and assumptions that are used in MIMO signal processing.
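The multiplexing gain that drove the 20-year MIMO story can be seen in a few lines using the standard textbook model (my own back-of-envelope sketch, not material from the talk): the ergodic capacity of an i.i.d. Rayleigh-fading channel, C = E[log2 det(I + (SNR/Nt) H H^H)] bits/s/Hz, scales roughly linearly with min(Nt, Nr).

```python
import numpy as np

rng = np.random.default_rng(0)

def ergodic_capacity(nt, nr, snr_linear, trials=2000):
    """Monte Carlo estimate of ergodic MIMO capacity in bits/s/Hz."""
    caps = []
    for _ in range(trials):
        # i.i.d. complex Gaussian channel, unit average power per entry.
        H = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        G = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
        _, logdet = np.linalg.slogdet(G)   # G is Hermitian positive definite
        caps.append(logdet / np.log(2))    # natural log -> bits
    return float(np.mean(caps))

snr = 10.0   # 10 dB
c1 = ergodic_capacity(1, 1, snr)
c4 = ergodic_capacity(4, 4, snr)
print(c1, c4)   # the 4x4 link carries several times the 1x1 capacity
```

The open questions raised in the talk are precisely about where this idealized model breaks down: above 100 GHz the channel, array architecture, and circuit nonidealities no longer justify the clean H used here.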
Short Biography: Robert W. Heath Jr. received the Ph.D. in EE from Stanford University. He is a Distinguished Professor at North Carolina State University.