18 – 22 January 2021
Virtual EUSIPCO 2020
The organizing committee of EUSIPCO 2020 has decided that EUSIPCO 2020 will be a fully virtual conference due to the ongoing COVID-19 pandemic and the resulting travel restrictions.
We are excited to be able to provide a virtual venue for EUSIPCO 2020 and hope you will join us and learn about the latest developments in research and technology for signal processing.
VIRTUAL FORMAT
- The virtual EUSIPCO 2020 will have a “live” feel. Each presentation will be given as a 15-minute video at a pre-defined timeslot, followed by a 5-minute live Q&A session in the conference's virtual portal.
- Presentation materials will be made available to conference attendees (i.e. those registered for the conference) for two months starting on January 18, 2021.
- Poster presentations will be given live. There will be a virtual poster area with break-out rooms where you can visit the individual posters and talk to the poster presenters. In addition, pre-recorded 15-minute videos will be made available offline.
- The program will be made available soon. All sessions will follow Central European Time (CET).
PLENARY SPEAKERS
For more background information on the speakers, click here.
How many photons per pixel do we need to construct an image? This apparently simple question is surprisingly hard to answer, as it depends on what you want to use the image for. Computational imaging and sensing combines measurement and computational methods, typically when the measurements are few in number, the information of interest is only indirectly observed, or the observation conditions are challenging. The recent surge in the development of sensors, together with a new wave of algorithms allowing on-chip, scalable and robust data processing, has spurred activity with notable results in the domain of low-flux imaging and sensing.
In this talk, I will provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed in extreme conditions, with applications ranging from the identification of radionuclide signatures from weak sources in the presence of a high radiation background to single-photon lidar 3D imaging of complex outdoor scenes in broad daylight from distances of up to 320 metres.
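To make the photon-budget question concrete, here is a minimal sketch (not from the talk) that simulates a Poisson-limited detector in NumPy. The synthetic scene, the photon budgets, and the PSNR figure of merit are all illustrative assumptions.

```python
import numpy as np

# Sketch: how a naive intensity estimate degrades with the photon budget.
# Each pixel's photon count is Poisson-distributed with a mean equal to
# the true intensity, scaled so the image averages `photons_per_pixel`.

rng = np.random.default_rng(0)

def simulate_low_flux(image, photons_per_pixel):
    """Return a Poisson-noisy observation of `image` (values in [0, 1])."""
    scale = photons_per_pixel / max(image.mean(), 1e-12)
    counts = rng.poisson(image * scale)  # integer photon counts per pixel
    return counts / scale                # naive intensity estimate

def psnr(ref, est):
    """Peak signal-to-noise ratio in dB, assuming a peak value of 1."""
    return 10 * np.log10(1.0 / np.mean((ref - est) ** 2))

# Smooth synthetic scene with values in [0, 1]
u, v = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
scene = 0.5 + 0.5 * np.sin(6 * u) * np.cos(4 * v)
scene /= scene.max()

for ppp in [0.1, 1.0, 10.0, 100.0]:
    obs = simulate_low_flux(scene, ppp)
    print(f"{ppp:6.1f} photons/pixel -> PSNR {psnr(scene, obs):5.1f} dB")
```

Because the counts are Poisson, the per-pixel signal-to-noise ratio grows only as the square root of the photon budget, which is why reconstruction algorithms, rather than longer exposures alone, carry much of the burden in low-flux imaging.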
We present a unifying functional framework for the implementation and training of deep neural networks (DNNs) with free-form activation functions. To make the problem well posed, we constrain the shape of the trainable activations (neurons) by penalizing their second-order total variation. We prove that the optimal activations are adaptive piecewise-linear splines, which allows us to recast the problem as a parametric optimization.
We then specify some corresponding trainable B-spline-based activation units. These modules can be inserted in deep neural architectures and optimized efficiently using standard tools. We provide experimental results that demonstrate the benefit of our approach.
Joint work with Pakshal Bohra, Joaquim Campos, Harshit Gupta, and Shayan Aziznejad.
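As a rough illustration of this idea, the sketch below (an assumption-laden reimplementation in PyTorch, not the authors' released code) defines a trainable piecewise-linear spline activation on a uniform knot grid. For such a spline, the second-order total-variation penalty reduces to an L1 norm on the second finite differences of the coefficients.

```python
import torch
import torch.nn as nn

class LinearSplineActivation(nn.Module):
    """Trainable piecewise-linear spline activation on a uniform knot grid."""

    def __init__(self, num_knots=21, x_range=3.0):
        super().__init__()
        grid = torch.linspace(-x_range, x_range, num_knots)
        self.register_buffer("grid", grid)
        self.step = 2 * x_range / (num_knots - 1)
        # Initialize the spline to a ReLU sampled at the knots
        self.coeffs = nn.Parameter(torch.relu(grid).clone())

    def forward(self, x):
        # Clamp inputs to the grid, then interpolate linearly between knots
        x = x.clamp(float(self.grid[0]), float(self.grid[-1]))
        idx = ((x - self.grid[0]) / self.step).floor().long()
        idx = idx.clamp(0, self.grid.numel() - 2)
        frac = (x - self.grid[idx]) / self.step
        return (1 - frac) * self.coeffs[idx] + frac * self.coeffs[idx + 1]

    def tv2_penalty(self):
        # TV(2) of a linear spline = L1 norm of second finite differences;
        # minimizing it sparsifies the knots, leaving few linear pieces
        d2 = self.coeffs[2:] - 2 * self.coeffs[1:-1] + self.coeffs[:-2]
        return d2.abs().sum()
```

In training, one would add a weighted `tv2_penalty()` term for each such activation to the task loss; the L1 geometry drives most second differences to zero, so the learned activation ends up with only a few active linear pieces, consistent with the sparsity result stated in the abstract.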
Many challenging image processing tasks can be described by an ill-posed linear inverse problem: deblurring, deconvolution, inpainting, compressed sensing, and super-resolution all lie in this framework. Traditional inverse problem solvers minimize a cost function consisting of a data-fit term, which measures how well an image matches the observations, and a regularizer, which reflects prior knowledge and promotes images with desirable properties like smoothness. Recent advances in machine learning and image processing have shown that it is often possible to learn a regularizer from training data that outperforms more traditional regularizers. In this talk, I will describe the central prevailing themes of this emerging area and present a taxonomy that can be used to categorize different problems and reconstruction methods. We will also explore the lack of robustness of such methods to misspecification of the forward model: if the forward model at test time varies (even slightly) from the one the network was trained on, the network's performance can degrade substantially. I will describe novel retraining procedures that adapt the network to reconstruct measurements from a perturbed forward model, even without full knowledge of the perturbation.
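For context, here is a minimal sketch of the classical data-fit-plus-regularizer formulation that learned regularizers aim to improve on. The circular blur kernel, the quadratic smoothness regularizer, and the step size and weight below are illustrative assumptions, not choices from the talk.

```python
import numpy as np

# 1-D deblurring by gradient descent on
#   f(x) = 0.5 * ||A x - y||^2 + lam * ||D x||^2
# where A is a circular blur and D a first-difference (smoothness) operator.

rng = np.random.default_rng(1)
n = 200

# Blur operator: circular convolution with a 5-tap box kernel
kernel = np.zeros(n)
kernel[:5] = 1 / 5
A = np.stack([np.roll(kernel, k - 2) for k in range(n)])

# First-difference operator as a hand-crafted smoothness regularizer
D = np.eye(n, k=1) - np.eye(n)

# Piecewise-constant ground truth and noisy blurred observation
x_true = np.zeros(n)
x_true[60:90], x_true[120:160] = 1.0, 0.5
y = A @ x_true + 0.01 * rng.standard_normal(n)

lam, step = 0.05, 0.5
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y) + 2 * lam * D.T @ (D @ x)
    x -= step * grad

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

A learned regularizer replaces the quadratic `||D x||^2` term (or the corresponding gradient step) with a data-driven one, which is also where the forward-model sensitivity discussed in the talk enters: the network is trained against one `A` and must be adapted when `A` changes at test time.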
In the last 20 years, MIMO wireless communication has gone from concept to commercial deployment in millions of devices. Two flavours of MIMO, massive and mmWave, are key components of 5G. In this talk, I will examine aspects of MIMO communication that may influence the next decade of wireless communications. I will start by highlighting, from a signal processing perspective, what was interesting about taking MIMO to higher carrier frequencies at mmWave. Then I will speculate about forthcoming directions for MIMO communication research. I will discuss the implications of going beyond mmWave, from about 100 GHz up to terahertz frequencies, including implications for channel assumptions and array architectures. I will make the case that it may be relevant to revisit signals from a circuits perspective, to build physically consistent MIMO models that work with large bandwidths. Finally, I will talk about how other advances in circuits, antennas, and materials may change the models and assumptions used in MIMO signal processing.