Agenda

PhD Thesis Defence

Image formation for future radio telescopes

Shahrzad Naghibzadeh

Fundamental scientific questions such as how the first stars were formed or how the universe came into existence and evolved to its present state drive us to observe weak radio signals impinging on the earth from the early days of the universe. Over the last century, radio astronomy has advanced enormously. Important discoveries about the formation of various celestial objects such as pulsars, neutron stars, black holes, radio galaxies and quasars are the result of radio astronomical observations. To study celestial objects and the astrophysical processes that are responsible for their radio emissions, images must be formed. This is done with the help of large radio telescope arrays.

Next-generation radio telescopes such as the Low Frequency Array (LOFAR) and the Square Kilometre Array (SKA) provide ever more observational evidence for the study of the radio sky by generating very high-resolution and high-fidelity images. In this dissertation, we study radio astronomical imaging as the problem of estimating the sky spatial intensity distribution over the field of view of the radio telescope array from incomplete and noisy array data. The increased sensitivity, resolution and sky coverage of the new instruments pose additional challenges to the current radio astronomical imaging pipeline. In particular, the large amount of data captured by the radio telescopes cannot be stored and needs to be processed in quasi-real time.

Many pixel-based imaging algorithms, such as the widely used CLEAN [3] algorithm, are not scalable to the size of the required images and perform very slowly in high-resolution scenarios. Therefore, there is an urgent need for new, efficient imaging algorithms. Moreover, regardless of the amount of collected data, there is an inherent loss of information in the measurement process due to physical limitations. Therefore, to recover physically meaningful images, additional information in the form of constraints and regularizing assumptions is necessary. The central objective of this dissertation is to introduce advanced algebraic techniques together with custom-made regularization schemes to speed up the image formation pipeline of the next generation of radio telescopes.

Signal processing provides powerful tools to address these issues. In the current work, starting from a signal processing model of the radio astronomical observation process, we first analyze the imaging system using tools from numerical linear algebra, sampling, interpolation and filtering theory to investigate the inherent loss of information in the measurement process. Based on these results, we show that the imaging problem in radio astronomy is highly ill-posed and that regularization is necessary to find a stable and physically meaningful image. We continue by deriving an adequate model for the imaging problem in radio interferometry in the context of statistical estimation theory. Moreover, we introduce a framework to incorporate regularizing assumptions into the measurement model by borrowing the concept of preconditioning from numerical linear algebra.

Radio emissions observed by radio telescopes appear either as distributed radiation from diffuse media or as compact emission from isolated point-like sources. Based on this observation, different source models need to be applied in the imaging problem formulation to obtain the best reconstruction performance. Due to the ill-posedness of the imaging problem in radio astronomy, proper modeling of the source emissions and regularizing assumptions are of utmost importance to guarantee a reliable image reconstruction. We integrate these assumptions by implementing a multi-basis dictionary based on the proposed preconditioning formalism.

In traditional radio astronomical imaging methods, constraints and prior models, such as positivity and sparsity, are applied to the complete image. However, large radio sky images usually consist of individual source occupancy regions in a large, mostly empty background. Based on this observation, we propose to split the field of view into multiple regions of source occupancy. Leveraging a stochastic primal-dual algorithm, we apply adequate regularization to each facet. We demonstrate the merits of facet-based regularization in terms of memory savings and computation time through realistic simulations.

The formulation of the radio astronomical imaging problem has a direct consequence for the radio sky estimation performance. We define the astronomical imaging problem in a Bayesian-inspired regularized maximum likelihood formulation. Based on this formalism, we develop a general algorithmic framework that can handle diffuse as well as compact source models. Leveraging the linearity of the radio astronomical imaging problem, we propose to embed the regularization operator directly into the system via right preconditioning. We employ an iterative method based on projections onto Krylov subspaces to solve the resulting system. The proposed algorithm is named PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA). We motivate the use of a beamformed image as an efficient regularizing prior-conditioner for diffuse emission recovery. Different sparsity-based regularization priors are incorporated in the algorithmic framework by generalizing the core algorithm with iterative re-weighting schemes.
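
To make the prior-conditioning idea concrete, the sketch below shows a generic right prior-conditioned least-squares solve with a Krylov method (LSQR). It is a toy illustration under assumed names and data, not the thesis implementation: the measurement matrix M, the visibilities y, the beamformed image b and the function prifira_like are all stand-ins.

    # Toy sketch of right prior-conditioning with a Krylov solver (LSQR).
    # M: measurement matrix mapping image to visibilities (stand-in),
    # y: observed visibilities, b: beamformed ("dirty") image used as a
    # diagonal prior-conditioner. Not the thesis code.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, lsqr

    def prifira_like(M, y, b, n_iter=50):
        r = np.sqrt(np.maximum(b, 0.0))                     # diagonal prior-conditioner R
        A = LinearOperator(M.shape,
                           matvec=lambda z: M @ (r * z),    # M R z
                           rmatvec=lambda u: r * (M.T @ u), # R^T M^T u
                           dtype=float)
        z = lsqr(A, y, iter_lim=n_iter)[0]                  # Krylov iterations on the
        return r * z                                        # prior-conditioned system; x = R z

    # Small synthetic example (random stand-in for a real telescope model)
    rng = np.random.default_rng(0)
    M = rng.standard_normal((200, 100))
    x_true = np.abs(rng.standard_normal(100))
    y = M @ x_true + 0.01 * rng.standard_normal(200)
    b = np.abs(M.T @ y)                                     # crude beamformed image
    x_hat = prifira_like(M, y, b)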

We evaluate the performance of PRIFIRA on simulated measurements as well as astronomical data and compare it with state-of-the-art imaging methods. We conclude that the proposed method achieves reconstruction quality competitive with current techniques while remaining flexible with respect to different regularization schemes. Moreover, we show that the imaging efficiency can be greatly improved by exploiting Krylov subspace methods together with an appropriate noise-based stopping criterion.

Based on the results of this thesis we conclude that, with the help of advanced techniques from signal processing and numerical linear algebra, customized algorithms can be designed to tackle some of the challenges in imaging with the next generation of radio telescopes. We note that since radio interferometric imaging can be considered an instance of the broad class of inverse imaging problems, the numerical techniques as well as the regularization methods developed in this dissertation carry over directly to many other imaging application areas, such as biomedical and geophysical/seismic imaging.



Signal Processing Seminar

Image Reconstruction Using Training Images

Per Christian Hansen
Technical University of Denmark

Priors are essential for computing stable solutions to ill-posed problems, and they take many different forms.  Here we consider priors in the form of cross-section images of the object, and this information must be used in a fast, reliable, and computationally efficient manner. We describe an algorithmic framework for this: From a set of training images we use techniques from machine learning to form a dictionary that captures the desired features, and we then compute a reconstruction with a sparse representation in this dictionary. We describe how to stably compute the dictionary through a regularized non-negative matrix factorization, and we study how this dictionary affects the reconstruction quality. Simulations show that for textural images our approach is superior to other methods used for limited-data problems.
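
As a rough illustration of the pipeline described above, the sketch below learns a small nonnegative dictionary from training-image patches with a regularized NMF and then codes a new patch in that dictionary. It is only a schematic outline under assumed choices (random stand-in patches, scikit-learn's NMF, nonnegative least-squares coding), not the speaker's algorithm, and it omits the tomographic forward operator that the actual reconstruction involves.

    # Schematic only: learn a dictionary from training patches (regularized NMF)
    # and represent a new patch in it; stand-in data, not the speaker's method.
    import numpy as np
    from sklearn.decomposition import NMF            # requires scikit-learn >= 1.0 for alpha_W
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    patches = rng.random((500, 64))                  # stand-in for 8x8 training-image patches
    nmf = NMF(n_components=32, alpha_W=0.1, l1_ratio=0.5, max_iter=500, init='nndsvda')
    W = nmf.fit_transform(patches)                   # per-patch coefficients (not used further)
    D = nmf.components_                              # learned dictionary: 32 atoms (rows)

    new_patch = rng.random(64)
    coef, _ = nnls(D.T, new_patch)                   # nonnegative coding, typically sparse
    approx = D.T @ coef                              # patch approximated in the dictionary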

About the speaker

Professor Per Christian Hansen has worked with numerical regularization algorithms for 30 years, and he has published 4 books and 100+ papers in leading journals. He has developed a number of software packages, of which Regularization Tools (now in its 4th version) is a popular toolbox for analysis and solution of discrete inverse problems. His current research projects involve algorithms for tomographic reconstruction and iterative image deblurring algorithms. He is a SIAM fellow in recognition of his work on inverse problems and regularization.



MSc BME Thesis Presentation

The effect of dopamine release on electrical neural activity in the prefrontal cortex

Jack Tchimino

How can certain oscillations be detected from the measured brain signals?



Signal Processing Seminar

Tensor-based blind source separation in epileptic EEG and fMRI

Borbala Hunyadi



Joint optimization of milk intake and sleep time for babies under one month old

Elena

(daughter of Jorge)


Signal Processing Seminar

Biomedical signal processing/wavefield imaging

Patrick Fuchs



Signal Processing Seminar

Signal Processing Mini-Symposium

Hagit Messer, KVS Hari, Andrea Simonetto

Talk 1: Capitalizing on Cellular Technology - Opportunities and Challenges for Near-Ground Weather Monitoring

Prof. Hagit Messer
School of Electrical Engineering, Tel Aviv University, Israel

Talk 2: Spatial Modulation Techniques in Wireless Systems

Prof. K.V.S. Hari
Dept. of ECE, Indian Institute of Science, Bangalore

Talk 3: Time-varying optimization: algorithms and engineering applications

Dr. Andrea Simonetto
IBM Research Ireland, Dublin, Ireland



PhD Thesis Defence

Spatio-temporal environment monitoring leveraging sparsity

Venkat Roy

Linking sensor measurements to unknown field intensities, with application to rainfall monitoring from cellular networks



Signal Processing Seminar

Manufacturing defect detection

Aydin Rajabzadeh



Signal Processing Seminar

Machine learning in physical sciences

Peter Gerstoft
UC San Diego

Machine learning (ML) is booming thanks to efforts promoted by Google. However, ML also has uses in the physical sciences. I start with a general overview of ML for supervised/unsupervised learning. Then I will focus on my applications of ML to array processing in seismology and ocean acoustics, including source localization using neural networks or graph processing. A final example uses ML-based tomography to obtain high-resolution subsurface geophysical structure in Long Beach, CA, from seismic noise recorded on a 5200-element array. This method exploits the dense sampling obtained by ambient noise processing on large arrays by learning a dictionary of local, or small-scale, geophysical features directly from the data.



MSc ME Thesis Presentation

Design Space Exploration of a Neuromorphic ECG Classification System using a Spiking Self-Organizing Map

Johan Mes

The Self-Organizing Map (SOM) is an unsupervised neural network topology that uses competitive learning for the classification of data. In this thesis we investigate the design space of a system built around such a topology based on Spiking Neural Networks (SNNs), and apply it to the classification of Electrocardiogram (ECG) beats. We present novel insights into the characterization of the SOM and its encapsulating system by exploring configuration parameters such as the learning rate, neuron models, potentiation and depression ratios, and synaptic conductivity parameters. This exploration is carried out through high-level architectural simulations of the system, whose SNN is developed with the aim of being implemented on power-efficient neuromorphic hardware.
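
For readers unfamiliar with the SOM, the sketch below shows the classical rate-based competitive-learning update (find the best-matching unit, then pull it and its grid neighbours towards the input). It is a generic textbook illustration, not the spiking implementation studied in the thesis; the grid size, learning rate and neighbourhood width are arbitrary.

    # Generic SOM update rule (rate-based illustration, not the spiking version).
    import numpy as np

    def som_step(weights, x, lr=0.1, sigma=1.0):
        # weights: (rows, cols, dim) map of neuron weight vectors; x: (dim,) input
        d = np.linalg.norm(weights - x, axis=2)            # distance of x to every neuron
        bmu = np.unravel_index(np.argmin(d), d.shape)      # best-matching unit (competition)
        rows, cols = np.indices(d.shape)
        grid_d2 = (rows - bmu[0])**2 + (cols - bmu[1])**2  # squared grid distance to the BMU
        h = np.exp(-grid_d2 / (2 * sigma**2))              # neighbourhood function
        return weights + lr * h[..., None] * (x - weights)

    rng = np.random.default_rng(0)
    som = rng.random((10, 10, 3))                          # 10x10 map, 3-dimensional inputs
    for sample in rng.random((1000, 3)):
        som = som_step(som, sample)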

Due to the amount of manual work needed to monitor and analyze ECG signals when diagnosing cardiovascular problems, and because cardiovascular disease is the leading cause of death in the world, an automated, real-time, and low-power detection and classification system is essential. Operating unsupervised and in real time, this system performs beat detection with an average TPR of 99.10% and a PPV of 99.58%, and classifies 500 detected beats with an EMDS of 0.0169 and a beat recognition percentage of 100%.


Signal Processing Seminar

Signal processing algorithms for acoustic vector sensors

Krishnaprasad Nambur Ramamohan


MSc SS Thesis Presentation

Phase estimation of recurring patterns in nonstationary signals

Rik van der Vlist

A phase estimation algorithm is presented to estimate the phase of a recurring pattern in a nonstationary signal. The signal is modelled by a template signal that represents one revolution of the recurring pattern; the frequency of this pattern can change at any time, and no assumptions about local stationarity are made. The algorithm uses a constrained maximum likelihood estimator (MLE) to estimate the phase of the recurring pattern in the time series. Using the dynamic programming techniques of the dynamic time warping (DTW) algorithm, the solution is found efficiently. The algorithm is applied to the digitization of meter readings from analog consumption meters.
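
As a rough illustration of the dynamic-programming ingredient, the sketch below aligns one template revolution to an observed segment with a basic DTW recursion; the warping path then indicates which template sample, and hence which phase, each observation corresponds to. This is a generic DTW example with toy signals, not the constrained MLE of the thesis.

    # Generic DTW alignment between a single-revolution template and a segment;
    # toy illustration only, not the thesis' constrained MLE phase estimator.
    import numpy as np

    def dtw_path(template, segment):
        n, m = len(template), len(segment)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (template[i - 1] - segment[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
        i, j, path = n, m, []                        # backtrack the optimal alignment
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return D[n, m], path[::-1]

    t = np.sin(2 * np.pi * np.arange(100) / 100)     # one revolution of the template
    s = np.sin(2 * np.pi * np.arange(130) / 130)     # same pattern, stretched in time
    dist, path = dtw_path(t, s)
    phase = [2 * np.pi * i / len(t) for i, j in path]  # phase assigned along the path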

As of today, analog consumption meters are still widely used to measure the consumption of gas, electricity and water. Often, smart home appliances use a simple reflective photosensor located on a rotating part of the meter to obtain information about the state of the consumption meter. The algorithm presented in this thesis accurately estimates the phase of the repeating pattern that occurs in the sensor observation when the meter rotates. Using this estimate, the signal of the photosensor can be converted into an estimate of the total resource consumption and the consumption rate.

The algorithm improves in accuracy over conventional methods based on peak detection, and is shown to work in cases where peak detection methods fail, for example signals without a distinctive peak or signals in which the recurring pattern is reversed. Furthermore, a template compression scheme is proposed to decrease the computational complexity of the algorithm. Different time series compression methods are applied within the algorithm and evaluated in terms of performance.



MSc SS Thesis Presentation

Snoring Sound Production and Modelling; Acoustic Tube Modelling

Mert Ergin

The thesis project is aimed at designing an unobtrusive method to determine, from snoring sounds, the location and severity of the obstruction for persons who have benign snoring and who are residing in their natural sleep environment.

Similar to speech generation, which is enabled by opening and closing of the vocal cords, the sound of snoring consists of a series of impulses caused by the rapid obstruction and reopening of parts of the upper airway. By exploiting this similarity, we try to explain snoring sound production using analysis and synthesis techniques that have been applied to speech. In particular, 1-D tube models and linear prediction have been studied and employed. 

Audio recordings were available from a trial on anti-snoring device efficacy. These contained snoring sounds of multiple participants over multiple nights and were used for analysis. Similar to speech analysis, the Iterative Adaptive Inverse Filtering (IAIF) method has been used to find the excitation flow and airway tract transfer functions from snoring sounds during the inhalation period. It was found that the linear prediction part of IAIF is not suitable for directly determining the cross-sectional area of the upper airway from snoring sounds.

A model has been built using acoustic tube theory with the purpose of reflecting the physical realities of the upper airways. The tube model allows modeling flow obstructions at arbitrary positions in the upper airways. The transfer spectra of the acoustic tube with obstructions at various positions can be compared to the tube model resulting from IAIF using a gain-independent Itakura spectral distance measure, and the best match can be determined. This alternative approach was found to indicate obstruction positions that vary across persons. Further research is required to establish whether the locations determined in this way adequately match the true cause of snoring and constitute a viable way of generating advice for anti-snoring devices and measures.
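
To give an idea of how two transfer spectra can be compared independently of gain, the sketch below evaluates a gain-compensated spectral distance of the Itakura/Itakura-Saito family between two toy all-pole power spectra. The exact gain-independent Itakura measure used in the thesis may differ in detail; the pole values and spectrum lengths here are arbitrary.

    # Gain-compensated spectral distance in the Itakura/Itakura-Saito family;
    # toy spectra only, and possibly not the exact measure used in the thesis.
    import numpy as np

    def gain_free_spectral_distance(P1, P2, eps=1e-12):
        r = (P1 + eps) / (P2 + eps)                        # ratio of the two power spectra
        return np.log(np.mean(r)) - np.mean(np.log(r))     # invariant to scaling either spectrum

    w = np.linspace(0, np.pi, 256, endpoint=False)
    P_iaif = 1.0 / np.abs(1 - 0.90 * np.exp(-1j * w)) ** 2  # toy IAIF-style all-pole spectrum
    P_tube = 1.0 / np.abs(1 - 0.85 * np.exp(-1j * w)) ** 2  # toy candidate tube-model spectrum
    d = gain_free_spectral_distance(P_iaif, P_tube)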



Signal Processing Seminar

Tutorial on Sum-of-squares Representation in Optimization and Applications in Signal Processing

Tuomas Aittomäki
