Agenda

Signal Processing Seminar

Electromagnetic 3D anisotropic imaging in the model reduction framework

Jörn Zimmerling
T.U. Delft


Signal Processing Seminar

Signal Processing on Kernel-based Random Graphs

Matthew Morency

We present the theory of sequences of random graphs and their convergence to limit objects. Sequences of dense random graphs are shown to converge to their limit objects in both their structural properties and their spectra. The limit objects are bounded symmetric functions on $[0,1]^2$. The kernel functions define an equivalence class and thus identify collections of large random graphs that are spectrally and structurally equivalent. As the spectrum of the graph shift operator defines the graph Fourier transform (GFT), the behavior of the spectrum of the underlying graph has a great impact on the design and implementation of graph signal processing operators such as filters. The spectra of several graph limits are derived analytically and verified with numerical examples.
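
As a small numerical illustration of this convergence (a sketch under my own assumptions, not taken from the talk: a simple rank-one kernel and uniform node sampling), the top eigenvalue of the scaled adjacency matrix of a sampled dense random graph approaches the top eigenvalue of the kernel's integral operator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(size=n)

# Kernel W(x, y) = x * y on [0,1]^2; its integral operator has a single
# nonzero eigenvalue: int_0^1 (x*y)*y dy = x/3, i.e. eigenvalue 1/3
# with eigenfunction f(y) = y.
P = np.outer(x, x)

# Sample a dense random graph: edge (i, j) present with probability W(x_i, x_j).
U = rng.uniform(size=(n, n))
A = (np.triu(U, 1) < np.triu(P, 1)).astype(float)
A = A + A.T

# The spectrum of A / n approaches the spectrum of the kernel operator.
top = np.linalg.eigvalsh(A / n)[-1]
print(round(top, 3))  # close to 1/3
```

The same scaled-adjacency construction works for any bounded symmetric kernel; only the limiting eigenvalues change.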


Signal Processing Seminar

Rethinking Sketching as Sampling: A Graph Signal Processing Approach

Fernando Gama

Sampling of bandlimited graph signals has well-documented merits for dimensionality reduction, affordable storage, and online processing of streaming network data. Most existing sampling methods are designed to minimize the error incurred when reconstructing the original signal from its samples. Oftentimes these parsimonious signals serve as inputs to computationally intensive linear operators (e.g., graph filters and transforms). Hence, interest shifts from reconstructing the signal itself towards efficiently approximating the output of the prescribed linear operator.

In this context, we propose a novel sampling scheme that leverages the bandlimitedness of the input as well as the transformation whose output we wish to approximate. We formulate problems to jointly optimize sample selection and a sketch of the target linear transformation, so that when the latter is affordably applied to the sampled input signal, the result is close to the desired output. These designs are carried out offline, and several heuristic (sub)optimal solvers are proposed to accommodate high-dimensional problems, especially when computational resources are at a premium.

Similar sketching as sampling ideas are also shown effective in the context of linear inverse problems. The developed sampling plus reduced-complexity processing pipeline is particularly useful for streaming data, where the linear transform has to be applied fast and repeatedly to successive inputs or response signals.

Numerical tests show the effectiveness of the proposed algorithms in classifying handwritten digits from as few as 20 out of 784 pixels in the input images, as well as in accurately estimating the frequency components of bandlimited graph signals sampled at few nodes.
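
A minimal sketch of the core idea (the orthonormal basis, random node selection, and the exact-inverse sketch below are my assumptions for a toy example, not the paper's optimized designs): if the input is K-bandlimited, a sketch of the operator computed offline lets K samples reproduce the full operator output exactly in the noiseless case.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 50, 5

# Orthonormal "graph Fourier" basis; x = V_K z is a K-bandlimited signal.
V, _ = np.linalg.qr(rng.standard_normal((N, N)))
V_K = V[:, :K]
x = V_K @ rng.standard_normal(K)

H = rng.standard_normal((N, N))             # target linear operator
idx = rng.choice(N, size=K, replace=False)  # sampled nodes

# Offline sketch B: B (x at sampled nodes) equals H x for any bandlimited x,
# assuming the K x K submatrix V_K[idx] is invertible.
B = H @ V_K @ np.linalg.inv(V_K[idx])

err = np.linalg.norm(B @ x[idx] - H @ x)
print(err < 1e-6)  # True in the noiseless case
```

With noisy inputs or more samples than K, the inverse above is replaced by the least-squares designs the talk optimizes.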


MSc SS Thesis Presentation

Multiway Component Analysis for the Removal of Far Ventricular Signal in Unipolar Epicardial Electrograms of Patients with Atrial Fibrillation

Jelimo Maswan

Atrial fibrillation (AF) is one of the most common clinical arrhythmias, with high morbidity and mortality. Despite this, the electrophysiological and pathological mechanisms associated with AF largely remain a mystery, encouraging the use of ever more sophisticated techniques to extract vital information for diagnostic and therapeutic purposes. Contamination by signals of ventricular origin is considered the main artifact in high-resolution epicardial electrograms (EGMs) that hinders the accurate and efficient analysis of AF EGM datasets. Furthermore, the complexity and dynamism of AF signals call for robust data analysis tools that can effectively reduce or remove ventricular activity (VA) while preserving the texture and morphology of atrial activity (AA).

Multiway component analysis, specifically the block term decomposition (BTD), proves useful for the decontamination of epicardial EGMs, as demonstrated in this project: it enables the automatic estimation of VA on an electrode-by-electrode basis, which is thereafter subtracted in the temporal and/or power spectral domain, retaining AA with relatively high accuracy.

The performance of BTD relative to average beat subtraction (ABS) and the more restrictive canonical polyadic decomposition (CPD) is visually verified and numerically confirmed using a set of key performance indices. Additionally, the technique is entirely data-driven, i.e., it does not depend on any statistical properties of the data; if and when such properties are available, however, they can enhance performance via the imposition of appropriate constraints in the tensor decomposition.
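
For reference, the simplest of the compared baselines, average beat subtraction, can be sketched in a few lines (a toy model under my own assumptions: an identical VA template in every beat, perfect beat alignment, and white noise standing in for AA):

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n_beats, beat_len = 1000, 32, 400
t = np.arange(beat_len) / fs

# Toy electrogram: a fixed ventricular spike repeated every beat, plus
# white noise standing in for atrial activity.
ventricular = np.exp(-((t - 0.2) ** 2) / 2e-4)
atrial = 0.1 * rng.standard_normal(n_beats * beat_len)
egm = atrial + np.tile(ventricular, n_beats)

# Average beat subtraction: average the aligned beats to estimate the
# repeating VA template, then subtract it from each beat.
beats = egm.reshape(n_beats, beat_len)
template = beats.mean(axis=0)
cleaned = (beats - template).ravel()

print(np.abs(template - ventricular).max() < 0.2)  # template ~ true VA
```

ABS relies on the VA template being identical from beat to beat, which is precisely where the more flexible tensor-based estimation is compared against it in the talk.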


Signal Processing Seminar

Graph Sampling for Covariance Estimation

Geert Leus

In this talk, the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers.
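
The central least-squares result can be illustrated with a small numerical sketch (the random symmetric shift operator and random vertex selection below are my assumptions for a toy example): observing the covariance on K sampled vertices yields K(K+1)/2 equations, which can suffice to recover all N power spectrum values without any spectral priors.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 12, 6  # K(K+1)/2 = 21 >= N = 12 unknowns

# Random symmetric graph shift; its eigenbasis V defines the GFT.
shift = rng.standard_normal((N, N))
shift = shift + shift.T
_, V = np.linalg.eigh(shift)

p_true = rng.uniform(0.5, 2.0, size=N)      # graph power spectrum
C = V @ np.diag(p_true) @ V.T               # covariance of the graph signal

idx = rng.choice(N, size=K, replace=False)  # sampled vertices
C_sub = C[np.ix_(idx, idx)]                 # observed subsampled covariance

# Least squares: vec(C_sub) = M p with M[:, k] the vec of v_k v_k^T
# restricted to the sampled vertices; no prior on the shape of p is used.
M = np.stack([np.outer(V[idx, k], V[idx, k]).ravel() for k in range(N)],
             axis=1)
p_hat = np.linalg.lstsq(M, C_sub.ravel(), rcond=None)[0]
print(np.allclose(p_hat, p_true))  # exact recovery
```

In practice C_sub is itself estimated from subsampled snapshots, and the talk's parametric variants replace the unstructured p with a filter model.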


Signal Processing Seminar

Accurate Calculation of the Mean Strain of Non-uniform Strain Fields Using a Conventional FBG Sensor

Aydin Rajabzadeh

In the past few decades, fibre Bragg grating (FBG) sensors have gained a lot of attention in the field of distributed point strain measurement. One of the most interesting properties of these sensors is the presumed linear relationship between the strain and the peak wavelength shift of the FBG reflection spectrum. However, subjecting such sensors to a non-uniform stress field will in general result in a strain estimation error when this linear relationship is used, due to the difference between the average strain over the length of the sensor and the point strain implied by the peak wavelength shift of the FBG reflected spectrum. In this presentation, we will first introduce a new formulation for the analysis of FBG reflected spectra under an arbitrary strain distribution. The presented method is an approximation of the classic transfer matrix model, and will be called the approximated transfer matrix model (ATMM). Using the properties of this new formulation, a method will be presented that compensates for the mean strain estimation error; it will be validated using simulations and experimental FBG measurements.


Signal Processing Seminar

Tuomas Aittomäki

With the increasing demand for wireless communication at high data rates, more and more spectral resources are being allocated to communication systems. This has led to the risk of a shrinking spectrum allocated exclusively to radars. As the use of radars is likely to increase in the future, it is necessary to look at how radar and communication systems could co-exist. This talk gives an overview of developments in the shared use of spectral and hardware resources between these systems.


Signal Processing Seminar

Blind calibration of radio astronomical phased arrays

Stefan Wijnholds
ASTRON

Radio astronomical phased arrays are usually calibrated under the assumption that the observed scene is known. That assumption may not hold if a new frequency window is opened (for example, with the currently planned antenna arrays in space observing below 10 MHz) or when there are unexpected source signals such as transients or radio frequency interference (RFI). In this talk, I present recent work on blind calibration of radio astronomical phased arrays under the assumption that the observed scene is sparse. The resulting method applies sparse reconstruction techniques to the measured array covariance matrices instead of to time series data. I discuss the computational speed-up provided by this shift from the signal domain to the power domain and explain how phase transition diagrams need to be reinterpreted in this context.


PhD Thesis Defence

Signal Strength Based Localization and Path-Loss Exponent Self-Estimation in Wireless Networks

Yongchang Hu

In wireless communications, received signal strength (SS) measurements are easy and convenient to gather. SS-based techniques can be incorporated into any device that is equipped with a wireless chip.

This thesis studies SS-based localization and path-loss exponent (PLE) self-estimation. Although these two research lines might seem unrelated, they are marching towards the same goal. The former can enable even a very simple wireless chip to infer its location. But to solve that localization problem, the PLE is required: it is one of the key parameters of wireless propagation channels that determine the SS level. This makes the PLE crucial to SS-based localization, although it is often unknown. Therefore, we need to develop accurate and robust PLE self-estimation approaches, which will eventually contribute to improved localization performance.

We start with the first research line, where we try to cope with all possible issues that we encounter in solving the localization problem. To eliminate the unknown transmit power issue, we adopt differential received signal strength (DRSS) measurements. Colored noise, non-linearity and non-convexity are the next three major issues. To deal with the first two, we introduce a whitened linear data model for DRSS-based localization. Based on that, and assuming the PLE is known, three different approaches are proposed to tackle the non-convexity issue: an advanced best linear unbiased estimator (A-BLUE), a Lagrangian estimator (LE) and a robust semidefinite programming (SDP)-based estimator (RSDPE). To cope with an unknown PLE, we propose a robust SDP-based block coordinate descent estimator (RSDP-BCDE) that jointly estimates the PLE and the target location. Its performance iteratively converges to that of the RSDPE with a known PLE.
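
A toy sketch of the DRSS idea (with a brute-force grid search standing in for the thesis's closed-form and SDP estimators, a known PLE, and no noise, all of which are my simplifications):

```python
import numpy as np

gamma = 3.0                     # path-loss exponent, assumed known here
anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
target = np.array([3.0, 7.0])
P0 = 17.0                       # unknown transmit power (cancels below)

d = np.linalg.norm(anchors - target, axis=1)
rss = P0 - 10 * gamma * np.log10(d)   # noiseless log-distance model
drss = rss[1:] - rss[0]               # differencing removes P0 entirely

# Brute-force search over candidate locations for the best DRSS fit.
xs = np.linspace(0.05, 9.95, 199)
grid = np.array([[a, b] for a in xs for b in xs])
dg = np.linalg.norm(grid[:, None, :] - anchors[None, :, :], axis=2)
model = -10 * gamma * (np.log10(dg[:, 1:]) - np.log10(dg[:, :1]))
est = grid[np.argmin(((model - drss) ** 2).sum(axis=1))]
print(est)  # recovers the true location [3. 7.]
```

The differencing step removes the unknown transmit power before any location estimation takes place; the thesis's contribution is solving the resulting non-convex problem with colored noise, which the grid search here sidesteps.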

As mentioned earlier, generating DRSS measurements eliminates the unknown transmit power. This is very similar to the way time-difference-of-arrival (TDOA) methods cope with an unknown transmit time: both use a differencing process to remove an unknown linear nuisance parameter. Our DRSS study shows that the differencing process does not cause any information loss, and hence the selection of the reference is not important. However, this apparently contradicts what is commonly known in TDOA-based localization, where selecting a good reference is crucial. To resolve this conflict, we introduce a unified framework for linear nuisance parameters, such that all our conclusions apply to any problem that can be cast in this form. Three methods for coping with linear nuisance parameters are considered by investigating their best linear unbiased estimators (BLUEs): joint estimation, the orthogonal subspace projection (OSP) method and the differential method. The results coincide with those obtained in our DRSS study. For TDOA-based localization, it is actually the modelling process that causes a reference-dependent information loss, not the differencing process. Many other interesting conclusions are drawn from this framework as well.
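
The no-information-loss claim for the differencing process can be checked numerically on a generic linear model y = A&theta; + b&middot;1 + n with an unknown scalar nuisance b (a sketch under my assumptions of white noise and an arbitrary reference): joint estimation, OSP, and differencing followed by properly weighted (generalized) least squares all return the same estimate of &theta;, and the weighting by the inverse of D D^T is what makes the reference choice irrelevant.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 30, 3
A = rng.standard_normal((n, p))
theta = rng.standard_normal(p)
ones = np.ones((n, 1))
y = A @ theta + 4.2 + 0.1 * rng.standard_normal(n)  # 4.2 = nuisance b

# 1) Joint estimation of theta and the nuisance b.
est_joint = np.linalg.lstsq(np.hstack([A, ones]), y, rcond=None)[0][:p]

# 2) OSP: project onto the orthogonal complement of span{1}, then LS.
P = np.eye(n) - ones @ ones.T / n
est_osp = np.linalg.lstsq(P @ A, P @ y, rcond=None)[0]

# 3) Differencing against reference node 0, then GLS with covariance D D^T.
D = np.hstack([-np.ones((n - 1, 1)), np.eye(n - 1)])
W = np.linalg.inv(D @ D.T)
DA, Dy = D @ A, D @ y
est_diff = np.linalg.solve(DA.T @ W @ DA, DA.T @ W @ Dy)

print(np.allclose(est_joint, est_osp) and np.allclose(est_joint, est_diff))
```

Algebraically, D^T (D D^T)^{-1} D equals the projector P, since the rows of D span exactly the orthogonal complement of the all-ones vector; this is why all three estimators coincide.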

Next, we turn our attention to the second research line. Undoubtedly, knowledge of the PLE is decisive for SS-based localization, and accurately estimating the PLE will hence lead to better localization performance. However, estimating the PLE also benefits other applications. If each node can self-estimate the PLE in a distributed fashion, without any external assistance or information, this can be very helpful for efficiently designing wireless communication and networking systems, since the PLE has a multi-faceted influence therein. Driven by this idea, we propose two closed-form (weighted) total least squares (TLS) methods for self-estimating the PLE, based solely on locally collected SS measurements. To resolve the unknown nodal distance issue, we extract information from the random placement of neighbours to facilitate the derivations. We also elaborate on many possible applications, since this kind of PLE self-estimation has not been introduced before.

Whereas the previous two methods estimate the PLE by minimizing a residual, we also want to introduce likelihood-based methods, such as maximum likelihood estimation. The obstacles here are that the distribution of the SS measurements is unknown and mathematically difficult to compute, since the SS is subject not only to the wireless channel effects but also to the geometric dynamics (the random node placement). To deal with this, we start with a simple case that considers only the geometric path loss of the wireless channel. We are the first to discover that, in this case, the SS measurements in random networks are Pareto distributed. Based on that, we derive the Cram&eacute;r-Rao lower bound (CRLB) and introduce two maximum likelihood (ML) estimators for PLE self-estimation. Although we considered a simplified setting, finding the general SS distribution would still be very useful for studying wireless communications and networking.
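
The Pareto observation can be sketched numerically (a toy setup under my own assumptions: neighbours uniform in a disc of radius R around the receiver, unit transmit power, and geometric path loss only). If the distance d has density 2d/R&sup2;, then s = d^(-&gamma;) is Pareto with shape 2/&gamma; and scale R^(-&gamma;), so the standard Pareto maximum likelihood estimator recovers the PLE:

```python
import numpy as np

rng = np.random.default_rng(6)
gamma, R, n = 3.5, 1.0, 200_000

# Neighbours placed uniformly in a disc of radius R around the receiver:
# the distance d then has density 2d / R^2.
d = R * np.sqrt(rng.uniform(size=n))

# Geometric path loss only (unit power): s = d^(-gamma) is Pareto
# distributed with shape 2/gamma and scale R^(-gamma).
s = d ** (-gamma)

# Standard Pareto maximum likelihood estimate of the shape parameter.
s_min = R ** (-gamma)
alpha_hat = n / np.log(s / s_min).sum()
gamma_hat = 2.0 / alpha_hat
print(round(gamma_hat, 2))  # close to the true PLE 3.5
```

The tail identity P(S > s) = P(d < s^(-1/&gamma;)) = s^(-2/&gamma;)/R&sup2; is what makes S Pareto; shadowing and other channel effects, which the thesis's general setting must handle, are omitted here.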


MSc BME thesis presentation

System Building Blocks for Mathematical Operators Using Stochastic Resonance -- Application in an Action Potential Detection System

Insani Abdi Bangsa

MSc thesis presentation on Stochastic Resonance Systems for Biomedical Applications