Yonina Eldar
Digital applications have developed rapidly over the last few decades. Since many sources of information are analog or continuous-time in nature, discrete-time signal processing (DSP) inherently relies on sampling a continuous-time signal to obtain a discrete-time representation. Consequently, sampling theories lie at the heart of signal processing applications and communication systems. To accommodate high operating rates while retaining low computational cost, efficient analog-to-digital (ADC) and digital-to-analog (DAC) converters must be developed. Many of the limitations encountered in current converters are due to the traditional assumption that the sampling stage must acquire the data at the Shannon-Nyquist rate, corresponding to twice the signal bandwidth. To avoid aliasing, a sharp low-pass filter must be implemented prior to sampling. The reconstructed signal is also a bandlimited function, generated by integer shifts of the sinc interpolation kernel.
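For reference, the classical paradigm just described can be stated explicitly (standard textbook material, not part of the original abstract): a signal x(t) bandlimited to B Hz and sampled at intervals T \le 1/(2B) is perfectly recovered from its pointwise samples by sinc interpolation,

x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\mathrm{sinc}\!\left(\frac{t-nT}{T}\right), \qquad \mathrm{sinc}(t) = \frac{\sin(\pi t)}{\pi t}.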
A major drawback of this paradigm is that many natural signals are better represented in bases other than the Fourier basis, or possess further structure in the Fourier domain. In addition, ideal pointwise sampling, as assumed by the Shannon theorem, cannot be implemented. More practical ADCs introduce a distortion which should be accounted for in the reconstruction process. Finally, implementing the infinite sinc interpolation kernel is difficult, since it decays slowly. In practice, much simpler kernels are used, such as linear interpolation. Therefore, there is a need to develop a general sampling theory that will accommodate an extended class of signals beyond bandlimited functions, and will account for the nonideal nature of the sampling and reconstruction processes.
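To make this nonideal setting concrete (the notation below is chosen here for illustration and does not come from the abstract): a practical ADC preceded by an acquisition filter delivers generalized samples that are inner products with a real sampling kernel s(t) rather than pointwise values,

c[n] = \int_{-\infty}^{\infty} x(t)\,s(t-nT)\,dt = \langle s(t-nT), x(t)\rangle,

while a practical DAC reconstructs with an easily implemented kernel w(t), for example the triangular (hat) function of linear interpolation,

\hat{x}(t) = \sum_{n\in\mathbb{Z}} d[n]\,w(t-nT),

where d[n] are digitally processed sample values.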
In this tutorial we present several extensions of the Shannon theorem, developed primarily in the past two decades, which treat a wide class of input signals as well as nonideal sampling and nonlinear distortions. This framework is based on viewing sampling in the broader sense of projection onto appropriate subspaces, and then choosing the subspaces to yield interesting new possibilities, such as below-Nyquist sampling of sparse signals, pointwise sampling of non-bandlimited signals, and perfect compensation of nonlinear effects.
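One standard way to formalize the subspace viewpoint (a formulation common in this literature; the symbols follow the sketch after the previous paragraph rather than the tutorial itself): the input is modeled as lying in a shift-invariant subspace generated by a kernel a(t),

x(t) = \sum_{n\in\mathbb{Z}} d[n]\,a(t-nT).

If the sampled cross-correlation r_{sa}[k] = \langle s(t-kT), a(t)\rangle has a discrete-time Fourier transform R_{sa}(e^{j\omega}) bounded away from zero, the coefficients d[n] are obtained from the generalized samples c[n] by the digital correction filter H(e^{j\omega}) = 1/R_{sa}(e^{j\omega}), after which \hat{x}(t) = \sum_n d[n]\,a(t-nT) = x(t). Geometrically, this recovery acts as an oblique projection of the input onto the subspace spanned by the shifts of a(t), with the direction of projection determined by the sampling kernel.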
We begin by presenting a broad class of sampling theorems for signals confined to an arbitrary subspace in the presence of nonideal sampling and nonlinear distortions. Surprisingly, many types of nonlinearities that are encountered in practice do not pose any technical difficulty and can be completely compensated for. Next, we introduce minimax recovery techniques that best approximate an arbitrary smooth input signal. These methods can also be used to reconstruct a signal using a given interpolation kernel that is easy to implement, with only a minor loss in signal quality. To further enhance the quality of the interpolated signal, we discuss fine-grid recovery techniques in which the system rate is increased during reconstruction. As we show, the algorithms we develop can all be extended quite naturally to the recovery of random signals. Finally, we discuss sparse analog signals that can be represented by a disjoint set of bands in some transform domain. Combining traditional sampling ideas with results from the field of compressed sensing, we show how to reconstruct an analog multi-band signal from minimum-rate samples when the band locations are unknown. More generally, we show how the recent ideas developed in the context of compressed sensing can be generalized to analog signals as well.
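As a small numerical illustration of the first technique above (recovery of a signal in a known subspace from nonideal samples), the following self-contained NumPy sketch is my own construction under simplified periodic assumptions, not code from the tutorial: the generator a(t) is a linear B-spline (hat function), the sampling kernel s(t) is a local averaging window, and the correction filter 1/R_{sa} is applied in the DFT domain.

```python
# Sketch: recovery of a signal in a shift-invariant subspace from nonideal
# (local-average) samples via a digital correction filter.
# Periodic, discretized model; the names a, s, r, c, d are illustrative only.
import numpy as np

N = 32          # number of expansion coefficients / samples per period
M = 64          # fine-grid points per sampling interval T = 1
L = N * M       # fine-grid length
dt = 1.0 / M    # fine-grid spacing

# Signed offsets on the periodic fine grid, in units of T
tau = ((np.arange(L) + L // 2) % L - L // 2) * dt

# Generator a(t): linear B-spline (hat); sampling kernel s(t): averaging window
a = np.maximum(0.0, 1.0 - np.abs(tau))
s = (np.abs(tau) < 0.5).astype(float)

# Synthesize x(t) = sum_n d[n] a(t - nT) for random coefficients d
rng = np.random.default_rng(0)
d = rng.standard_normal(N)
d_up = np.zeros(L)
d_up[::M] = d
x = np.real(np.fft.ifft(np.fft.fft(d_up) * np.fft.fft(a)))

def sampled_inner_products(signal):
    # Approximates <s(t - nT), signal(t)> for n = 0, ..., N-1 on the fine grid
    corr = np.real(np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(s)))) * dt
    return corr[::M]

c = sampled_inner_products(x)   # nonideal samples c[n]
r = sampled_inner_products(a)   # sampled cross-correlation r_sa[n]

# Digital correction: divide by R_sa(e^{jw}) in the DFT domain
d_hat = np.real(np.fft.ifft(np.fft.fft(c) / np.fft.fft(r)))

# Reconstruct x_hat(t) = sum_n d_hat[n] a(t - nT) and compare with x(t)
d_hat_up = np.zeros(L)
d_hat_up[::M] = d_hat
x_hat = np.real(np.fft.ifft(np.fft.fft(d_hat_up) * np.fft.fft(a)))

print("max coefficient error:", np.max(np.abs(d_hat - d)))
print("max reconstruction error:", np.max(np.abs(x_hat - x)))
```

On this discretized model the recovered coefficients match the originals to machine precision, which mirrors the perfect-recovery guarantee of the subspace framework when the cross-correlation sequence is invertible.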
These additional aspects extend the existing sampling framework and incorporate more realistic sampling and interpolation models. Since DSP applications all inherently rely on the ability to accurately sample and represent analog signals, we believe that an overview of this broad framework, based on the many new developments in the field in recent years, is relevant to a large portion of the signal processing community.
Yonina C. Eldar received the B.Sc. degree in Physics in 1995 and the B.Sc. degree in Electrical Engineering in 1996, both from Tel-Aviv University (TAU), Tel-Aviv, Israel, and the Ph.D. degree in Electrical Engineering and Computer Science in 2001 from the Massachusetts Institute of Technology (MIT), Cambridge. From January 2002 to July 2002 she was a Postdoctoral Fellow at the Digital Signal Processing Group at MIT. She is currently an Associate Professor in the Department of Electrical Engineering at the Technion - Israel Institute of Technology, Haifa, Israel. She is also a Research Affiliate with the Research Laboratory of Electronics at MIT. From 1992 to 1996 she was in the program for outstanding students at TAU. In 1998, she held the Rosenblith Fellowship for study in Electrical Engineering at MIT, and in 2000, she held an IBM Research Fellowship. From 2002 to 2005 she was a Horev Fellow of the Leaders in Science and Technology program at the Technion and an Alon Fellow. In 2004, she was awarded the Wolf Foundation Krill Prize for Excellence in Scientific Research, in 2005 the Andre and Bella Meyer Lectureship, in 2007 the Henry Taub Prize for Excellence in Research, and in 2008 the Hershel Rich Innovation Award, the Award for Women with Distinguished Contributions, and the Muriel & David Jacknow Award for Excellence in Teaching. She is a member of the IEEE Signal Processing Theory and Methods Technical Committee, an Associate Editor for the IEEE Transactions on Signal Processing, the EURASIP Journal of Signal Processing, and the SIAM Journal on Matrix Analysis and Applications, and a member of the Editorial Board of Foundations and Trends in Signal Processing.