More than 60 people attended the first edition of SETIA, on April 23, 2010. Two tutorial-type talks took place in the first half, followed by four specialized talks, with speakers from Chile and Denmark. The following is the list of talks, including abstracts and links to their slides:
- “Una Introducción Intuitiva a la Teoría de la Información” (“An Intuitive Introduction to Information Theory”; talk given in Spanish)
Slides
Dr. Milan S. Derpich
Departamento de Electrónica,
Universidad Técnica Federico Santa María, Chile
Abstract:
In this tutorial talk, the basic expressions and quantities used in
information theory will be rediscovered starting from the everyday, human
notion of information. Through simple, intuitive reasoning, the expressions
for entropy, conditional entropy, and mutual information will be derived and
discussed, and a brief look will be taken at the asymptotic equipartition
property and some of its consequences. Finally, some of the most important
results of the theory will be stated and discussed, such as the capacity
theorem for noiseless channels, the capacity theorem for noisy channels, the
separation theorem, and other results concerning the trade-off between data
rate and distortion.
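For reference, the quantities the tutorial derives are the standard ones; for discrete random variables X and Y:

```latex
% entropy
H(X) = -\sum_{x} p(x)\,\log_2 p(x)
% conditional entropy
H(X \mid Y) = -\sum_{x,y} p(x,y)\,\log_2 p(x \mid y)
% mutual information
I(X;Y) = H(X) - H(X \mid Y)
       = \sum_{x,y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)}
```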
- “An Introduction To Multiple-Description Coding — A Joint Source-
Channel Coding Paradigm and its Potential Application to Digital TV”
Slides
Dr. Jan Ostergaard,
Aalborg University, Denmark
Abstract:
In 1948 Claude E. Shannon provided a coding theorem for joint source-channel
coding, which basically states that if the source coding rate is strictly below the
channel capacity, then reliable communication is possible. Moreover, in the
direct part of the proof, he showed that, for stationary memoryless sources
and channels, it is in fact possible to perform separate source and channel
coding without any loss. Specifically, the source coding can be done without
taking the channel into account and the channel coding can be done without any
knowledge of the source. However, in order to achieve such source-channel
separation, encoding schemes of high delay and complexity are generally
required. Moreover, today’s communication networks have a heterogeneous
infrastructure which deviates from earlier point-to-point scenarios. Thus, the
separation theorem is not directly applicable to these kinds of situations,
and it is often possible to improve performance by exploiting joint
source-channel coding techniques.

In this talk, we provide an introduction to a joint source-channel coding
technique known as multiple-description (MD) coding. In MD coding, a single
source is encoded into several descriptions. All the descriptions are able to
individually approximate the source to within prescribed fidelities.
Furthermore, the descriptions are able to refine each other and thus jointly
provide improvements. The potential applicability of MD coding to digital
video broadcast is also discussed.
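To make the MD idea concrete, here is a deliberately naive two-description sketch (our illustration, not a scheme from the talk): the even and odd samples of a correlated signal serve as the two descriptions, each description alone yields a coarse zero-order-hold reconstruction, and both together recover the source up to quantization noise.

```python
import numpy as np

def md_encode(x, step=0.1):
    """Toy two-description encoder: quantize even and odd samples."""
    q = lambda s: step * np.round(s / step)  # uniform scalar quantizer
    return q(x[0::2]), q(x[1::2])

def side_decode(d):
    """Side decoder: zero-order hold fills in the missing samples."""
    return np.repeat(d, 2)

def central_decode(d_even, d_odd):
    """Central decoder: interleave both descriptions."""
    x = np.empty(d_even.size + d_odd.size)
    x[0::2], x[1::2] = d_even, d_odd
    return x

rng = np.random.default_rng(0)
x = 0.1 * np.cumsum(rng.normal(size=1000))         # correlated toy source
d0, d1 = md_encode(x)
mse = lambda xh: np.mean((xh - x) ** 2)
print("side MSE   :", mse(side_decode(d0)))        # one description lost
print("central MSE:", mse(central_decode(d0, d1))) # both received
```

The central distortion is limited only by the quantizer, while each side decoder pays an extra interpolation penalty; proper MD schemes shape this central/side trade-off far more efficiently than sample splitting.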
- “Non-Product Data-Dependent Partitions for Mutual Information Estimation: Strong Consistency and Applications”
Slides
Dr. Jorge Silva
Department of Electrical Engineering
Facultad de Ciencias Físicas y Matemáticas
Universidad de Chile
Abstract:
The problem of mutual information (MI) estimation based on data-dependent
partitions is addressed in this work. A histogram-based construction
is proposed, considering non-product data-dependent partitions, and
sufficient conditions are stipulated to guarantee a strongly consistent
estimate of mutual information.
As applications of this result, two emblematic families of density-free
strongly consistent estimates are derived: one based on statistically
equivalent blocks (Gessaman's partition) and the other on a
tree-structured vector quantization scheme.
Preliminary experimental results demonstrate the superiority of these
data-driven techniques, in a bias-variance analysis, over conventional
product histogram-based and kernel plug-in estimates.
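As a rough sketch of the histogram plug-in idea behind the first family (our simplification; the consistency conditions on how cell counts must grow with the sample size are not enforced here):

```python
import numpy as np

def gessaman_mi(x, y, k):
    """Plug-in MI estimate (nats) over Gessaman's partition: k equal-count
    columns along x, each split into k equal-count cells along y
    (a non-product, data-dependent partition)."""
    n = x.size
    mi = 0.0
    for col in np.array_split(np.argsort(x), k):   # equal-count x-columns
        p_x = col.size / n                         # empirical P(column)
        y_col = np.sort(y[col])
        for cell in np.array_split(y_col, k):      # equal-count y-cells
            p_xy = cell.size / n                   # empirical P(cell)
            # empirical marginal probability of the cell's y-interval
            p_y = np.mean((y >= cell[0]) & (y <= cell[-1]))
            mi += p_xy * np.log(p_xy / (p_x * p_y))
    return mi

# sanity check on a Gaussian pair with known MI = -0.5*log(1 - rho^2)
rng = np.random.default_rng(1)
rho, n = 0.8, 20000
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
print("estimate:", gessaman_mi(x, y, k=16))
print("true    :", -0.5 * np.log(1 - rho**2))
```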
- “A Framework for Control System Design Subject to Average Data-Rate Constraints”
Slides
Dr. Eduardo I. Silva
Departamento de Electrónica
Universidad Técnica Federico Santa María, Chile
Abstract:
In this talk we will study the performance of control systems subject
to average data-rate limits. By focusing on a class of source coding schemes
built around entropy coded dithered quantizers, we will describe a novel
framework to deal with such constraints in a tractable manner that combines
ideas from both information and control theories. The focus is on a situation
where a noisy linear system has been designed assuming transparent feedback
and, due to implementation constraints, a source coding scheme (with unity
signal transfer function) has to be deployed in the feedback path. The aim is
to design such a coding scheme so as to minimize the impact of quantization on
the variance of a certain error signal (e.g., tracking error). For this
problem, a closed-form upper bound on the best achievable performance for a
given average data-rate constraint will be presented. We will also study the
interplay between stability and average data-rates for the considered
architecture. It will be shown that the proposed class of coding schemes can
achieve mean square stability at average data-rates that are, at most, 1.254
bits per sample away from the absolute minimum rate for stability established
by Nair and Evans. This rate penalty is compensated by the simplicity of the
proposed approach.
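The flavor of the rate/stability trade-off can be seen in a toy scalar simulation (our sketch, using the standard additive model for subtractively dithered uniform quantizers and an arbitrary unstable pole; this is not the coding scheme of the talk):

```python
import numpy as np

rng = np.random.default_rng(2)
a, step, N = 1.6, 0.5, 50_000       # unstable pole, quantizer step, horizon
x, xs, idxs = 0.0, [], []

for _ in range(N):
    d = rng.uniform(-step / 2, step / 2)   # subtractive dither
    i = int(np.round((x + d) / step))      # transmitted quantizer index
    q = i * step - d                       # dithered reconstruction
    u = -a * q                             # certainty-equivalent control
    x = a * x + u + rng.normal(0.0, 0.1)   # noisy scalar plant
    xs.append(x)
    idxs.append(i)

print("state variance:", np.var(xs))       # bounded => mean square stable

# average data-rate of an ideal entropy coder ~ empirical index entropy
_, counts = np.unique(idxs, return_counts=True)
p = counts / counts.sum()
print("rate (bits/sample):", -(p * np.log2(p)).sum())
print("Nair-Evans minimum:", np.log2(a))
```

The empirical entropy of the transmitted indices approximates the average data-rate of an ideal entropy coder, and it sits a bounded number of bits above the Nair-Evans minimum log2(a); the talk quantifies that gap (at most 1.254 bits per sample) for its architecture.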
- “Improved Upper Bounds to the Causal Quadratic Rate-Distortion Function for Gaussian Stationary Sources”
Slides
Dr. Milan S. Derpich
Departamento de Electrónica,
Universidad Técnica Federico Santa María, Chile
Abstract:
The minimum data rate required to encode a random source with mean squared
error (MSE) distortion D is given by its rate-distortion function (RDF),
commonly denoted by R(D). The RDF for Gaussian stationary sources was fully
characterized shortly after Claude Shannon introduced the concept. However, it
is well known that achieving this RDF requires the use of non-causal filters,
which in practice implies unbounded delays. In contrast, much less is known
about the RDF for such sources under the additional constraint of causality or
zero delay. Causal and zero-delay coders are attractive in applications such
as voice and video coding, as well as in feedback systems.

In this talk, we improve the existing achievable rate regions for causal and
for zero-delay source coding of stationary Gaussian sources for mean squared
error (MSE) distortion. First, we define the information-theoretic causal
rate-distortion function (RDF), $R_c^{it}(D)$. In order to analyze it, we
introduce $\overline{R}_c^{it}(D)$, the information-theoretic causal RDF when
the reconstruction error is jointly stationary with the source. We then derive
four closed-form upper bounds to the gap between $\overline{R}_c^{it}(D)$ and
Shannon's RDF, two of them strictly smaller than 0.5 bits/sample at all rates,
and show that $\overline{R}_c^{it}(D)$ can be realized by an AWGN channel
surrounded by a unique set of causal pre-, post-, and feedback filters. A key
result is that finding such filters constitutes a convex optimization problem;
we propose an iterative procedure to solve it. Finally, we build upon
$\overline{R}_c^{it}(D)$ to improve existing bounds on the optimal performance
attainable by causal and zero-delay codes. This talk presents the results of a
paper recently accepted for ISIT 2010.
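For reference, the benchmark in these bounds is Shannon's (non-causal) RDF, given for a stationary Gaussian source with power spectral density S_X by the reverse water-filling solution:

```latex
R(D_\theta) = \frac{1}{4\pi}\int_{-\pi}^{\pi}
              \max\!\left(0,\; \log_2\frac{S_X(e^{j\omega})}{\theta}\right)\mathrm{d}\omega,
\qquad
D_\theta = \frac{1}{2\pi}\int_{-\pi}^{\pi}
           \min\!\left(\theta,\; S_X(e^{j\omega})\right)\mathrm{d}\omega,
```

where sweeping the water level θ traces out the curve; for a memoryless Gaussian source this reduces to R(D) = ½ log₂(σ²/D). Two of the four bounds above place the causal rate loss $\overline{R}_c^{it}(D) - R(D)$ below 0.5 bits/sample at all rates.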
- “Noise-Shaped Predictive Coding for Multiple Descriptions of a Colored Gaussian Source”
Slides
Dr. Jan Ostergaard
Aalborg University, Denmark
Abstract:
In this talk we address the connection between the multiple-description (MD)
problem and Delta-Sigma quantization. Specifically, we exploit the inherent
redundancy due to oversampling in Delta-Sigma quantization, and the simple
linear-additive noise model resulting from dithered lattice quantization, in
order to construct a symmetric MD coding scheme. We show that the use of
feedback by means of a noise shaping filter makes it possible to trade off
central distortion for side distortion. We then turn our attention to
Gaussian sources with memory. Specifically, we consider stationary (colored)
Gaussian sources and combine noise shaping and source prediction. We first
propose a new representation for the test channel that realizes the MD rate-
distortion function of a Gaussian source, both in the white and in the colored
source case. We then show that this test channel can be materialized by
embedding two source prediction loops, one for each description, within a
common noise shaping loop. While the noise shaping loop controls the trade-off
between the side and the central distortions, the role of prediction (as in
differential pulse code modulation) is to extract the source innovations from
the reconstruction at each of the side decoders, and thus reduce the coding
rate. Finally, we show that this scheme achieves the MD rate-distortion
function at all resolutions and all side-to-central distortion ratios, in the
limit of high dimensional quantization.
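A minimal single-description sketch of the noise-shaping mechanism the scheme builds on (our illustration, using the additive model for dithered quantization; the MD construction with two embedded prediction loops is not reproduced here):

```python
import numpy as np

def noise_shaped_quantize(x, step, rng):
    """First-order noise shaping: the previous quantization error is fed
    back and subtracted before quantizing, so the error seen at the
    output is shaped by the filter 1 - z^-1 (a high-pass response)."""
    y, e = np.empty_like(x), 0.0
    for k in range(x.size):
        d = rng.uniform(-step / 2, step / 2)        # subtractive dither
        v = x[k] - e                                # feed back past error
        y[k] = step * np.round((v + d) / step) - d  # dithered quantizer
        e = y[k] - v                                # current error
    return y

rng = np.random.default_rng(3)
n, step = 1 << 16, 0.25
x = np.sin(2 * np.pi * 0.01 * np.arange(n))         # slow, oversampled source
err = noise_shaped_quantize(x, step, rng) - x
E = np.abs(np.fft.rfft(err)) ** 2
half = E.size // 2
print("low-band error power :", E[:half].sum())
print("high-band error power:", E[half:].sum())
```

The quantization noise is pushed out of the oversampled signal band; in the MD scheme of the talk, the noise shaping filter instead controls the trade-off between the central and side distortions.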