CURRENT COMMUNICATIONS RESEARCH LABORATORIES AND PROJECTS
Information Theory and Information Retrieval
Graduate Students: N. Leung and S. Lawson
Professors: J. T. Coffey and S. Sechrest
This project examines the fundamental limits of efficient
context-based retrieval in large-scale databases, such as
those used in scientific and medical applications. The nature
and quantity of the data involved in these applications
demand new approaches, and it is the goal of this project to
investigate the role that the methods of information theory
can play in this task. Based on a number of simplified
abstract models, we have derived results that demonstrate
that expected access time can in general be greatly reduced
by adding redundancy to the database. The general problem
involves a number of interesting variants of classical
problems in source and channel coding and multi-user
information theory.
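As a toy illustration of the redundancy effect (a simplified model of
our own for exposition, not one of the project's abstract models),
suppose the same records are retrieved under two access contexts with
opposite popularity skews. Storing a second, differently ordered copy
lets each context scan a layout matched to it:

```python
def expected_cost(order, probs):
    """Expected scan depth when records are laid out in `order`."""
    return sum(probs[rec] * (rank + 1) for rank, rec in enumerate(order))

p_ctx_a = [0.60, 0.25, 0.10, 0.05]   # hypothetical skew for context A
p_ctx_b = [0.05, 0.10, 0.25, 0.60]   # opposite skew for context B

# One copy: a single layout must serve both contexts.
shared = [0, 3, 1, 2]                # a good compromise ordering
one_copy = (expected_cost(shared, p_ctx_a)
            + expected_cost(shared, p_ctx_b)) / 2

# Two copies (redundancy): each context scans a layout sorted for it.
copy_a = sorted(range(4), key=lambda r: -p_ctx_a[r])
copy_b = sorted(range(4), key=lambda r: -p_ctx_b[r])
two_copies = (expected_cost(copy_a, p_ctx_a)
              + expected_cost(copy_b, p_ctx_b)) / 2

print(one_copy, two_copies)          # 2.20 vs 1.60: redundancy helps
```

Storage doubles, but expected access cost drops; the abstract models
quantify exactly this kind of redundancy-versus-access-time tradeoff.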
In developing theoretical results in this research, we aim
to acquire insight that can provide first-order guidance in
system design. Further guidance will come from comparisons
with results obtained through more detailed system
simulations. We are also investigating the applicability and
robustness of these and related results. Actual datasets and
realistic workloads will be used to validate our models and
assess the applicability of our results to physical systems.
Maximum Likelihood Sequence Estimation for Asynchronous Data
Communications
Graduate Student: I. Sharfer
Professor: A. O. Hero
We are developing techniques for maximum likelihood sequence
estimation for asynchronous multiple access communications
with coherent spatial diversity using a receiver antenna
array. This project involves aspects of estimation theory
and lower bound analysis, iterative implementations of
maximum likelihood (Viterbi and EM algorithms), multiple
access communications, and antenna array processing (maximum
likelihood beamforming, direction finding, power
estimation). We have obtained a non-trivial extension of
the Snyder-Georghiades sequence estimation algorithm to the
array receiver, which includes spread spectrum modulation
and the effects of Rayleigh fading. This has been achieved
using an iterative maximum likelihood algorithm based on a
generalization, called space alternating generalized EM
(SAGE), of the expectation-maximization (EM) algorithm which
was recently developed by Fessler-Hero for problems in
tomographic reconstruction. The resulting SAGE-type
sequence estimation algorithm yields maximum likelihood
estimates at much lower complexity than the
Snyder-Georghiades algorithm, involves no approximations,
and generalizes easily to the multipath and Doppler shifts
common in mobile radio communications.
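For readers unfamiliar with Viterbi-style maximum likelihood sequence
estimation, the following minimal sketch shows the dynamic-programming
structure involved. It is an illustrative toy only: a single user, a
known two-tap channel, and assumed parameters, far simpler than the
asynchronous multiple-access problem treated in this project.

```python
import numpy as np

# State = previous symbol; branch metric = squared prediction error.
h = np.array([1.0, 0.5])                        # assumed channel taps
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=20)            # transmitted +/-1 symbols
r = np.convolve(s, h)[:len(s)] + 0.3 * rng.standard_normal(len(s))

states = [-1.0, 1.0]
cost = {st: 0.0 for st in states}               # accumulated path metrics
back = []                                       # backpointers per step

for k in range(len(s)):
    new_cost, ptr = {}, {}
    for cur in states:                          # hypothesized current symbol
        best_c, best_prev = None, None
        for prev in states:                     # hypothesized previous symbol
            y = h[0] * cur + (h[1] * prev if k > 0 else 0.0)
            c = cost[prev] + (r[k] - y) ** 2
            if best_c is None or c < best_c:
                best_c, best_prev = c, prev
        new_cost[cur], ptr[cur] = best_c, best_prev
    cost = new_cost
    back.append(ptr)

cur = min(cost, key=cost.get)                   # best final state
est = [cur]
for ptr in reversed(back[1:]):                  # trace the winning path back
    cur = ptr[cur]
    est.append(cur)
est.reverse()
print("symbol errors:", int((np.array(est) != s).sum()))
```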
Research papers of Prof. D. L. Neuhoff and
his students at the EECS Dept.
of the University of Michigan can be accessed by anonymous
FTP to ftp.eecs.umich.edu in the directory
/people/neuhoff.
Or, from the EECS Dept. network, the path is
/n/ftp/f/people/neuhoff.
Structured Vector Quantization and Asymptotic Quantization
Theory
Graduate Students: D. Hui, A. Balamesh, and D. Lyons
Professor: D. L. Neuhoff
Sponsor: National Science Foundation
Vector quantization is increasingly being used as a lossy
data compression technique for sources such as speech,
images, and video. Practical vector quantizers "structure"
their codebooks to simplify encoding and decoding. For
example, block transform, CELP, tree-structured, two-stage,
lattice, quadtree, product, pyramid, and finite-state vector
quantizers are common techniques, listed roughly in
decreasing order of structure. Although structure generally
has an adverse effect on rate/distortion performance, it
permits the use of quantizers with larger dimensions, which
usually results in much better performance for a given
complexity. Until now there has been little theory to
explain the complexity-performance tradeoff of structured
vector quantizers.
This project is developing new methods for analyzing
structured vector quantizers. One is an extension of
Bennett's integral to vector quantizers. It shows how the
mean-squared error depends on the distribution and shape of
quantization cells. Another is an asymptotic formula for
the probability density of the quantization error. These
new methods have led to the successful analysis of several
structured vector quantizers, including tree-structured and
two-stage quantizers. Still another is the analysis of
transform coders at very low rates.
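For reference, the extended Bennett's integral for vector quantizers is
usually stated in the following high-resolution form, where p(x) is the
source density, lambda(x) the codevector point density, m(x) the
inertial profile capturing cell shape, N the codebook size, and k the
dimension (notation is ours; the project's papers give the precise
statement):

```latex
% High-resolution (Bennett-type) approximation to the mean-squared
% error of a k-dimensional, N-point vector quantizer.
D \;\approx\; \frac{1}{N^{2/k}} \int m(x)\, p(x)\, \lambda(x)^{-2/k}\, dx
```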
The insight gained from this analysis has also led to a new
form of two-stage quantization, called cell-conditioned
multi-stage VQ, that has the same low-complexity advantages
as traditional multi-stage quantization, but asymptotically
suffers no loss in performance relative to unstructured
quantization. It has also led to new high-performance,
low-complexity methods for converting entropy coders into
fixed-rate coders.
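The following sketch shows generic (unconditioned) two-stage VQ, the
baseline that cell-conditioned multi-stage VQ improves on; the random
codebooks are stand-ins for trained ones:

```python
import numpy as np

# Two-stage VQ sketch: stage 1 coarsely quantizes the vector, stage 2
# quantizes the residual.  At the same 8-bit rate, two 16-entry searches
# (32 distance computations) replace one 256-entry search.

rng = np.random.default_rng(1)
k = 4                                        # vector dimension
cb1 = rng.standard_normal((16, k))           # coarse stage-1 codebook
cb2 = 0.25 * rng.standard_normal((16, k))    # finer stage-2 codebook

def nearest(cb, x):
    """Index of the codevector closest to x in squared error."""
    return int(np.argmin(((cb - x) ** 2).sum(axis=1)))

x = rng.standard_normal(k)                   # a source vector
i1 = nearest(cb1, x)                         # 4 bits: coarse cell
resid = x - cb1[i1]
i2 = nearest(cb2, resid)                     # 4 bits: refine the residual
x_hat = cb1[i1] + cb2[i2]

print("indices:", (i1, i2), "mse:", float(((x - x_hat) ** 2).mean()))
```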
Model-Based Digital Image Halftoning
Professor: D. L. Neuhoff
New model-based approaches to halftoning are being
developed. They use well-known models of visual perception
along with models of printing that we have developed. One
approach minimizes the mean-squared error between the
perceived intensity of the continuous-tone image and the
perceived intensity of the printed halftoned image. Another
is an adaptation of the well-known error diffusion method to
include the printer model. Traditional approaches, for
example, ordered clustered dither, obtain robustness to
printer distortions, such as ink spreading, at the expense
of spatial resolution and increased visible graininess. In
contrast, our new methods exploit the printer distortions to
produce higher quality images than would be obtained with
"perfect" printers. Improvements due to model-based
halftoning are expected to reduce the resolution
requirements for laser printers used in high-quality
printing (e.g., 400 dots/inch instead of 600). Model-based
halftoning can be especially useful in transmission of
high-quality documents using high-fidelity, gray-scale image
encoders. In such cases, halftoning is performed at the
receiver, just before printing. Apart from coding
efficiency, this approach permits the halftoner to be tuned
to the individual printer, whose characteristics may vary
considerably from those of other printers, for example,
write-black vs. write-white laser printers.
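For concreteness, the well-known error diffusion baseline is sketched
below (standard Floyd-Steinberg weights; the model-based variant
described above additionally feeds a printer model's predicted output
back into the error computation, which this sketch omits):

```python
import numpy as np

def error_diffuse(img):
    """img: 2-D float array in [0, 1]; returns a binary halftone."""
    f = img.astype(float).copy()
    out = np.zeros_like(f)
    h, w = f.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - out[y, x]       # quantization error
            # Diffuse the error to unprocessed neighbors (FS weights).
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

ramp = np.tile(np.linspace(0, 1, 64), (16, 1))  # gray ramp test image
print(error_diffuse(ramp).mean())               # ~0.5: ink matches tone
```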
Image Coding
Graduate Students: M. Horowitz, M. Slyz
Professor: D. L. Neuhoff
Image coding is the process of creating binary image
representations with the dual goals of efficiency (as few
bits as possible in the representation) and accuracy (the
reproduced images should be as similar as possible to the
original). Two approaches are being pursued. The first
involves the use of a detailed model of intermediate-level
human visual sensors to construct transform codes that
hide quantization noise. The second involves the design of
lossless image codes based on adaptive prediction, with new
kinds of predictors and adaptation strategies. These
lossless image codes are intended for applications, such as
medical imaging, where an exact reproduction of the image is
required. On the other hand, the first project is intended
for more everyday applications where exact reproduction is
not necessary, but good quality and high efficiency are
needed.
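A minimal sketch of the lossless predictive-coding idea (a fixed
average-of-neighbors predictor on synthetic data; the new adaptive
predictors and adaptation strategies under study here are not shown):

```python
import numpy as np

# Prediction residuals concentrate near zero, so they entropy-code in
# fewer bits than raw pixels while remaining exactly invertible.

def entropy_bits(a):
    """Empirical zeroth-order entropy of the array's values."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / a.size
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
# Smooth synthetic "image": double cumulative sums correlate neighbors.
img = np.cumsum(np.cumsum(rng.integers(-2, 3, (64, 64)), 0), 1) % 256

pred = np.zeros_like(img)
pred[1:, 1:] = (img[:-1, 1:] + img[1:, :-1]) // 2   # mean of top & left
resid = (img - pred) % 256                          # invertible mod 256

print("raw pixel entropy: %.2f bits" % entropy_bits(img))
print("residual entropy:  %.2f bits" % entropy_bits(resid))
# A decoder recovers pixels exactly, row by row: img = (pred + resid) % 256,
# since the predictor uses only already-decoded (top/left) neighbors.
```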
Performance and Complexity of CDMA Networks with
Coded-Modulation
Graduate Students: M. Klimesh, W. Sung
Professors: W. Stark and J. T. Coffey
Sponsor: National Science Foundation
This research project has two parts. The first part deals
with decoding algorithms for worst-case interference. In
this work we have derived transmission and decoding
strategies that allow for maximum information transmission
or minimum error probability. These strategies allow the
transmitter to vary the transmission power pseudorandomly,
which improves the performance of maximum likelihood
decoding against the worst-case jammer. The worst-case
interference is derived, as well as the resulting
performance.
The second part involves developing novel decoders and
demodulators that achieve favorable tradeoffs of complexity
versus performance. Within this area a number of current
topics are being investigated, such as the design of minimal
trellises for block codes, fundamental limits for decoders
with a reduced number of states, decoding algorithms for
time-varying channels, and so on.
Spread-Spectrum in Faded Channels
Graduate Students: D. Goeckel and V. Chang
Professor: W. Stark
In this project we are examining the performance of
spread-spectrum systems in a faded channel. The fading is
such that to one spread-spectrum system, with a (relatively)
small bandwidth, it appears nonselective in frequency, while
to another spread-spectrum system it is frequency selective.
The goal is to
examine the performance of a direct-sequence spread-spectrum
system operating in the presence of multipath fading with
error control coding. The important issues are the
channel's memory, the selectivity of the channel, the
synchronization algorithm, and the decoding approach. These
issues are presently not well understood by system
designers.
Our preliminary results indicate that for a nonselective
channel larger spreading improves performance in spite of
the fact that more of the received energy is treated as
interference rather than part of a faded signal. In other
words, as the bandwidth increases the multipath channel
becomes more resolvable and signals that were unresolved
before can be resolved and rejected by the processing gain
of the spread-spectrum system.
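The resolvability argument can be made concrete with a back-of-envelope
calculation (hypothetical delays and bandwidths, not measurements from
this project): an echo is resolvable roughly when its excess delay
exceeds one chip time T_c = 1/W.

```python
import numpy as np

delays_ns = np.array([0.0, 120.0, 450.0])     # assumed path delays (ns)
for bw_mhz in (1.0, 5.0, 25.0):               # candidate spread bandwidths
    chip_ns = 1e3 / bw_mhz                    # T_c in ns = 1000 / MHz
    resolved = delays_ns[delays_ns >= chip_ns]
    print(f"W = {bw_mhz:5.1f} MHz  T_c = {chip_ns:6.1f} ns  "
          f"resolvable echoes: {len(resolved)} of {len(delays_ns) - 1}")
```

Widening W from 1 to 25 MHz turns both echoes from unresolved
interference into resolvable paths that the processing gain can reject.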
Communicating Over Power Lines
Graduate Student: Y-P. Wang
Professor: W. Stark
The goal of this research is to investigate different
alternatives for transmitting data over a power line. The
power line suffers from distortion because of the nonideal
characteristics of the medium. This distortion comes in two
forms: the first is attenuation that varies as a function of
frequency; the second is multipath due to reflections off
mismatched lines. Transmitting data over power lines
using spread-spectrum techniques can mitigate the distortion
present in the channel. We are investigating different
modulation and coding schemes with spread-spectrum for
transmitting data over the power line.
Optical Communications and Very Noisy Channels
Graduate Student: S. Lee
Professor: K. A. Winick
A very noisy channel is a channel whose capacity is close to
zero. Very noisy channels (VNCs) can be used to model many
physical channels operating at low signal-to-noise ratios.
More importantly, a large class of physical channels,
operating at arbitrary signal-to-noise ratios, can be
modeled as repeated uses of a VNC. In particular, this is
true for the infinite bandwidth additive white Gaussian
noise channel and the direct detection optical Poisson
channel. The error exponent indicates the best achievable
performance of any block code used over a communications
channel. A code which achieves this best performance is
said to be exponentially optimum. For most channels, the
error exponent is not known and can only be bounded. In
this research, the error exponent is computed exactly for a
large class of VNCs, and exponentially optimum codes are
explicitly constructed for these channels. These ideas are
applied to derive both the error exponent and exponentially
optimum codes for the direct detection,
polarization-switched, optical channel.
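For reference, the error exponent (reliability function) mentioned above
is defined by the exponential decay rate of the best achievable block
error probability at rate R:

```latex
% P_e^*(N, R): error probability of the best length-N, rate-R block code.
E(R) \;=\; \lim_{N \to \infty} -\frac{1}{N} \ln P_e^{*}(N, R),
\qquad\text{so}\qquad
P_e^{*}(N, R) \;\approx\; e^{-N E(R)} .
```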
Distance Bounds for Runlength-Constrained Codes
Graduate Student: S-H. Yang
Professor: K. A. Winick
One of the most basic problems in coding theory is to find
the largest code of a given length and minimum distance.
There are several known upper and lower bounds when the
codewords are unconstrained. In many digital transmission
and recording systems, considerations such as spectral
shaping, self-clocking, and reduction of intersymbol
interference require that the recorded sequences satisfy
special run-length constraints. In this research, distance
bounds and the construction of runlength-constrained
error-correcting codes are investigated. Upper bounds are
derived for the minimum achievable distance of
runlength-constrained sequences, and lower bounds are also
derived which include cost constraints.
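For readers unfamiliar with them, the sketch below states one common
convention for the (d, k) runlength constraint (interior runs of 0s
between 1s of length between d and k, boundary runs of at most k) and
counts the constrained sequences by brute force; it is background for
the bounds above, not the project's method:

```python
def satisfies_dk(bits, d, k):
    """Check the (d, k) constraint on runs of 0s."""
    run, seen_one = 0, False
    for b in bits:
        if b == 1:
            if seen_one and not (d <= run <= k):
                return False           # interior gap outside [d, k]
            if not seen_one and run > k:
                return False           # leading run of 0s too long
            seen_one, run = True, 0
        else:
            run += 1
    return run <= k                    # trailing run of 0s at most k

def count_dk(n, d, k):
    """Number of binary length-n sequences meeting the constraint."""
    return sum(
        satisfies_dk([(m >> i) & 1 for i in range(n)], d, k)
        for m in range(2 ** n)
    )

print(count_dk(8, 1, 3))   # exhaustive; a simple DP scales to large n
```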
Runlength-Constrained Write-Once Memories
Graduate Student: S-H. Yang
Professor: K. A. Winick
Sponsors: Office of Naval Research; Office of Naval
Technology
A write-once memory (WOM) is a storage medium where the
value in each bit location can be changed only once,
irreversibly, from the virgin 0-state to the permanent
1-state. Data
can be recorded by marking blank (i.e., 0-state) bits.
Those marked locations are stuck in the 1-state and hence
limit to some degree further use of the memory. Examples of
WOMs in the electronic and computer industry are punch
cards, paper tapes, PROMs and optical disks. Current
laser-optics technology produces the "write-once" CD-ROMs
that are especially suitable for storing archival data.
Usually this data must be periodically updated after it has
been initially recorded. If the write-once disk can be
re-used by implementing an efficient coding scheme, then the
expense of replacing the whole disk can be avoided. In this
research, the ultimate capacity of runlength-constrained
write-once memories is investigated using techniques from
information theory.
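The flavor of such re-use codes is captured by the classic Rivest-Shamir
write-once-memory code (a standard example from the WOM literature, not
this project's runlength-constrained construction): two successive
2-bit values are written into only three write-once cells.

```python
# First-generation codewords have weight <= 1; second-generation
# codewords are their complements (weight >= 2), so a changed value can
# always be rewritten using only 0 -> 1 transitions.
GEN1 = {0b00: 0b000, 0b01: 0b001, 0b10: 0b010, 0b11: 0b100}
GEN2 = {v: 0b111 ^ c for v, c in GEN1.items()}

def write(cells, value):
    """Return updated cells; never requires a 1 -> 0 flip."""
    if cells in (GEN1[value], GEN2[value]):
        return cells                         # value unchanged
    target = GEN1[value] if cells == 0b000 else GEN2[value]
    assert cells & (0b111 ^ target) == 0     # only 0 -> 1 transitions
    return target

def read(cells):
    table = GEN2 if bin(cells).count("1") >= 2 else GEN1
    return next(v for v, c in table.items() if c == cells)

cells = 0b000
cells = write(cells, 0b10)                   # first write
print(format(cells, "03b"), read(cells))     # 010 -> decodes to 2
cells = write(cells, 0b11)                   # second write, same cells
print(format(cells, "03b"), read(cells))     # 011 -> decodes to 3
```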
Corrugated Waveguide Filters
Graduate Students: C. Brooks and G. Vossler
Professor: K. A. Winick
Sponsor: National Science Foundation
Corrugated thin film waveguides play a major role in
lightwave devices. Applications include distributed
feedback lasing, bistable switching, phase matching in
nonlinear materials, pulse compression, grating coupling,
and optical filtering. In many of these applications, the
corrugation is periodic. In an aperiodically corrugated
thin film waveguide, however, the frequency dependent
coupling between waveguide modes can be used to produce a
filter which has a specified spectral response. Inverse
scattering techniques have been developed for designing such
filters, and efforts are currently underway to fabricate
these devices. Several new fabrication techniques are being
pursued. These include an optical direct write method,
based on photobleaching gamma ray-induced defect centers in
ion-exchangeable glasses, and a "bent waveguide" approach.
The first filter to be demonstrated will compensate for
dispersion-induced pulse spreading in optical fibers.
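For the periodic case, the wavelength of strongest reflection is set by
the standard first-order Bragg condition (the aperiodic filters above
generalize this relation through inverse scattering):

```latex
% Grating period \Lambda, mode effective index n_eff: the corrugation
% phase-matches the counter-propagating mode at the Bragg wavelength.
\lambda_B \;=\; 2\, n_{\mathrm{eff}}\, \Lambda
```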
Rare Earth-Doped Waveguide Lasers
Graduate Students: G. Vossler and C. Brooks
Professor: K. A. Winick
Sponsors: National Science Foundation;
NSF Center for Ultrafast Optical Science;
Smith Industries;
IMRA America, Inc.
Recently, the development of rare earth-doped fiber lasers
has received considerable attention. These fiber lasers
exhibit a host of desirable properties. First, they permit
wide tuning ranges and short pulse generation because of
their broad emission lines. Second, the pump powers required
for lasing are low, since the pump beam is strongly confined
to a small volume. Finally, rare earth-doped lasers offer
better frequency stability, longer lifetimes, and less
temperature sensitivity than semiconductor devices. These
traits make them promising devices for telecommunications,
sensing, and spectroscopic applications. Glass waveguide
lasers on planar substrates are a natural extension of the
fiber technology. Unlike a fiber, a planar substrate makes
it possible to monolithically integrate multiple components
onto a single glass substrate. These components could
include distributed feedback laser mirrors, grating
couplers, mode-lockers, and nonlinear elements. We have
fabricated neodymium-doped, channel, waveguide lasers in
special glass melts and have demonstrated the first glass
integrated optic distributed Bragg reflector laser. Efforts
are currently under way to passively mode-lock these lasers
and to extend these results to rare earth-doped lithium
niobate hosts. Novel sensors, based on this technology, are
also under development.