%\documentstyle[epsf,times,11pt]{article}
\documentstyle[times,11pt]{article}
\topmargin = -.5in
\textheight =  9.0in
\oddsidemargin = 0in
\evensidemargin = 0in
\textwidth = 6.5 in

\title{Parallel Interactive Image Processing\\
A Research Direction}

\author{Peter A. Dinda}

\begin{document}
\maketitle

\section{Introduction}

The widespread popularity of personal computers has made image
processing available to a wide range of professional and amateur
photographers, illustrators, and designers.  Adobe's Photoshop
application is the most influential and widely used of these
programs, and it has set the standard for how users interact with
image processing software on personal computers.  In large part, this
interface parrots that of the traditional chemical darkroom and the
designer's toolkit.  The primary notion of the interface is that the
user has a number of tools, each of which can be applied over
arbitrary regions of the image.  Feedback is immediate.

Traditional image processing, which is typically more concerned with
throughput than interactivity, has proven to be a fertile ground for
parallel computing.  In contrast, Photoshop-like interactive image
processing remains sequential, except on some specialized DSP
hardware.  However, the image sizes that Photoshop-like programs must
handle are rapidly growing, with even low-end scanning services such
as Kodak's PhotoCD generating massive images.  Given the interactive
requirements of these programs, massive amounts of RAM are required.
As Wood and Hill argue, when large amounts of memory are required
anyway, the additional cost of adding parallel processing is
marginal.


\section{Description}


\subsection{Tools}

The primary metaphor of interactive image processing programs such as
Photoshop is that of applying tools to portions of the image.  The
tools are analogous to those one would expect in a darkroom or a
designer's toolbox.  The portion of the image that a tool affects can
be selected in several ways.  For some tools (for example, a brush or
pen), the portion affected is implicitly specified by tool properties
(size) and where the user points.  Other tools work within an
arbitrary clipping region specified by the user.

\subsection{Filters}

Many of the tools that work within a clipping region are filters.  
For example, Gaussian blurring is a commonly used filter.   
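To make the filter-within-a-clipping-region idea concrete, the
following is a minimal sketch of a separable Gaussian blur that
touches only a user-selected rectangle.  The grayscale list-of-rows
image representation and the function names are illustrative
assumptions, not the internals of Photoshop or any particular
program.

```python
import math

def gaussian_kernel(radius, sigma):
    """1-D Gaussian kernel, normalized to sum to 1."""
    weights = [math.exp(-(i * i) / (2.0 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur_region(image, x0, y0, x1, y1, radius=2, sigma=1.0):
    """Separable Gaussian blur applied only to the clip rectangle
    [x0,x1) by [y0,y1); pixels outside the region are untouched."""
    kernel = gaussian_kernel(radius, sigma)
    height, width = len(image), len(image[0])

    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    # Horizontal pass over the clipped pixels only.
    temp = [row[:] for row in image]
    for y in range(y0, y1):
        for x in range(x0, x1):
            temp[y][x] = sum(k * image[y][clamp(x + i, 0, width - 1)]
                             for i, k in zip(range(-radius, radius + 1), kernel))
    # Vertical pass over the same region.
    out = [row[:] for row in temp]
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = sum(k * temp[clamp(y + i, 0, height - 1)][x]
                            for i, k in zip(range(-radius, radius + 1), kernel))
    return out
```

Because the filter is separable, the work is two 1-D convolutions per
clipped pixel rather than one 2-D convolution, which matters at the
image sizes discussed below.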


\subsection{Feedback}

The key attribute of interactive image processing is immediate visual
feedback.  The image is displayed in one or more windows at different
degrees of magnification. The user applies tools, many of which are
the digital analogs of darkroom and designer tools, directly to the
image and expects results in an amount of time comparable to what the
analogous tool would require.  For example, the user might use a
dodging tool to lighten a portion of the image.  The longer the
dodging tool is held over the image, the lighter the area gets.
Furthermore, the user can move the tool to feather the change into the
surrounding area of the image, just as he would if he were using an
enlarger.
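A minimal sketch of how such a tool might accumulate its effect
follows.  It is not Photoshop's actual algorithm; the circular
footprint, the per-tick strength, and the linear feathering are all
illustrative assumptions.

```python
import math

def dodge_tick(image, cx, cy, radius, strength=0.1):
    """One 'tick' of a dodging tool centered at (cx, cy): lighten
    pixels under the circular footprint, feathering the effect from
    full strength at the center to zero at the edge."""
    out = [row[:] for row in image]
    for y in range(len(image)):
        for x in range(len(image[0])):
            d = math.hypot(x - cx, y - cy)
            if d <= radius:
                feather = 1.0 - d / radius  # 1 at center, 0 at edge
                lift = strength * feather * (255.0 - image[y][x])
                out[y][x] = min(255.0, image[y][x] + lift)
    return out
```

Holding the tool over a spot corresponds to calling `dodge_tick`
repeatedly: each call lightens further, and the feathered edge lets
the user blend the change into the surrounding area.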


\section{Motivation}

There are several motivations for parallel interactive image
processing.  The primary one is that interactive image processing
involves very large images.  Large images, combined with the demand
for immediate feedback, mean that massive amounts of memory are
required.  Instead of configuring one special machine with this
memory, parallel interactive image processing would allow the memory
to be distributed over many machines, benefiting a larger number of
users.  The second motivation is performance and usability ---
parallelism will let users work interactively with full-resolution
images, which they cannot do today.  A final motivation is the
emergence of networks of personal computers as an inexpensive
parallel platform.



\subsection{Memory requirements}

The images that are manipulated by professional photographers and
designers are massive.  In large part this is because photographic
film has tremendous resolution, is used in large formats, and can be
scanned at high or even full resolution.  When the output medium is
film, high resolution is maintained throughout the manipulation
process, but even for non-film output media, image sizes are large.

\subsubsection{Film resolution}
Photographic film has tremendous resolution.  Consider the resolution of 
several different kinds of commonly used films:\\
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
 & & Low Contrast & High Contrast \\
Film type & Example & lines/mm & lines/mm\\
\hline
ISO 100, C-41 process color negative & Kodacolor & 63 & 100 \\
ISO 100---160, E-6 process color slide & Ektachrome & 63 & 100 \\
ISO 25---64, K-14 process color slide & Kodachrome & 63 & 100\\
ISO 400 black and white negative & Kodak TMAX & ? & 200\\
\hline
\end{tabular}
\end{center}
Of these, Kodacolor-like films are most commonly used by amateurs,
but are seeing increasing use by professionals.  Ektachrome and
Kodachrome are the films most commonly used in professional
commercial photography.

\subsubsection{Film Formats}
Film formats, especially those used in professional and artistic 
photography are large.  Consider the memory requirements necessary for 
capturing all the details of a 100 line/mm slide in each of the following 
film formats.  Note that the Nyquist criteria is applied, so the sample 
rate is 200 dots/mm.\\
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
Film & Typical use & Size & Memory \\
format & of format & in pixels & 4 bytes/pixel \\
\hline
24mm by 36mm & Standard 35mm Format & 4800 by 7200 & 132 MB \\
6cm by 6cm & Portraiture, commercial & 12000 by 12000 &  550 MB \\
6cm by 7cm & Portraiture, commercial & 12000 by 14000 & 640 MB \\
4in by 5in & Commercial, artistic & 20320 by 25400 & 1969 MB \\
\hline
\end{tabular}
\end{center}
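To make the first row concrete, sampling a 24mm by 36mm frame at 200
dots/mm in each dimension yields
\[
(24 \times 200) \times (36 \times 200) = 4800 \times 7200
\approx 3.46 \times 10^{7} \mbox{ pixels},
\]
and at 4 bytes/pixel,
\[
3.46 \times 10^{7} \times 4 \approx 1.38 \times 10^{8} \mbox{ bytes}
\approx 132 \mbox{ MB}.
\]
The remaining rows follow from the same calculation.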

Each of these film formats (and other, larger formats) can be scanned
at full resolution with available hardware.  Drum-scanner services,
which can capture the full resolution of the film for each of the
formats described above, have long been available from service
bureaus.  Although these services are costly, the cost of scanning
negatives and slides at high resolutions is rapidly declining.  For
example, consider Kodak's PhotoCD services.  Consumer PhotoCD
provides 2048 by 3072 scans (24 MB per image) of 35mm formats at a
cost of \$1 per scan.  Professional PhotoCD provides 4096 by 6144
scans (96 MB) for formats up to 4in by 5in.  When higher-density
CD-Rs become available, inexpensive scans at full resolution will
likely follow.

\subsubsection{Non-film media}
One can argue that it is the resolution and size of the output
medium, not the input medium, that determines image sizes.  If the
output medium for the work is film, then clearly image sizes will be
huge.  However, even for non-film target media, which have lower
resolutions, image sizes are massive because the physical formats are
larger.  For example, a 300 dpi 8.5in by 11in magazine advertisement
in 4-byte CMYK color still requires over 32 MB.  Of course, if
producing the advertisement involves montage or other multi-image
work, this requirement is multiplied by the number of images used.
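The figure follows directly from the page dimensions and sample rate:
\[
(8.5 \times 300) \times (11 \times 300) = 2550 \times 3300
\approx 8.4 \times 10^{6} \mbox{ pixels},
\]
which at 4 bytes/pixel is roughly $3.4 \times 10^{7}$ bytes, or just
over 32 MB.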

\subsubsection{Memory}

Currently, users manipulate large images on machines specially configured 
for that purpose.  These machines are most often personal computers 
equipped with massive amounts of memory and fast hard disks and running 
common operating systems such as Windows and MacOS.  Although this approach 
works, it concentrates these resources in one machine --- resources that 
cannot be shared when the machine is not being used for image processing.
By distributing this memory resource across several machines of an
organization, we can increase its benefits.  

\subsection{Parallelism for interactivity}

When working with large images, sequential interactive image
processing programs bog down and lose much of their interactivity.
This is a rather alien effect in the context of the environment these
programs attempt to emulate --- for example, increasing the contrast
of an image is no slower with a 6cm by 6cm negative than with a 35mm
negative in a darkroom, but it is much slower in Photoshop.

Parallel interactive image processing would exploit the natural
parallelism of these operations in order to make manipulating large
images as interactive as manipulating smaller ones.
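The natural parallelism here is data parallelism over regions of the
image.  The following sketch partitions an image into horizontal
strips and applies a per-pixel operation to the strips concurrently;
the contrast-stretch operation and the strip count are illustrative
assumptions, and a real system would distribute the strips across
machines rather than local threads, but the decomposition is the
same.

```python
from concurrent.futures import ThreadPoolExecutor

def stretch_contrast(strip, factor=1.5, pivot=128.0):
    """Increase contrast of one strip of rows about a mid-gray pivot."""
    return [[min(255.0, max(0.0, pivot + factor * (p - pivot)))
             for p in row]
            for row in strip]

def parallel_apply(image, op, workers=4):
    """Split the image into `workers` row strips, apply `op` to each
    strip in parallel, and reassemble the result in order."""
    n = len(image)
    bounds = [(i * n // workers, (i + 1) * n // workers)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        strips = pool.map(op, [image[lo:hi] for lo, hi in bounds])
    return [row for strip in strips for row in strip]
```

Because the operation is applied independently per pixel, the strips
require no communication, and the result is identical to the
sequential computation.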


\subsection{Networks of PCs}

Today's personal computers have performance levels comparable to
entry-level and midrange workstations.  Further, they either have
Ethernet built in or can be upgraded at marginal cost.  Higher speed
networks are finally becoming widely available for these
environments.  Soon, there will be a successor to Ethernet (likely
100 Mbps Ethernet) which will achieve at least an order of magnitude
improvement in bandwidth at Ethernet-like price points.  We argue
that parallel interactive image processing is an excellent and
important application for this inexpensive, widely used hardware.

\section{Properties}

Interactive image processing demands immediate feedback to the user
while providing little a priori knowledge of what tools the user
will use or what regions of the image they will be applied to.  


\subsection{Little a priori knowledge}

The most interesting property of interactive image processing is that
there is little a priori knowledge.  In fact, the only a priori
knowledge is the set of tools available.  The user decides in real
time what tool will be applied and what portion of the image will be
affected.

\subsection{Latency constraints important (user interface)}

Despite the lack of a priori knowledge about when operations will be
performed and what regions of the image they will affect, the latency
of operations must be very low in order to preserve interactivity.
Indeed, for some tools, the latency must be low in order for the tool
to be useful at all.  For example, the dodging tool described above
is useless if its effect cannot be gradually and visibly accumulated
on the image.

\subsection{Image properties allow tradeoffs}



\subsection{Commercial and common hardware (PCs)}


\section{Research Issues}

\subsection{Scalable design}

\subsection{Parallel language for interactive image processing}

\subsection{Data distribution strategy}

\subsection{Explore tradeoffs}
% image quality, network bw, interface latency, local computation
% total computation and network bw

\subsection{Redundancy/Fault tolerance}
% What properties of image processing can we exploit to achieve
% acheive fault tolerance on commercial hardware and operating systems
% with minimal network traffic, etc.

\section{Conclusion}

\end{document}

