Newsgroups: sci.image.processing
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!howland.reston.ans.net!math.ohio-state.edu!uwm.edu!reuter.cse.ogi.edu!news.ssd.intel.com!ornews.intel.com!chnews!ennews!enuxsa.eas.asu.edu!rutledge
From: rutledge@enuxsa.eas.asu.edu (Shawn T. Rutledge)
Subject: un-dithering
Message-ID: <DAMw9s.76E@ennews.eas.asu.edu>
Sender: news@ennews.eas.asu.edu (USENET News System)
Organization: Arizona State University
Date: Fri, 23 Jun 1995 16:25:04 GMT
X-Newsreader: TIN [version 1.2 PL2]
X-Nntp-Posting-Host: enuxsa.eas.asu.edu
Lines: 80

I posted on this a week or so ago and checked the newsgroups a couple of
times, but saw no responses.  I could have missed some, though, so if you
replied before, I'm sorry; next time please forward me a copy of the
message as well.  Thanks for your help...

Here is the previous message:

Subject: Wanted: descreening algorithm
Newsgroups: sci.image.processing

Hi.  At the last place I worked, we had a high-end Arcus (I think) scanner,
and its scanning software had an option to undo halftone dithering and
restore the original photo.  Since I haven't seen this sort of thing done
by any common image processing software, like Photoshop, I suspected
at the time that it was hardware-related, i.e., maybe the scanner had to
line up its sample cells with the halftone dots for its algorithm
to work.  However, thinking about what would be involved, it seems
like it could be done afterwards, provided that the scan resolution is
quite a bit higher than the halftone frequency.  Say you scan
a newspaper photo that has a 133 lpi halftone frequency, at a resolution
such that for every halftone dot there is a 3 x 3 block of pixels in the
scan (that should be 399 dpi, right?).  Suppose for simplicity that it happens
to line up such that when you pick the first 3 x 3 block from the top
left of the scan, the halftone dot is right in the middle.  Now, the
algorithm for dithering is to pick the location for a halftone dot,
average the grey level of all the surrounding pixels, and make the dot's
size proportional to that average grey level.  So we want to detect the
dot size, and it would seem we could only hope to restore the average
grey level for that 3 x 3 block of pixels.  Since the middle pixel
of the 3 x 3 block is all black (it is centered over the halftone
dot), and the surrounding pixels fall on the edges of the dot, those
edge pixels ought to be measured by the scanner as grey levels
linearly proportional to how much of their area is blacked out by the
edge of the halftone dot versus falling on clean paper.  So how
would I then compute the size of the dot, and thus the greyscale of the
original photo?
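To make the question concrete, here is a little numpy sketch of what I
have in mind (assuming white scans as 255, black as 0, and a perfectly
aligned 3 x 3 cell).  One thing I notice writing it down: the mean of the
nine samples already measures the fraction of the cell left white, so it
may recover the encoded grey level without ever computing the dot radius
explicitly:

```python
import numpy as np

def block_grey(block):
    """Estimate the original grey level of one halftone cell.

    block: 3x3 array of scanned pixel values, 0 = black, 255 = white.
    The mean of the nine samples is (to first order) the fraction of
    the cell's area left white, so it directly recovers the average
    grey level the halftone dot was encoding -- no explicit dot-size
    computation needed.
    """
    return float(np.mean(block))

# A cell whose centre pixel is solid black and whose edge pixels are
# half-covered by the dot encodes a fairly dark grey:
cell = np.array([[128, 128, 128],
                 [128,   0, 128],
                 [128, 128, 128]], dtype=float)
print(block_grey(cell))  # roughly 114 -- a fairly dark grey
```

If that reasoning holds, the dot size question answers itself: area
coverage, not radius, is what the dither encoded in the first place.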

When I have computed that, I will have one greyscale value for that 3 x 3
block of pixels.  Next, I was thinking of using some kind of interpolation
to simulate more resolution; i.e., interpolate the pixels around the edges
of the block relative to the neighboring blocks.  What kind of interpolation
would be most appropriate?  Is it appropriate at all, or is it true that
the maximum true-to-the-original resolution which can be recovered is
equal to the screen frequency?  Somehow it seemed this scanner was able to
magically restore more resolution than that.  But if it's all just
interpolated anyway, perhaps I should just scan at 133 dpi, so that the
greyscale will be proportional to the halftone dot size; apply some
kind of correction for the greyness of the paper and the spreading of the
ink (on the more absorbent papers the halftone dots tend to spread and
touch each other sooner than they ought to, thus limiting the range of
greyscales available towards the black end); and be happy.  Or, if I want
more resolution, let Photoshop do the interpolation by simply scaling
up the 133 dpi image.
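The whole pipeline I'm imagining, minus the paper/ink correction, would
look something like this sketch (pixel replication stands in for whatever
interpolation Photoshop would do; the 3-pixel cell size is the assumption
from my 399 dpi example above):

```python
import numpy as np

def descreen(scan, cell=3):
    """Collapse each cell x cell block of the scan to its mean grey,
    then blow the result back up to the scan's size by plain pixel
    replication.  (A real pass would substitute bilinear or bicubic
    resampling for the replication step.)"""
    h, w = scan.shape
    h -= h % cell          # crop any ragged edge so blocks tile evenly
    w -= w % cell
    blocks = scan[:h, :w].reshape(h // cell, cell, w // cell, cell)
    means = blocks.mean(axis=(1, 3))           # one grey per halftone cell
    return np.kron(means, np.ones((cell, cell)))

scan = np.arange(36, dtype=float).reshape(6, 6)
out = descreen(scan)
print(out.shape)  # (6, 6)
```

Each output block carries a single grey, which is exactly the "one
greyscale value per 3 x 3 block" situation described above; the open
question is still what interpolation to hang on the back end.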

So basically, I have at my disposal an HP IIcx scanner (with a maximum
optical resolution of 400 x 400 dpi, and quite a bit more "interpolated"
resolution, whatever that is), Photoshop, and possibly Khoros.  My scanner
driver, unlike the one for the Arcus, doesn't have a descreening option
built in, and I want to do it myself somehow.  Any ideas?

-----

Update:  I have learned one trick from Kai's Power Tips & Tricks that sort
of works, but not well enough, and it requires too much judgement on my
part.  If you use Gaussian blur in Photoshop, it smears the dots around
and makes the transition from the center of each halftone dot to the
whitespace between the dots more gradual.  If you are lucky enough to have
solid blocks of color, rather than subtle transitions of shading (as
would be found in a comic strip, say, as opposed to a dithered photograph),
you can then selectively posterize the range of colors or greyscales that
occur within each block, and repeat for the other blocks.  But doing this
to photos makes them look posterized.  I need something that can
accurately reproduce lifelike subtle shading as well.
--
  _______                KB7PWD - now on packet!      shawn.rutledge@asu.edu
 (_  | |_)               html: http://enuxsa.eas.asu.edu/~rutledge/home.html
 __) | | \__________________________________________________________________
* cyberspace * capitalism * Khoros * ARS * Interpedia * fusion * techno * 
