Distribution: world
Organization: Gate at Lucky Carrier BBS
Newsgroups: comp.ai.neural-nets
X-FTN-MsgId: 2:463/110.4 2f6888d0
X-FTN-SEEN-BY: 46/128 463/21 68 86 110 4000
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornell!travelers.mail.cornell.edu!news.kei.com!news.mathworks.com!usenet.eel.ufl.edu!spool.mu.edu!howland.reston.ans.net!EU.net!news.eunet.fi!KremlSun!satisfy.kiae.su!carrier.kiev.ua!luckyua!f128.n46.z2!f68.n463.z2!f110.n463.z2!not-for-mail
From: Dmitri_Rachkovskij@p4.f110.n463.z2.fidonet.carrier.kiev.ua (Dmitri Rachkovskij)
Subject: Comparison of classifiers: MNIST data
Message-ID: <2_463/110_4_2f6888d0@fidonet.org>
Date: Thu, 16 Mar 95  18:49:00 +0200
X-Gate: GooGate ver 2.10  Mar 09 1995
Lines: 49

Dear Netters,

Our group at the Institute of Cybernetics, Kiev, Ukraine, works on
different aspects of neural networks. One direction of our work is
high-performance neural network classifiers.

Recently we have developed several such classifiers. To estimate
their performance, we could not use the widespread benchmark problems
(such as XOR, the double spiral, etc.), because these problems are too
simple to "feel the difference" between good classifiers. So we
designed test generators that produce complicated data sets (with a
variable number of features, classes, and samples, and variable
complexity of class boundaries). With these data we tested our
classifiers as well as nearest-neighbor, potential-function, and
(enhanced) backprop classifiers, and obtained good results (for our
classifier). However, such benchmarks are not entirely fair, since WE
chose the parameters of the classifiers that are NOT OURS (and maybe
not the optimal ones).
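A generator of the kind described above could be sketched as follows. This is only an illustrative reconstruction, not our actual generator: the function name, its parameters, and the Gaussian-cluster scheme are all assumptions. The idea is that class-boundary complexity is tunable by varying the cluster spread relative to the distance between class centers.

```python
import random

def make_dataset(n_features=4, n_classes=3, n_samples=300,
                 spread=1.0, seed=0):
    """Generate a labeled data set: one Gaussian cluster per class.

    A small `spread` relative to the inter-center distances gives
    simple, well-separated class boundaries; a large `spread` makes
    the classes overlap, so boundary complexity is adjustable.
    All names and defaults here are illustrative.
    """
    rng = random.Random(seed)
    # Random class centers, scaled up for separation.
    centers = [[rng.uniform(0.0, 10.0) for _ in range(n_features)]
               for _ in range(n_classes)]
    X, y = [], []
    for i in range(n_samples):
        c = i % n_classes  # balanced classes
        X.append([rng.gauss(mu, spread) for mu in centers[c]])
        y.append(c)
    return X, y

X, y = make_dataset()
```

A classifier under test would then be trained on part of (X, y) and scored on the rest, with `spread`, `n_features`, and `n_classes` swept to map out where each method breaks down.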

One way out of this situation may be to obtain results on rather
complex real-world data that are commonly used for benchmarking.
That would be very interesting, though the benchmarking results would
be obtained for the data+classifier combination, and not for the
classifier itself.

Regretfully, we have practically no access to such data here, or even
to the benchmarking results. However, I recently read a paper:

Leon Bottou, Corinna Cortes, John S. Denker, Harris Drucker, Isabelle
Guyon, L.D.Jackel, Yann LeCun, Urs A. Muller, Eduard Sackinger, Patrice
Simard, and Vladimir Vapnik.
Comparison of Classifier Methods: A Case Study in Handwritten Digit
Recognition. Proc. of the 12th IAPR International Conference on Pattern
Recognition. Jerusalem, Israel, October 9-13, 1994, vol.2, pp. 77 - 82.

The paper provides benchmark measurements for several classifier
algorithms on a database extracted at AT&T Bell Laboratories from the
NIST handwritten characters database. They call their data MNIST
(Modified NIST) data.
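If the MNIST files can be obtained, a reader will need to parse them. A minimal sketch of reading an image file, assuming the IDX binary layout that the MNIST distribution uses (big-endian header: magic number 2051, then counts of images, rows, and columns, then unsigned-byte pixels), might look like this. The function name and the demo stream are my own illustration, not part of the distribution.

```python
import io
import struct

def read_idx_images(f):
    """Parse an MNIST-style IDX image stream into a list of flat
    pixel lists (one list of 0..255 ints per image)."""
    # Header: four big-endian unsigned 32-bit integers.
    magic, n_images, rows, cols = struct.unpack(">IIII", f.read(16))
    assert magic == 2051, "not an IDX image file"
    size = rows * cols
    # Each image is rows*cols unsigned bytes, stored consecutively.
    return [list(f.read(size)) for _ in range(n_images)]

# Demo on a tiny synthetic IDX stream: two 2x2 "images".
raw = struct.pack(">IIII", 2051, 2, 2, 2) + bytes(range(8))
images = read_idx_images(io.BytesIO(raw))
```

The label files use the same scheme with magic number 2049 and one byte per label, so the same approach applies.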

Could you advise me how to get the MNIST data?
I will appreciate any answer
(really, I don't know whether anybody can see this letter).

Thank you very much,
Dmitri Rachkovskij
dar@infrm.kiev.ua


--- -[Cyber]-
 * Origin: -[Cyber]- (FidoNet 2:463/110.4)
