Jia-Yu Pan's Publications


VideoCube: a novel tool for video mining and classification

Jia-Yu Pan and Christos Faloutsos. VideoCube: a novel tool for video mining and classification. In Proceedings of the Fifth International Conference on Asian Digital Libraries (ICADL 2002), 2002.
Singapore, December 11-14, 2002

Download

[PDF] [gzipped postscript]

Abstract

We propose a new tool to classify a video clip into one of $n$ given classes (e.g., "news", "commercials", etc.). The first novelty of our approach is a method to \textit{automatically} derive a ``vocabulary'' from each class of video clips, using the powerful method of ``Independent Component Analysis'' (ICA). Second, the method is \textit{unified}: it works on both video and audio information, and the resulting vocabulary describes not only the still images, but also the motion, as well as the audio part. Furthermore, this vocabulary is \textit{natural}, in that it is closely related to human perceptual processing. More specifically, every class of video clips gives a list of ``basis functions'', which can compress its members very well. Once we represent video clips in these ``vocabularies'', we can do classification and pattern discovery. For the classification of a video clip, we propose to use compression: we test which of the ``vocabularies'' compresses the video clip best, and we assign the clip to the corresponding class. For data mining, we inspect the basis functions of each video genre class, and can thus figure out whether the given class has, e.g., fast motions/transitions, more harmonic audio, etc. Experiments on real data of 62 news clips and 43 commercials show that our method achieves overall $\approx$81\% accuracy.
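
The two key steps in the abstract, learning an ICA "vocabulary" of basis functions per class and classifying a clip by which vocabulary compresses it best, can be sketched in Python. This is a minimal illustration rather than the paper's implementation: it assumes patches (flattened space-time or audio windows) have already been extracted as feature vectors, uses scikit-learn's FastICA for the ICA step, and uses reconstruction error as a stand-in for compression quality; class names and sizes are toy assumptions.

# Minimal sketch of ICA "vocabulary" learning and compression-style
# classification, under the assumptions stated above.
import numpy as np
from sklearn.decomposition import FastICA

def learn_vocabulary(patches, n_basis=20, seed=0):
    # Fit ICA basis functions ("vocabulary") to patches from one video class.
    # patches: array of shape (n_patches, n_features).
    ica = FastICA(n_components=n_basis, random_state=seed, max_iter=1000)
    ica.fit(patches)
    return ica

def reconstruction_error(ica, patches):
    # How poorly this vocabulary encodes the patches (lower = compresses better).
    codes = ica.transform(patches)        # project patches onto the basis functions
    recon = ica.inverse_transform(codes)  # rebuild patches from the codes
    return float(np.mean((patches - recon) ** 2))

def classify_clip(clip_patches, vocabularies):
    # Assign the clip to the class whose vocabulary reconstructs it best.
    errors = {label: reconstruction_error(ica, clip_patches)
              for label, ica in vocabularies.items()}
    return min(errors, key=errors.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins for patch matrices extracted from training clips.
    news_patches = rng.normal(size=(500, 64))
    commercial_patches = rng.normal(size=(500, 64)) ** 3  # different statistics
    vocabularies = {
        "news": learn_vocabulary(news_patches),
        "commercials": learn_vocabulary(commercial_patches),
    }
    test_patches = rng.normal(size=(100, 64))
    print(classify_clip(test_patches, vocabularies))

Reconstruction error is only one possible proxy for "compresses best"; the paper frames the decision in terms of compression with each class's basis functions, and any comparable coding-cost measure could be substituted in the sketch above.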

BiBTeX Entry

@InProceedings{icadl2002VideoCube,
  author =	 {Jia-Yu Pan and Christos Faloutsos},
  title =	 {VideoCube: a novel tool for video mining and classification},
  booktitle =	 {Proceedings of the Fifth International Conference on Asian Digital Libraries (ICADL 2002)},
  year =	 2002,
  wwwnote =	 {Singapore, December 11-14, 2002},
  abstract =	 {We propose a new tool to classify a video clip into one of
$n$ given classes (e.g., "news", "commercials", etc.).
The first novelty of our approach is a method to \textit{automatically} derive 
a ``vocabulary'' from each class of video clips, 
using the powerful method of ``Independent Component
Analysis'' (ICA).
Second, the method is \textit{unified}: it works on both video and audio information, and the resulting vocabulary describes not only the still images, but also the motion, as well as the audio part.
Furthermore, this vocabulary is \textit{natural}, in that it is closely related to human perceptual processing.
More specifically, every class of video clips gives a list
of ``basis functions'', which can compress its members very well.
Once we represent video clips in ``vocabularies'', we can do classification
and pattern discovery.
For the classification of a video clip, we propose to use
compression: we test which of the ``vocabularies'' can compress
the video clip best, and we assign it to the corresponding class.
For data mining, we inspect the basis functions
of each video genre class, and thus we can figure out whether the given class
has, e.g., fast motions/transitions, more harmonic audio, etc. Experiments on real data of 62 news clips and 43 commercials show that our method achieves overall $\approx$81\% accuracy.
},
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Aug 28 20:45:45 EDT 2002