Memory-based Time Series Detection
In many cases, a time series is a sequence of data points in which the
time order matters. Each data point consists of an input and an
output. The time order matters because, at a given time instant, the
output is determined not only by the current input but also by delays
and feedbacks, i.e., some of the previous inputs and outputs. If we
expand the input to contain these delays and feedbacks along with the
current input, then the output is fully determined by the expanded
input. We can therefore transform a time series into a set of data
points whose time order no longer matters.
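The expansion step above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function name `expand` and the parameters `n_delays` and `n_feedbacks` (how many previous inputs and outputs to append) are ours, and the series is assumed to be scalar-valued.

```python
import numpy as np

def expand(inputs, outputs, n_delays, n_feedbacks):
    """Turn a time series into a set of order-free expanded data points.

    The expanded input at time t concatenates the current input with the
    previous n_delays inputs (delays) and the previous n_feedbacks
    outputs (feedbacks), so the target output depends only on the
    expanded input.
    """
    start = max(n_delays, n_feedbacks)  # earliest t with full history
    X, y = [], []
    for t in range(start, len(inputs)):
        delays = inputs[t - n_delays:t]          # previous inputs
        feedbacks = outputs[t - n_feedbacks:t]   # previous outputs
        X.append(np.concatenate(([inputs[t]], delays, feedbacks)))
        y.append(outputs[t])
    return np.array(X), np.array(y)
```

Each expanded point can then be treated independently of the others, since the relevant history is carried inside the point itself.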
Given a time series, a system detector's job is to determine which
category the underlying generating system of the time series belongs
to. To do so, our method transforms the time series into a set of
expanded data points, employs a memory-based classifier to compute a
sequence of probabilities measuring how likely each data point belongs
to a given category, and finally uses likelihood analysis and
hypothesis testing to summarize these classification results.
Naturally, our method can also handle the detection of non-time-series
data. Compared with other methods, our new system detection is simple
to understand, easy to implement, robust across different types of
systems, and adaptive to training data points in memory with varying
density and/or noise levels. It can distinguish the various categories
of the underlying system without requiring any fixed thresholds. It is
efficient not only because it processes the classifications quickly,
but also because it can focus on the promising categories and neglect
the others from the very beginning. In our empirical evaluation, our
method tends to be more accurate than the other methods.
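The detection pipeline can be sketched as below. This is an illustrative simplification under stated assumptions: we stand in a k-nearest-neighbor vote for the memory-based classifier and a sum of per-point log-probabilities for the likelihood analysis; the hypothesis-testing step and the early pruning of unpromising categories are omitted, and all names are ours.

```python
import numpy as np

def knn_category_probs(memory_X, memory_cat, query, k=5):
    """Estimate P(category | query) from the k nearest points in memory."""
    dists = np.linalg.norm(memory_X - query, axis=1)
    nearest = memory_cat[np.argsort(dists)[:k]]
    return {c: np.mean(nearest == c) for c in np.unique(memory_cat)}

def detect(memory_X, memory_cat, series_X, k=5, eps=1e-6):
    """Accumulate per-point log-likelihoods over the expanded data
    points of a series and return the most likely category."""
    loglik = {c: 0.0 for c in np.unique(memory_cat)}
    for x in series_X:
        probs = knn_category_probs(memory_X, memory_cat, x, k)
        for c in loglik:
            loglik[c] += np.log(probs[c] + eps)  # eps avoids log(0)
    return max(loglik, key=loglik.get)
```

For example, with a memory holding labeled expanded points from several known system categories, `detect` scores an unlabeled series against each category and reports the one with the highest accumulated log-likelihood, with no fixed decision threshold involved.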