Time-Sensitive (Temporal) Web Image Ranking and Retrieval



  • Gunhee Kim and Eric P. Xing
    Time-Sensitive Web Image Ranking and Retrieval via Dynamic Multi-Task Regression
    Sixth ACM International Conference on Web Search and Data Mining (WSDM 2013), Rome, Italy, February 5-8, 2013. (Acceptance = 73/387 ~ 18.9%)
    [Paper (PDF)] [BibTeX] [Presentation (PPTX)] [Poster (PDF)]

Matlab example code

  1. I am currently working on a journal version of this work. I will likely post the Matlab code after the journal submission.


Research Motivation

In this paper, we study an additional aspect of improving image search quality that has been largely ignored in image retrieval research: the temporal dynamics of image collections. From experiments on more than seven million Flickr images, we found three reasons why discovering temporal patterns in Web image collections benefits existing image retrieval systems. See the example of the cardinal query in the figure below.

  1. Knowing when a search takes place is useful for inferring users' implicit intents. (e.g., cardinal queries in summer and winter are likely to be associated with baseball and football, respectively, according to the sports' scheduled seasons.)

  2. Timing suitability can be used as a complementary attribute to relevance. (e.g., we can rank images of a cardinal bird in a snowy field higher in winter, but images of baby cardinals or eggs higher in summer.)

  3. Temporal information is synergistic with personalized image retrieval.


Figure 1. Overview of time-sensitive Web image retrieval with the query cardinal.
(a)-(b) Top ten images retrieved by the Google/Bing and Flickr search engines on 7/31/2012.
(c) The results of our time-sensitive image retrieval for two query time points in winter and summer.
(d) The result of our personalized image retrieval for a designated time and user.


Simply put, our temporal model learning and ranking are based on regularized multi-task regression on multivariate point processes; please see the paper for details. Some additional important features of our method are:
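As a rough, simplified illustration of the point-process representation mentioned above (not the paper's exact formulation; the binning scheme, function names, and toy data here are our own assumptions), each keyword's upload events can be summarized as a multivariate time series by counting occurrences per visual cluster per time bin:

```python
import numpy as np

def binned_point_process(timestamps, cluster_ids, n_clusters, n_bins, t_max):
    """Convert (timestamp, cluster) events into an (n_clusters, n_bins)
    count matrix -- a discretized multivariate point process."""
    counts = np.zeros((n_clusters, n_bins))
    # Map each timestamp in [0, t_max] to one of n_bins equal-width bins.
    bins = np.minimum((np.asarray(timestamps, dtype=float) / t_max * n_bins).astype(int),
                      n_bins - 1)
    for c, b in zip(cluster_ids, bins):
        counts[c, b] += 1
    return counts

# Toy events: 5 uploads as (day-of-year, visual cluster id) pairs
counts = binned_point_process(
    timestamps=[5, 40, 41, 300, 310],
    cluster_ids=[0, 0, 1, 1, 1],
    n_clusters=2, n_bins=12, t_max=365)
print(counts.shape)  # (2, 12): one monthly count series per cluster
```

A temporal model for the keyword can then be learned from such count series, one regression task per cluster or descriptor.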

  • The multi-task framework admits multiple image descriptors, each of which characterizes images from a different perspective.

  • Several regularization schemes are supported.

  • Personalization is done offline using a locally-weighted learning idea.

  • Learning is performed offline once, and the online query step is very fast; both run in time linear in most key parameters.
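To give a flavor of the locally-weighted learning idea, the sketch below fits a temporally weighted ridge regressor at a given query time and ranks images by predicted relevance. This is only an illustrative toy under our own assumptions (Gaussian temporal kernel, linear features, made-up data), not the paper's actual model:

```python
import numpy as np

def temporal_weights(t_train, t_query, bandwidth=30.0):
    """Gaussian kernel weights by temporal distance (in days):
    training images near the query time count more."""
    return np.exp(-0.5 * ((t_train - t_query) / bandwidth) ** 2)

def rank_images(X, t_train, relevance, t_query, lam=1.0):
    """Fit a temporally weighted ridge regressor at t_query and
    return image indices ranked by predicted relevance (high to low)."""
    w = temporal_weights(t_train, t_query)
    W = np.diag(w)
    d = X.shape[1]
    # Weighted ridge solution: beta = (X^T W X + lam I)^{-1} X^T W y
    beta = np.linalg.solve(X.T @ W @ X + lam * np.eye(d),
                           X.T @ W @ relevance)
    scores = X @ beta
    return np.argsort(-scores)

# Toy data: 6 images with 3-dim descriptors, timestamps in day-of-year;
# the first three are "winter" images labeled relevant.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
t_train = np.array([10., 20., 30., 200., 210., 220.])
relevance = np.array([1., 1., 1., 0., 0., 0.])
order = rank_images(X, t_train, relevance, t_query=15.0)
print(order)
```

Since the temporal weights are the only part that depends on the query time, a model like this can be precomputed offline on a grid of time points, keeping the online query step cheap.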

Some examples of normal and personalized image retrieval are as follows.


Figure 2. Image retrieval examples. (a) Normal retrieval for the query independence+day at four query times tq in different months. In each set, the top row shows five images sampled from the top 10 ranked training images, and the bottom row shows their best-matched test images. We also present the average images of the top 100 retrieved training images (left) and of their best-matched test images (right). Independence-day scenes from different countries appear according to the query time tq. (b) Personalized retrieval for the query raptor at four different (tq, uq) pairs. Even though the images are associated with the same keyword, their contents vary greatly according to users' interests.

Take-home Message

In this paper, we propose an approach for time-sensitive image ranking and retrieval, which is becoming increasingly important given that the majority of Web photos now come from hundreds of millions of general users with different experiences. Our method automatically learns a customized temporal model for each search keyword and ranks images with the learned model according to the query time and user.


  • This research is supported by NSF IIS-1115313 and AFOSR FA9550010247.