Special Session: Semantics-Based Search

SISAP 2021


Computer Vision & Pattern Recognition · Multimedia · Theoretical Computer Science



Most work in similarity search takes place in a domain far removed from the actual semantics of a space. Given a metric and a set of values, there is a wealth of research on how to perform search efficiently. However, almost without exception, the question of how well the results map to the “real” notion of closeness the metric is intended to model is ignored. In part, this is due to the lack of an unbiased ground truth, which is extremely difficult to establish in large collections. Within this context, large-scale near-duplicate detection provides a realistic and challenging task on which different techniques can be compared. On one hand, it entails a subtle differentiation between actual near duplicates and image pairs which are visually similar but not semantically related. On the other hand, since the number of image pairs grows quadratically with the size of the collection, it requires search techniques that are both effective and computationally efficient.



MirFlickr1M is a collection of one million images. Assembled for research purposes as a benchmark for image tagging, the original selection of images was guided by certain factors, but no checking for similarity was performed at the time. By chance, however, a large number of near-duplicate clusters do occur, along with a small set of clusters of identical images. These similar images exist for “natural” reasons: some are images of the same highly predictable subject and context (for example, the moon); some are alterations of other images in the collection, produced by cropping, re-hueing, and so on; and some are shots taken in quick succession with a single camera.



In the MirFlickr Near Duplicate (MFND) dataset, around 10,000 known similar clusters have been identified and checked by the proposers of this Session. There is strong statistical evidence that these represent the large majority of all the near-duplicate images within the set. The remainder of the collection therefore contains over 10^11 visually similar, but not semantically related, image pairs. Hence, differentiating actual near-duplicate images from merely visually similar ones provides a subtle and challenging task.
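As a rough calibration of scale (our own back-of-the-envelope figure, assuming the full collection of n = 10^6 images and clusters of modest size), the total number of unordered image pairs is

  n(n − 1) / 2 = 10^6 × (10^6 − 1) / 2 ≈ 5 × 10^11,

while the roughly 10^4 identified clusters contribute only a negligible fraction of these pairs; candidate pairs therefore vastly outnumber true near duplicates.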



Purpose of the Session



The main purpose of this session is to allow fair and immediate comparisons among different techniques with respect to the well-defined semantic ground truth. Near-duplicate image detection is an important similarity task in its own right. More generally, however, the provision of a semantic ground truth over a large collection allows techniques to be compared, as judged by the semantic outcome, in a domain where scalability is crucial. For example, one aspect usually left unmentioned in similarity research is the importance of false positive results: the number of objects in a collection that are similar to a given query is very unlikely to be uniform, and most research fails to address this issue.



We are not proposing a challenge to find the best technique, but rather a framework in which different techniques may be meaningfully compared. For example, a very efficient technique with relatively low recall but a low false positive rate may, for some tasks, be more useful than a less scalable one with better recall, or indeed than a faster one that produces more false positives. Any of these three may be the most appropriate for a given context.
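As a purely illustrative sketch of the kind of comparison such a ground truth enables (the function name, pair representation, and image identifiers below are our own assumptions, not part of the MFND distribution), recall and the absolute number of false positives for a set of reported near-duplicate pairs could be computed along these lines in Python:

  def evaluate(reported_pairs, ground_truth_pairs):
      """Recall and false-positive count of reported near-duplicate pairs.

      Both arguments are sets of frozensets, each frozenset holding the two
      identifiers of an image pair (hypothetical representation).
      """
      true_positives = reported_pairs & ground_truth_pairs
      false_positives = reported_pairs - ground_truth_pairs
      recall = len(true_positives) / len(ground_truth_pairs) if ground_truth_pairs else 0.0
      return recall, len(false_positives)

  # Toy illustration with hypothetical identifiers:
  gt = {frozenset({"im001", "im002"}), frozenset({"im003", "im004"})}
  found = {frozenset({"im001", "im002"}), frozenset({"im005", "im006"})}
  print(evaluate(found, gt))  # (0.5, 1)

Reporting the absolute false-positive count, rather than precision alone, matters at this scale: with candidate pairs numbering in the hundreds of billions, even a tiny false-positive rate translates into an unmanageable number of spurious matches.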



Research topics that could benefit from, or be applied to, the MFND benchmark include, but are not limited to:




  • Comparison of novel or existing exact and approximate search strategies

  • Feature extraction for near-duplicate and semantic similarity search

  • Novel distance metrics for semantic similarity search and near-duplicate detection

  • Machine learning methods for semantic similarity search

  • Efficient methods for indexing and range/threshold-based queries



Submissions providing novel insights into the population characteristics of the MFND collection, or of similar existing collections, are also welcome.



Further information is available at: https://sisap.org/2021/specialsessions.html