Near-duplicate image retrieval based on contextual descriptor

Jinliang Yao, Bing Yang, Qiuming Zhu

Research output: Contribution to journal › Article

17 Citations (Scopus)

Abstract

State-of-the-art near-duplicate image retrieval is mostly based on the Bag-of-Visual-Words model. However, visual words are prone to mismatches because of quantization errors in the local features they represent. To improve the precision of visual-word matching, contextual descriptors are designed to strengthen the discriminative power of visual words and to measure their contextual similarity. This paper presents a new contextual descriptor that measures the contextual similarity of visual words to immediately discard mismatches and reduce the number of candidate images. The new contextual descriptor encodes the relationships of dominant orientation and spatial position between referential visual words and their context. Experimental results on the benchmark Copydays dataset demonstrate its efficiency and effectiveness for near-duplicate image retrieval.
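
To make the mechanism concrete, the sketch below (Python) shows one way such contextual verification of a tentative visual-word match could work: each keypoint's context is encoded by the spatial position and dominant orientation of its nearest neighbors relative to the referential keypoint, and a match is kept only if the two context sets are similar enough. The keypoint layout, the k-nearest context, the angular binning, the Jaccard similarity, and the threshold TAU are all illustrative assumptions, not the authors' formulation.

import math

# Each keypoint is (x, y, dominant_orientation_rad, visual_word_id).
# The field layout, the k-nearest context, the 8-bin quantization, the
# Jaccard similarity, and the threshold TAU below are illustrative
# assumptions, not the paper's exact formulation.

def contextual_descriptor(kps, ref, k=5, n_bins=8):
    """Encode spatial position and dominant orientation of the k nearest
    context keypoints relative to the referential keypoint kps[ref]."""
    x0, y0, theta0, _ = kps[ref]
    others = [i for i in range(len(kps)) if i != ref]
    others.sort(key=lambda i: (kps[i][0] - x0) ** 2 + (kps[i][1] - y0) ** 2)
    two_pi = 2 * math.pi
    desc = set()
    for i in others[:k]:
        x, y, theta, word = kps[i]
        # Angle of the displacement vector, measured against the referential
        # dominant orientation, so the encoding is rotation-invariant.
        pos_angle = (math.atan2(y - y0, x - x0) - theta0) % two_pi
        # Dominant orientation of the context keypoint relative to the
        # referential one.
        ori_diff = (theta - theta0) % two_pi
        pos_bin = int(pos_angle / two_pi * n_bins) % n_bins
        ori_bin = int(ori_diff / two_pi * n_bins) % n_bins
        desc.add((word, pos_bin, ori_bin))
    return desc

def contextual_similarity(d1, d2):
    """Jaccard overlap of two context sets (an assumed measure)."""
    return len(d1 & d2) / len(d1 | d2) if (d1 or d2) else 1.0

# Toy usage: the second image is a translated copy of the first, so the
# tentative match between the two word-3 keypoints should be kept.
img_a = [(10, 10, 0.1, 3), (14, 10, 0.2, 7), (10, 15, 1.1, 9), (20, 20, 2.0, 4)]
img_b = [(50, 50, 0.1, 3), (54, 50, 0.2, 7), (50, 55, 1.1, 9), (60, 60, 2.0, 4)]
sim = contextual_similarity(contextual_descriptor(img_a, 0),
                            contextual_descriptor(img_b, 0))
TAU = 0.5  # assumed verification threshold
print(f"contextual similarity = {sim:.2f} ->",
      "keep match" if sim >= TAU else "discard mismatch")

Because both angles are measured relative to the referential keypoint's dominant orientation, the encoding is invariant to translation and global rotation, which is what lets a near-duplicate's context survive the comparison while an accidental quantization collision is unlikely to reproduce the same context set.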

Original language: English (US)
Article number: 6975087
Pages (from-to): 1404-1408
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 22
Issue number: 9
DOI: 10.1109/LSP.2014.2377795
State: Published - Sep 1 2015

Keywords

  • Contextual descriptor
  • near-duplicate image retrieval
  • spatial constraint
  • visual word

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Signal Processing
  • Applied Mathematics

Cite this

Near-duplicate image retrieval based on contextual descriptor. / Yao, Jinliang; Yang, Bing; Zhu, Qiuming.

In: IEEE Signal Processing Letters, Vol. 22, No. 9, 6975087, 01.09.2015, pp. 1404-1408.

Research output: Contribution to journal › Article

Yao, Jinliang; Yang, Bing; Zhu, Qiuming. / Near-duplicate image retrieval based on contextual descriptor. In: IEEE Signal Processing Letters. 2015; Vol. 22, No. 9. pp. 1404-1408.
@article{1379aa086ba84b47b03ec41cf912d6bb,
title = "Near-duplicate image retrieval based on contextual descriptor",
abstract = "State-of-the-art near-duplicate image retrieval is mostly based on the Bag-of-Visual-Words model. However, visual words are prone to mismatches because of quantization errors in the local features they represent. To improve the precision of visual-word matching, contextual descriptors are designed to strengthen the discriminative power of visual words and to measure their contextual similarity. This paper presents a new contextual descriptor that measures the contextual similarity of visual words to immediately discard mismatches and reduce the number of candidate images. The new contextual descriptor encodes the relationships of dominant orientation and spatial position between referential visual words and their context. Experimental results on the benchmark Copydays dataset demonstrate its efficiency and effectiveness for near-duplicate image retrieval.",
keywords = "Contextual descriptor, near-duplicate image retrieval, spatial constraint, visual word",
author = "Jinliang Yao and Bing Yang and Qiuming Zhu",
year = "2015",
month = "9",
day = "1",
doi = "10.1109/LSP.2014.2377795",
language = "English (US)",
volume = "22",
pages = "1404--1408",
journal = "IEEE Signal Processing Letters",
issn = "1070-9908",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "9",

}

TY - JOUR

T1 - Near-duplicate image retrieval based on contextual descriptor

AU - Yao, Jinliang

AU - Yang, Bing

AU - Zhu, Qiuming

PY - 2015/9/1

Y1 - 2015/9/1

N2 - State-of-the-art near-duplicate image retrieval is mostly based on the Bag-of-Visual-Words model. However, visual words are prone to mismatches because of quantization errors in the local features they represent. To improve the precision of visual-word matching, contextual descriptors are designed to strengthen the discriminative power of visual words and to measure their contextual similarity. This paper presents a new contextual descriptor that measures the contextual similarity of visual words to immediately discard mismatches and reduce the number of candidate images. The new contextual descriptor encodes the relationships of dominant orientation and spatial position between referential visual words and their context. Experimental results on the benchmark Copydays dataset demonstrate its efficiency and effectiveness for near-duplicate image retrieval.

AB - State-of-the-art near-duplicate image retrieval is mostly based on the Bag-of-Visual-Words model. However, visual words are prone to mismatches because of quantization errors in the local features they represent. To improve the precision of visual-word matching, contextual descriptors are designed to strengthen the discriminative power of visual words and to measure their contextual similarity. This paper presents a new contextual descriptor that measures the contextual similarity of visual words to immediately discard mismatches and reduce the number of candidate images. The new contextual descriptor encodes the relationships of dominant orientation and spatial position between referential visual words and their context. Experimental results on the benchmark Copydays dataset demonstrate its efficiency and effectiveness for near-duplicate image retrieval.

KW - Contextual descriptor

KW - near-duplicate image retrieval

KW - spatial constraint

KW - visual word

UR - http://www.scopus.com/inward/record.url?scp=84924666889&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84924666889&partnerID=8YFLogxK

U2 - 10.1109/LSP.2014.2377795

DO - 10.1109/LSP.2014.2377795

M3 - Article

VL - 22

SP - 1404

EP - 1408

JO - IEEE Signal Processing Letters

JF - IEEE Signal Processing Letters

SN - 1070-9908

IS - 9

M1 - 6975087

ER -