How a General-Purpose Commonsense Ontology can Improve Performance of Learning-Based Image Retrieval
Rodrigo Toro Icarte, Jorge A. Baier, Cristian Ruz, Alvaro Soto
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 1283-1289.
https://doi.org/10.24963/ijcai.2017/178
The knowledge representation community has built general-purpose ontologies that contain large amounts of commonsense knowledge about relevant aspects of the world, including useful visual information, e.g., "a ball is used by a football player" and "a tennis player is located at a tennis court". Current state-of-the-art approaches for visual recognition do not exploit these rule-based knowledge sources. Instead, they learn recognition models directly from training examples. In this paper, we study how general-purpose ontologies, specifically MIT's ConceptNet ontology, can improve the performance of state-of-the-art vision systems. As a testbed, we tackle the problem of sentence-based image retrieval. Our retrieval approach incorporates knowledge from ConceptNet on top of a large pool of object detectors derived from a deep learning technique. In our experiments, we show that ConceptNet can improve performance on a common benchmark dataset. Key to our performance is the use of the ESPGAME dataset to select visually relevant relations from ConceptNet. Consequently, a main conclusion of this work is that general-purpose commonsense ontologies improve performance on visual reasoning tasks when properly filtered to select meaningful visual relations.
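
To make the general idea concrete, the following is a minimal, hypothetical Python sketch (not the authors' implementation) of the kind of pipeline the abstract describes: ConceptNet-style relations are filtered for visual relevance using label co-occurrence counts (in the spirit of the ESPGAME filtering step), the surviving relations expand a query's concepts, and images are ranked by the confidences of matching object detectors. All data, names, and thresholds below are illustrative assumptions.

from collections import defaultdict

# Illustrative (head, relation, tail, weight) triples in the spirit of ConceptNet.
relations = [
    ("ball", "UsedFor", "football_player", 2.0),
    ("tennis_player", "AtLocation", "tennis_court", 1.5),
    ("ball", "RelatedTo", "idea", 1.0),  # not visually meaningful
]

# Hypothetical tag co-occurrence counts, e.g. gathered from image-label data.
cooccurrence = {
    ("ball", "football_player"): 120,
    ("tennis_player", "tennis_court"): 85,
    ("ball", "idea"): 0,
}

MIN_COOCCURRENCE = 10  # assumed threshold for calling a relation "visual"

def visually_relevant(relations, cooccurrence, threshold=MIN_COOCCURRENCE):
    """Keep only relations whose concepts actually co-occur in image labels."""
    return [r for r in relations
            if cooccurrence.get((r[0], r[2]), 0) >= threshold]

def expand_query(concepts, relations):
    """Expand the query's concepts with related concepts from filtered relations."""
    expanded = set(concepts)
    for head, _, tail, _ in relations:
        if head in expanded:
            expanded.add(tail)
        if tail in expanded:
            expanded.add(head)
    return expanded

def score_image(detector_scores, concepts):
    """Sum detector confidences for every concept mentioned by the expanded query."""
    return sum(detector_scores.get(c, 0.0) for c in concepts)

if __name__ == "__main__":
    filtered = visually_relevant(relations, cooccurrence)
    query_concepts = expand_query({"ball"}, filtered)  # query sentence: "a ball"
    images = {  # per-image detector confidences (illustrative)
        "img_001": {"football_player": 0.9, "grass": 0.8},
        "img_002": {"cat": 0.95},
    }
    ranking = sorted(images,
                     key=lambda i: score_image(images[i], query_concepts),
                     reverse=True)
    print(ranking)  # img_001 ranks first via the ball -> football_player relation

In this toy run, the relation "ball RelatedTo idea" is discarded because the two concepts never co-occur in the (assumed) label data, while "ball UsedFor football_player" survives and lets a query about a ball retrieve an image containing a football player even though no ball detector fired.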
Keywords:
Knowledge Representation, Reasoning, and Logic: Common-Sense Reasoning
Machine Learning: Knowledge-based Learning
Robotics and Vision: Vision and Perception