A Context-aware Image Recognition System with Self-localization in Augmented Reality

  • Ryosuke Suzuki Nagoya Institute of Technology
  • Tadachika Ozono Nagoya Institute of Technology
  • Toramatsu Shintani Nagoya Institute of Technology
Keywords: Context-aware Image Recognition, Augmented Reality, Self-localization, Classification, Object Detection, Mahjong

Abstract

The spread of augmented-reality (AR) frameworks has made it easier to build support systems for real-world tasks. This paper introduces a system that supports Mahjong scoring for beginners. Mahjong is a globally popular strategic board game; playing it improves cognitive functions and promotes social interaction. However, it is difficult for beginners to calculate a score from the combinations of Mahjong tiles. We aim to develop an offline system that tallies the score by visually recognizing the Mahjong tiles, which have classes and attributes that depend on their positional context; the system therefore requires context-aware image recognition. It recognizes the contextual attributes of the tiles via self-localization, detects each tile using OpenCV, and classifies it with a convolutional neural network. Our experimental results demonstrate that the accuracy of tile detection and attribute recognition is sufficient for an acceptable support system. We conclude that the system provides adequate support for novices.
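The pipeline described above (OpenCV-based tile detection followed by CNN classification) can be illustrated with a minimal sketch. This is not the authors' implementation; the class count, input resolution, model file name, and contour heuristics are all assumptions made for illustration only.

```python
# Illustrative sketch (assumed names and thresholds, not the authors' code):
# detect candidate Mahjong tiles with OpenCV contour analysis, then classify
# each cropped region with a small CNN.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

TILE_CLASSES = 34            # standard Mahjong tile types (assumed label set)
INPUT_SIZE = (64, 64)        # assumed CNN input resolution

model = load_model("mahjong_tile_cnn.h5")   # hypothetical pretrained classifier

def detect_tiles(frame):
    """Return bounding boxes of roughly tile-shaped contours."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Heuristic filter on area and aspect ratio of an upright tile face.
        if w * h > 1000 and 0.6 < w / h < 0.9:
            boxes.append((x, y, w, h))
    return boxes

def classify_tiles(frame, boxes):
    """Crop each detected region and predict its tile class with the CNN."""
    labels = []
    for (x, y, w, h) in boxes:
        crop = cv2.resize(frame[y:y+h, x:x+w], INPUT_SIZE).astype("float32") / 255.0
        probs = model.predict(crop[np.newaxis], verbose=0)[0]
        labels.append(int(np.argmax(probs)))
    return labels
```

In the paper's setting, the predicted class of each tile would then be combined with its positional context (obtained via self-localization) before the score is tallied.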


Published
2021-04-19
Section
Technical Papers