A Filter-Driven Integral Image Generation Method for Scalable Image Resolution
Keywords:
integral image, computer vision, face recognition, object recognition, image resolution

Abstract
Integral images are widely used in computer vision applications such as face detection and object recognition because they speed up the feature computation step. In recent years, there has also been increasing demand for integral images in high-resolution computer vision applications. However, integral images require a significant amount of memory because a large word length is needed to represent the accumulated pixel sums used in filtering operations. Previous studies have reduced the size of integral images through word length reduction and partial accumulation methods, but these approaches are not suited to high-resolution applications because their memory usage grows rapidly with the image resolution. In this letter, we therefore present a filter-driven integral image generation method whose memory footprint scales with the filter height, which is much smaller than the image resolution on which previous methods depend. Consequently, the proposed filter-driven method is far less affected by the image resolution of target applications. Evaluation results show that the proposed method scales up to ultra-high definition (UHD), reducing memory usage by 76.4% compared with the state-of-the-art.
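The letter's exact hardware architecture is not reproduced on this page. As an illustration of the trade-off the abstract describes, the sketch below contrasts a full summed-area table (memory proportional to the image resolution) with a rolling accumulation whose buffer is bounded by the filter height. All function names and details are this sketch's own assumptions, not the authors' implementation:

```python
import numpy as np

def integral_image(img):
    # Classic summed-area table: ii[y, x] = sum of img[0:y+1, 0:x+1].
    # Stored at full image resolution with a wide accumulator word.
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y, x, h, w):
    # Sum of the h-by-w window with top-left corner (y, x), computed
    # from four corner lookups of the zero-padded table.
    p = np.pad(ii, ((1, 0), (1, 0)))
    return p[y + h, x + w] - p[y, x + w] - p[y + h, x] + p[y, x]

def rolling_box_sums(img, h, w):
    # Filter-height-bounded alternative (illustrative only): keep a
    # running column sum over the last h rows instead of a full table,
    # so the accumulation buffer scales with h, not the image height.
    H, W = img.shape
    col = np.zeros(W, dtype=np.int64)   # column sums of the current h rows
    out = np.zeros((H - h + 1, W - w + 1), dtype=np.int64)
    for y in range(H):
        col += img[y]
        if y >= h:
            col -= img[y - h]           # slide the h-row window down one row
        if y >= h - 1:
            # horizontal prefix over the column sums yields every w-wide window
            row_ii = np.concatenate(([0], col.cumsum()))
            out[y - h + 1] = row_ii[w:] - row_ii[:-w]
    return out
```

Both paths return identical box-filter sums; the difference is that the rolling variant never materializes a full-resolution table, which mirrors the memory-scaling argument made in the abstract.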
References
F. C. Crow, “Summed-area tables for texture mapping,” in Proc. SIGGRAPH, ACM SIGGRAPH Computer Graphics, 1984, 18, (3), pp. 207-212
P. Viola and M. Jones, “Robust real-time object detection,” International Journal of Computer Vision (IJCV), July 2001, pp. 1-25
O. David et al., “Real-time GPU-based face detection in HD video sequences,” IEEE Int. Conf. Computer Vision (ICCV), Nov. 2011, pp. 530-537
H. J. W. Belt, “Word length reduction for the integral image,” in Proc. IEEE Int. Conf. Image Processing (ICIP), Oct. 2008, pp. 805-808
J. Kim et al., “Low-cost hardware architecture for integral image generation using word length reduction,” in Proc. Int. SoC Design Conf. (ISOCC), Oct. 2020, pp. 119-120
S.-H. Lee and Y.-J. Jeong, “A new integral image structure for memory size reduction,” IEICE Trans. Inf. & Syst., 2014, E97-D, (4), pp. 998-1000
Fig. 2. Memory usage comparison of the integral image following (a) the filter size and (b) the image resolution. Note: the filter height equals the filter width.
C. Kumar and S. Agarwal, “A novel architecture for dynamic integral image generation for Haar-based face detection on FPGA,” TENCON 2014-2014 IEEE Region 10 Conference, Bangkok, Oct. 2014, pp. 1-6
D. Kim et al., “Memory-efficient architecture for contrast enhancement and integral image,” in Proc. Int. Conf. Electron. Inf. Commun. (ICEIC), Jan. 2020, pp. 1-4
D. Jeon et al., “An energy efficient full-frame feature extraction accelerator with shift-latch FIFO in 28 nm CMOS,” IEEE J. Solid-State Circuits, 2014, 49, (5), pp. 1271-1284
License
![Creative Commons License](http://i.creativecommons.org/l/by-sa/4.0/88x31.png)
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
All papers should be submitted electronically. All submitted manuscripts must be original work that is not under submission at another journal or under consideration for publication in another form, such as a monograph or chapter of a book. Authors of submitted papers are obligated not to submit their paper for publication elsewhere until an editorial decision is rendered on their submission. Further, authors of accepted papers are prohibited from publishing the results in other publications that appear before the paper is published in the Journal unless they receive approval for doing so from the Editor-In-Chief.
IJISAE open access articles are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This license requires reusers to give appropriate credit, provide a link to the license, and indicate if changes were made; if they remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.