A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 5, Issue 2, Mar. 2018

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: Yonglin Tian, Xuan Li, Kunfeng Wang and Fei-Yue Wang, "Training and Testing Object Detectors With Virtual Images," IEEE/CAA J. Autom. Sinica, vol. 5, no. 2, pp. 539-546, Mar. 2018. doi: 10.1109/JAS.2017.7510841

Training and Testing Object Detectors With Virtual Images

doi: 10.1109/JAS.2017.7510841
Funds:
  • National Natural Science Foundation of China (61533019)
  • National Natural Science Foundation of China (71232006)

Abstract: In the area of computer vision, deep learning has produced a variety of state-of-the-art models that rely on massive labeled data. However, collecting and annotating images from the real world is highly demanding in terms of labor and money, and it is usually inflexible to build datasets with specific characteristics, such as small object area and high occlusion level. Under the framework of Parallel Vision, this paper presents a purposeful way to design artificial scenes and automatically generate virtual images with precise annotations. A virtual dataset named ParallelEye is built, which can be used for several computer vision tasks. Then, by training the DPM (deformable parts model) and Faster R-CNN detectors, we show that the performance of models can be significantly improved by combining ParallelEye with publicly available real-world datasets during the training phase. In addition, we investigate the potential of testing the trained models from a specific aspect using intentionally designed virtual datasets, in order to discover the flaws of the trained models. From the experimental results, we conclude that our virtual dataset is viable for training and testing object detectors.
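To make the training recipe concrete, below is a minimal sketch of the core idea: mixing a virtual dataset with a real-world one when training a detector. This is not the authors' code; it assumes a modern PyTorch/torchvision setup with its built-in Faster R-CNN, and VirtualImageDataset and RealImageDataset are hypothetical Dataset classes yielding (image, target) pairs in torchvision's detection format (target dicts with "boxes" and "labels").

    # Hedged sketch, not the paper's implementation: train Faster R-CNN on a
    # concatenation of virtual (ParallelEye-style) and real training images.
    import torch
    from torch.utils.data import ConcatDataset, DataLoader
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    def collate_fn(batch):
        # Detection targets differ per image, so keep lists instead of stacking.
        return tuple(zip(*batch))

    virtual_ds = VirtualImageDataset("ParallelEye/")   # hypothetical loader
    real_ds = RealImageDataset("KITTI/")               # hypothetical loader
    combined = ConcatDataset([virtual_ds, real_ds])    # virtual + real mix
    loader = DataLoader(combined, batch_size=2, shuffle=True,
                        collate_fn=collate_fn)

    model = fasterrcnn_resnet50_fpn(num_classes=2)     # background + vehicle
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

    model.train()
    for images, targets in loader:
        loss_dict = model(list(images), list(targets)) # per-component losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The same mechanics cover the paper's second use of virtual data: switching to model.eval() and feeding images from an intentionally designed virtual test set (for example, one containing only small or heavily occluded objects) returns predicted boxes and scores, so a specific weakness of the trained model can be probed in isolation.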

     

