A journal of IEEE and CAA that publishes high-quality papers in English on original theoretical and experimental research and development in all areas of automation
Volume 11, Issue 7, Jul. 2024

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Q. Zhang, L. Wang, H. Meng, W. Zhang, and  G. Huang,  “A LiDAR point clouds dataset of ships in a maritime environment,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 7, pp. 1681–1694, Jul. 2024. doi: 10.1109/JAS.2024.124275

A LiDAR Point Clouds Dataset of Ships in a Maritime Environment

doi: 10.1109/JAS.2024.124275
Funds:  This work was supported by the National Natural Science Foundation of China (62173103) and the Fundamental Research Funds for the Central Universities of China (3072022JC0402, 3072022JC0403)
  • For the first time, this article introduces a LiDAR point clouds dataset of ships, composed of both collected and simulated data, to address the scarcity of LiDAR data in maritime applications. The collected data are acquired using specialized maritime LiDAR sensors in both inland waterways and wide-open ocean environments. The simulated data are generated by placing a ship in the LiDAR coordinate system and scanning it with a redeveloped BlenSor that emulates the operation of a LiDAR sensor equipped with various laser beams. Furthermore, we also render point clouds for foggy and rainy weather conditions. To describe a realistic shipping environment, a dynamic tail wave is modeled by iterating the wave elevation of each point in a time series. Finally, networks serving small objects are migrated to ship applications by feeding them our dataset. The positive effect of simulated data is demonstrated in object detection experiments, and the negative impact of tail waves as noise is verified in single-object tracking experiments. The dataset is available at https://github.com/zqy411470859/ship_dataset.

     
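The dynamic tail-wave modeling mentioned above, which iterates the wave elevation of each point over a time series, can be illustrated with a minimal sketch. This is an assumed superposition-of-sinusoids surface model for illustration only; the function names and component parameters are hypothetical, not the authors' actual formulation:

```python
import math

def wave_elevation(x, y, t, components):
    """Surface elevation at (x, y) and time t as a sum of sinusoidal
    components. Each component is (amplitude, kx, ky, omega, phase),
    i.e., amplitude, wavenumber vector, angular frequency, and phase."""
    return sum(a * math.cos(kx * x + ky * y - w * t + p)
               for a, kx, ky, w, p in components)

def displace_points(points, t, components):
    """Offset each point's z coordinate by the local wave elevation,
    yielding one frame of a dynamic water surface at time t."""
    return [(x, y, z + wave_elevation(x, y, t, components))
            for x, y, z in points]
```

Iterating `t` over a time series and re-displacing the same base points produces the dynamic wave surface frame by frame.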


    Figures (10) / Tables (3)

    Article Metrics

    Article views (167)  PDF downloads (29)

    Highlights

    • We release the first-ever LiDAR ship point cloud dataset for ship perception
    • The dataset includes both real-world collected data and simulated data
    • Simulated data model rainy and foggy weather, compensating for the scarcity of collected data
    • A dynamic wake simulation method in 3D space is proposed to mimic real ship motion scenes
    • Applications of the dataset for ship detection and tracking tasks are showcased
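As a usage sketch, a frame of such a dataset could be loaded as packed little-endian float32 records. The (x, y, z, intensity) layout, the helper name, and the `.bin` extension below are assumptions borrowed from common LiDAR dataset conventions; consult the repository's README for the actual file format:

```python
import struct

def load_bin_points(path, floats_per_point=4):
    """Read a point cloud stored as consecutive little-endian float32
    values, assumed here to be (x, y, z, intensity) per point.
    NOTE: layout is an assumption; verify against the dataset docs."""
    with open(path, "rb") as f:
        raw = f.read()
    # Number of whole floats that form complete points.
    n_floats = (len(raw) // 4 // floats_per_point) * floats_per_point
    vals = struct.unpack(f"<{n_floats}f", raw[:n_floats * 4])
    # Group the flat float sequence into per-point tuples.
    return [vals[i:i + floats_per_point]
            for i in range(0, n_floats, floats_per_point)]
```

The loaded per-point tuples can then be fed to the detection and tracking networks mentioned in the highlights after whatever preprocessing those models expect.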
