A joint journal of the IEEE and the CAA, publishing high-quality papers in English on original theoretical and experimental research and development in all areas of automation
Volume 8 Issue 2
Feb.  2021

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 6.171, Top 11% (SCI Q1)
  • CiteScore: 11.2, Top 5% (Q1)
  • Google Scholar h5-index: 51, Top 8
Long Cheng, Weizhou Liu, Chao Zhou, Yongxiang Zou and Zeng-Guang Hou, "Automated Silicon-Substrate Ultra-Microtome for Automating the Collection of Brain Sections in Array Tomography," IEEE/CAA J. Autom. Sinica, vol. 8, no. 2, pp. 389-401, Feb. 2021. doi: 10.1109/JAS.2021.1003829

Automated Silicon-Substrate Ultra-Microtome for Automating the Collection of Brain Sections in Array Tomography

doi: 10.1109/JAS.2021.1003829
Funds:  This work was supported in part by the National Natural Science Foundation of China (61873268, 62025307, U1913209) and the Beijing Natural Science Foundation (JQ19020)
  • Understanding the structure and working principles of brain neural networks requires three-dimensional reconstruction of brain tissue samples using the array tomography method. To improve reconstruction performance, sequences of brain sections should be collected on silicon wafers for subsequent electron-microscopic imaging. However, current silicon-substrate-based collection of brain sections is mainly manual, so automation techniques are needed to increase collection efficiency. This paper presents the design of an automatic collection device for brain sections. First, a novel mechanism based on circular silicon substrates is proposed for the collection of brain sections; second, an automatic collection system based on microscopic object detection and a feedback control strategy is proposed. Experimental results verify the function of the proposed collection device. Three objects (brain section, left baffle, right baffle) can be detected in microscopic images by the proposed detection method, and collection efficiency can be further improved by using the position feedback of brain sections. It has been experimentally verified that the proposed device can fulfill the task of automatically collecting brain sections. With the help of the proposed device, human operators can be partially relieved of the tedious manual collection process and collection efficiency can be improved.
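The abstract's "feedback control strategy" can be sketched, at a very high level, as a proportional feedback loop: detect the brain section's position in the microscope image, then move the substrate stage to shrink the error. This is an illustrative sketch only; the names (`detect_section_x`, `move_stage`, `FakeStage`) and the gain/tolerance values are assumptions, not the authors' implementation.

```python
def collect_with_feedback(detect_section_x, move_stage, target_x,
                          gain=0.5, tolerance=1.0, max_steps=100):
    """Drive the detected section position (pixels) toward target_x."""
    for step in range(max_steps):
        x = detect_section_x()            # position from object detection
        error = target_x - x
        if abs(error) <= tolerance:       # section aligned with collector
            return step                   # number of corrections used
        move_stage(gain * error)          # proportional correction
    raise RuntimeError("section not aligned within max_steps")

# Toy stand-in for the real stage: moving it shifts the detected position.
class FakeStage:
    def __init__(self, x0):
        self.x = x0
    def detect(self):
        return self.x
    def move(self, dx):
        self.x += dx

stage = FakeStage(x0=40.0)
steps = collect_with_feedback(stage.detect, stage.move, target_x=100.0)
```

With a gain below 1 the position error shrinks geometrically on each iteration, the usual trade-off between convergence speed and overshoot in such loops.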

     

  • 1 VGG16 is a popular and effective deep learning model for image classification. VGG16 includes 16 weight layers (13 convolution layers and 3 fully connected layers). The convolution kernel size ($3\times 3$) and the pooling size ($2\times 2$) are the same throughout the VGG16 network.
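As a concrete check of the footnote's layer counts, the standard VGG16 layout (13 convolutions with 3×3 kernels and padding 1, grouped into five blocks each ended by a 2×2 max pool, followed by 3 fully connected layers) can be traced without any deep learning library. The helper below only computes layer shapes; it is a sketch of the published VGG16 configuration, not the detection network used in the paper.

```python
# Published VGG16 channel widths: five conv blocks, then three FC layers.
VGG16_BLOCKS = [[64, 64], [128, 128], [256, 256, 256],
                [512, 512, 512], [512, 512, 512]]
FC_LAYERS = [4096, 4096, 1000]

def vgg16_shapes(size=224):
    """Return (layer_name, spatial_size, width) for every weight layer."""
    shapes, conv_id = [], 0
    for block in VGG16_BLOCKS:
        for channels in block:            # 3x3 conv, pad 1: size unchanged
            conv_id += 1
            shapes.append((f"conv{conv_id}", size, channels))
        size //= 2                        # 2x2 max pool halves the map
    for i, width in enumerate(FC_LAYERS, 1):
        shapes.append((f"fc{i}", 1, width))
    return shapes

layers = vgg16_shapes()
conv_layers = [s for s in layers if s[0].startswith("conv")]
```

Running it for a 224-pixel input shows the spatial size halving from 224 down to 7 after the fifth pool, which is why the first fully connected layer of VGG16 operates on a flattened 7×7×512 feature map.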
  • [1]
    B. He, L. Astolfi, P. Valdés-Sosa, D. Marinazzo, S. Palva, C. G. Bénar, C. Michel, and T. Koenig, “Electrophysiological brain connectivity: theory and implementation,” IEEE Trans. Biomedical Engineering, vol. 66, no. 7, pp. 2115–2137, 2019.
    [2]
    M. M. Poo, J. L. Du, N. Y. Ip, Z. Q. Xiong, B. Xu, and T. Tan, “China brain project: Basic neuroscience, brain diseases, and brain-inspired computing,” Neuron, vol. 92, no. 3, pp. 591–596, 2016. doi: 10.1016/j.neuron.2016.10.050
    [3]
    X. Wang and H. Duan, “Hierarchical visual attention model for saliency detection inspired by avian visual pathways,” IEEE/CAA Journal of Automatica Sinica, vol. 6, no. 2, pp. 540–552, 2019. doi: 10.1109/JAS.2017.7510664
    [4]
    X. Chen and Y. Wang, “Predicting resting-state functional connectivity with efficient structural connectivity,” IEEE/CAA Journal of Automatica Sinica, vol. 5, no. 6, pp. 1079–1088, 2018. doi: 10.1109/JAS.2017.7510880
    [5]
    L. Fang, Z. Wang, Z. Chen, F. Jian, S. Li, and H. He, “3D shape reconstruction of lumbar vertebra from two x-ray images and a CT model,” IEEE/CAA Journal of Automatica Sinica, vol. 7, no. 4, pp. 1124–1133, 2020. doi: 10.1109/JAS.2019.1911528
    [6]
    W. Denk and H. Horstmann, “Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure,” PLoS Biology, vol. 2, no. 11, Article No. e329, 2004.
    [7]
    K. L. Briggman and D. D. Bock, “Volume electron microscopy for neuronal circuit reconstruction,” Current Opinion in Neurobiology, vol. 22, no. 1, pp. 154–161, 2012. doi: 10.1016/j.conb.2011.10.022
    [8]
    H. Choi, M. K. Jung, and J. Y. Mun, “Current status of automatic serial sections for 3D electron microscopy,” Applied Microscopy, vol. 47, no. 1, pp. 3–7, 2017. doi: 10.9729/AM.2017.47.1.3
    [9]
    H. Wang, Q. Huang, Q. Shi, T. Yue, S. Chen, M. Nakajima, M. Takeuchi, and T. Tukuda, “Automated assembly of vascular-like microtube with repetitive single-step contact manipulation,” IEEE Trans. Biomedical Engineering, vol. 62, no. 11, pp. 2620–2628, 2015. doi: 10.1109/TBME.2015.2437952
    [10]
    H. Horstmann, C. Körber, K. Sätzler, D. Aydin, and T. Kuner, “Serial section scanning electron microscopy (S3EM) on silicon wafers for ultra-structural volume imaging of cells and tissues,” PLoS One, vol. 7, no. 4, Article No. e35172, 2012.
    [11]
    W. Spomer, A. Hofmann, I. Wacker, L. Ness, P. Brey, R. R. Schroder, and U. Gengenbach, “Advanced substrate holder and multi-axis manipulation tool for ultramicrotomy,” Microscopy and Microanalysis, vol. 21, no. S3, pp. 1277–1278, 2015. doi: 10.1017/S1431927615007175
    [12]
    I. Wacker, W. Spomer, A. Hofmann, M. Thaler, S. Hillmer, U. Gengenbach, and R. R. Schröder, “Hierarchical imaging: A new concept for targeted imaging of large volumes from cells to tissues,” BMC Cell Biology, vol. 17, no. 1, pp. 38–50, 2016. doi: 10.1186/s12860-016-0122-8
    [13]
    T. Koike, Y. Kataoka, M. Maeda, Y. Hasebe, Y. Yamaguchi, M. Suga, A. Saito, and H. Yamada, “A device for ribbon collection for array tomography with scanning electron microscopy,” Acta Histochemica et Cytochemica, vol. 50, no. 5, pp. 170–183, 2017.
    [14]
    A. Burel, M. T. Lavault, C. Chevalier, H. Gnaegi, S. Prigent, A. Mucciolo, S. Dutertre, B. M. Humbel, T. Guillaudeux, and I. Kolotuev, “A targeted 3D EM and correlative microscopy method using SEM array tomography,” Development, vol. 145, no. 12, pp. 160–173, 2018.
    [15]
    G. Koestinger, D. During, S. Rickauer, V. Leite, H. Yamahachi, G. Csucs, and R. H. Hahnloser, “Magnetic ultrathin tissue sections for ease of light and electron microscopy,” BioRxiv, Article No. 532549, 2019.
    [16]
    T. J. Lee, A. Kumar, A. H. Balwani, D. Brittain, S. Kinn, C. A. Tovey, E. L. Dyer, N. M. da Costa, R. C. Reid, C. R. Forest, and D. J. Bumbarger, “Large-scale neuroanatomy using LASSO: Loop-based automated serial sectioning operation,” PloS One, vol. 13, no. 10, Article No. e0206172, 2018.
    [17]
    K. Hayworth, N. Kasthuri, R. Schalek, and J. Lichtman, “Automating the collection of ultrathin serial sections for large volume TEM reconstructions,” Microscopy and Microanalysis, vol. 2, no. S02, pp. 86–87, 2006.
    [18]
    K. J. Hayworth, J. L. Morgan, R. Schalek, D. R. Berger, D. G. Hildebrand, and J. W. Lichtman, “Imaging ATUM ultrathin section libraries with WaferMapper: A multi-scale approach to EM reconstruction of neural circuits,” Frontiers in Neural Circuits, vol. 8, no. 6, pp. 68–82, 2014.
    [19]
    A. Eberle, S. Mikula, R. Schalek, J. Lichtman, M. K. Tate, and D. Zeidler, “High-resolution, high-throughput imaging with a multibeam scanning electron microscope,” Journal of Microscopy, vol. 259, no. 2, pp. 114–120, 2015. doi: 10.1111/jmi.12224
    [20]
    H. Zeng and J. R. Sanes, “Neuronal cell-type classification: Challenges, opportunities and the path forward,” Nature Reviews Neuroscience, vol. 18, no. 9, pp. 530–546, 2017. doi: 10.1038/nrn.2017.85
    [21]
    D. G. C. Hildebrand, M. Cicconet, R. M. Torres, W. Choi, T. M. Quan, J. Moon, A. W. Wetzel, A. S. Champion, B. J. Graham, O. Randlett, G. S. Plummer, R. Portugues, I. H. Bianco, S. Saalfeld, A. D. Baden, K. Lillaney, R. Burns, J. T. Vogelstein, A. F. Schier, W. C. A. Lee, W. K. Jeong, J. W. Lichtman, F. Engert, “Whole-brain serial-section electron microscopy in larval zebrafish,” Nature, vol. 545, no. 7654, pp. 345–349, 2017. doi: 10.1038/nature22356
    [22]
    S. J. Smith, “Q&A: Array tomography,” BMC Biology, vol. 16, no. 1, pp. 98–109, 2018. doi: 10.1186/s12915-018-0560-1
    [23]
    M. Dewan, M. Ahmad, and M. Swamy, “Tracking biological cells in time-lapse microscopy: an adaptive technique combining motion and topological features,” IEEE Trans. Biomedical Engineering, vol. 58, no. 6, pp. 1637–1647, 2011. doi: 10.1109/TBME.2011.2109001
    [24]
    X. Chen, X. Zhou, and S. Wong, “Automated segmentation, classification, and tracking of cancer cell nuclei in time-lapse microscopy,” IEEE Trans. Biomedical Engineering, vol. 53, no. 4, pp. 762–766, 2006. doi: 10.1109/TBME.2006.870201
    [25]
    C. Suzuki, J. Gomes, A. Falcao, J. Papa, and S. Hoshino-Shimizu, “Automatic segmentation and classification of human intestinal parasites from microscopy images,” IEEE Trans. Biomedical Engineering, vol. 60, no. 3, pp. 803–812, 2012.
    [26]
    C. Premachandra, D. N. H. Thanh, T. Kimura, and H. Kawanaka, “A study on hovering control of small aerial robot by sensing existing floor features,” IEEE/CAA Journal of Automatica Sinica, vol. 7, no. 4, pp. 1016–1025, 2020. doi: 10.1109/JAS.2020.1003240
    [27]
    Y. Guo, Y. Liu, A. Oerlemans, S. Lao, S. Wu, and M. S. Lew, “Deep learning for visual understanding: A review,” Neurocomputing, vol. 187, no. 16, pp. 27–48, 2016.
    [28]
    N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proc. the IEEE Int. Conf. on Computer Vision and Pattern Recognition, IEEE, San Diego, CA, USA, 2005, pp. 886–893.
    [29]
    D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004. doi: 10.1023/B:VISI.0000029664.99615.94
    [30]
    T. Ojala, M. Pietikäinen, and T. Mäenpää, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002. doi: 10.1109/TPAMI.2002.1017623
    [31]
    C. Wojek and B. Schiele, “A performance evaluation of single and multi-feature people detection,” in Proc. the Joint Pattern Recognition Symposium, Springer, Berlin, Germany, 2008, pp. 82–91.
    [32]
    P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1627–1645, 2010.
    [33]
    Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015. doi: 10.1038/nature14539
    [34]
    Y. Guo, Y. Liu, A. Oerlemans, S. Lao, S. Wu, and M. S. Lew, “Deep learning for visual understanding: A review,” Neurocomputing, vol. 187, no. 9, pp. 27–48, 2016.
    [35]
    A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Proc. the Advances in Neural Information Processing Systems, Springer, Lake Tahoe, Nevada, USA, 2012, pp. 1097–1105.
    [36]
    J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, and K. Murphy, “Speed/accuracy trade-offs for modern convolutional object detectors,” in Proc. the IEEE Conf. on Computer Vision and Pattern Recognition, IEEE, Honolulu, Hawaii, USA, 2017, pp. 7310–7311.
    [37]
    R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proc. the IEEE Conf. on Computer Vision and Pattern Recognition, IEEE, Columbus, Ohio, USA, 2014, pp. 580–587.
    [38]
    R. Girshick, “Fast R-CNN,” in Proc. the IEEE Int. Conf. on Computer Vision, IEEE, Santiago, Chile, 2015, pp. 1440–1448.
    [39]
    S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards realtime object detection with region proposal networks,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017. doi: 10.1109/TPAMI.2016.2577031
    [40]
    J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proc. the IEEE Conf. on Computer Vision and Pattern Recognition, IEEE, Las Vegas, USA, 2016, pp. 779–788.
    [41]
    W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in Proc. the European Conf. on Computer Vision, Springer, Amsterdam, The Netherlands, 2016, pp. 21–37.
    [42]
    J. Hung and A. Carpenter, “Applying Faster R-CNN for object detection on malaria images,” in Proc. the IEEE Conf. on Computer Vision and Pattern Recognition Workshops, IEEE, Honolulu, Hawaii, USA, 2017, pp. 56–61.
    [43]
    Y. C. Lo, C. F. Juang, I. F. Chung, S. N. Guo, M. L. Huang, M. C. Wen, C. J. Lin, and H. Y. Lin, “Glomerulus detection on light microscopic images of renal pathology with the Faster R-CNN,” in Proc. the Int. Conf. on Neural Information Processing, Springer Montreal, Canada, 2018, pp. 369–377.
    [44]
    S. Dong, X. Liu, Y. Lin, T. Arai, and M. Kojima, “Automated tracking system for time lapse observation of C. elegans,” in Proc. the Int. Conf. on Mechatronics and Automation, IEEE, Changchun, China, 2018, pp. 504–509.
    [45]
    W. Liu, L. Cheng, and D. Meng, “Brain slices microscopic detection using simplified ssd with cycle-gan data augmentation,” in Proc. the Int. Conf. on Neural Information Processing, Springer, Siem Reap, Cambodia, 2018, pp. 454–463.
    [46]
    L. Cheng and W. Liu, “An effective microscopic detection method for automated silicon-substrate ultramicrotome (asum),” Neural Processing Letters, to be published, 2019.
    [47]
    M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (VOC) challenge,” Int. Journal of Computer Vision, vol. 88, no. 2, pp. 303–338, 2010. doi: 10.1007/s11263-009-0275-4
    [48]
    X. Luo, M. Zhou, S. Li, and M. Shang, “An inherently non-negative latent factor model for high-dimensional and sparse matrices from industrial applications,” IEEE Trans. Industrial Informatics, vol. 14, no. 5, pp. 2011–2022, 2018. doi: 10.1109/TII.2017.2766528
    [49]
    X. Luo, M. Zhou, S. Li, Y. Xia, Z. H. You, Q. Zhu, and H. Leung, “Incorporation of efficient second-order solvers into latent factor models for accurate prediction of missing qos data,” IEEE Trans. Cybernetics, vol. 48, no. 4, pp. 1216–1228, 2018. doi: 10.1109/TCYB.2017.2685521
    [50]
    X. Luo, M. Zhou, Y. Xia, Q. Zhu, A. C. Ammari, and A. Alabdulwahab, “Generating highly accurate predictions for missing qos data via aggregating non-negative latent factor models,” IEEE Trans. Neural Networks and Learning Systems, vol. 27, no. 3, pp. 579–592, 2016. doi: 10.1109/TNNLS.2015.2415257
    [51]
    X. Luo, H. Wu, H. Yuan, and M. Zhou, “Temporal pattern-aware qos prediction via biased non-negative latent factorization of tensors,” IEEE Trans. Cybernetics, vol. 50, no. 5, pp. 1798–1809, 2020. doi: 10.1109/TCYB.2019.2903736


    Figures (14) / Tables (1)

    Article Metrics

    Article views: 3530; PDF downloads: 62

    Highlights

    • A novel mechanism based on circular silicon substrates is proposed for the collection of brain sections
    • An automatic collection system based on microscopic object detection and feedback control strategy is proposed
    • With the proposed automatic collection device, human operators can be partially liberated from the tedious manual collection process
