- 1 School of Human Science and Environment, University of Hyogo, Himeji, Japan
- 2 Department Biologie II, Ludwig-Maximilians-Universität München, Martinsried, Germany
- 3 Department of Earth System Science, Fukuoka University, Fukuoka, Japan
The morphology of a neuron is strongly related to its physiological properties, and thus to the information processing functions it performs. Optical microscope images are widely used for extracting the structure of neurons. Although several approaches have been proposed to trace and extract complex neuronal structures from microscopy images, available methods remain prone to errors. In this study, we present a practical scheme for processing confocal microscope images and reconstructing neuronal structures. We evaluated this scheme using image data samples and associated “gold standard” reconstructions from the BigNeuron Project. In these samples, dendritic arbors belonging to multiple projection branches of the same neuron overlapped in space, making it difficult to automatically and accurately trace their structural connectivity. Our proposed scheme, which combines several software tools for image masking and filtering with an existing tool for dendritic segmentation and tracing, outperformed state-of-the-art automatic methods in reconstructing such neuron structures. To evaluate our scheme further, we applied it to a honeybee local interneuron, DL-Int-1, which has complex arbors and is considered to be a critical neuron for encoding the distance information indicated in the waggle dance of the honeybee.
Introduction
Neuronal morphology is strongly related to neural function: neuronal arborization reflects the input and output regions, and morphological characteristics such as branching pattern, length and thickness are related to signal transmission properties. Changes in neuronal morphology have been observed in various nervous systems, especially as a consequence of development or experience (Jan and Jan, 2003; Grueber et al., 2005; Yasunaga et al., 2010; Luebke et al., 2015) and of differences between brain regions (Jacobs and Scheibel, 2002). For cortical pyramidal neurons, detailed analyses across brain regions and species have been performed, relating morphological results from optical and electron microscopy to response characteristics from electrophysiological experiments (Elston, 2003; Spruston, 2008; Luebke, 2017). In this context, neuron segmentation and modeling is an effective way to quantitatively evaluate morphological properties of neurons, and the resulting models can be shared among researchers as a common resource (Halavi et al., 2008).
Neuron morphology is most commonly extracted from image stacks obtained by scanning dye-filled neurons in the brain under a confocal laser scanning microscope (CLSM). Various software tools have been developed and applied to neuron images. For light microscopy and confocal imaging data, segmentation tools based on manual operations, such as Neurolucida and NeuroExplorer, have been used effectively for segmentation and parameterization of neuron morphology (Benavides-Piccione et al., 2006; Bianchi et al., 2013; Magliaro et al., 2017). Semi-automatic and automatic segmentation tools have been developed and applied to electron microscopy images (Lang et al., 2011; Berning et al., 2015; Jones et al., 2015). To extract and trace neuronal structure from CLSM images, we developed the software SIGEN (Yamasaki et al., 2006; Minemoto et al., 2009) and applied it to the segmentation of neurons in the insect brain. This program uses a simple algorithm called the single-seed distance transform, which works effectively for the segmentation of various types of neurons. We have previously applied SIGEN to reconstruct several neurons in the moth brain, which were collected into a database and used for morphological analyses and network simulations (Ikeno et al., 2012).
As is well known, automatic segmentation of neuronal structure in CLSM images is one of the most challenging problems in biological image processing, and several projects spanning multiple research labs have been undertaken to foster better algorithms (Acciai et al., 2016). The DIADEM challenge (Brown et al., 2011) was a competition held to raise awareness of this problem and spur the development of automatic reconstruction algorithms. In this project, a metric program for comparing two digital morphological reconstructions was developed (Gillette et al., 2011) and continues to be a widely used measure. More recently, the BigNeuron project has been conducted to develop automatic segmentation methods for CLSM images (Peng et al., 2015). Although automatic neuronal segmentation and reconstruction has advanced significantly through these efforts, it is still difficult to correctly reconstruct neurons with complex structure. Hence, it can be useful to combine automatic segmentation tools with manual operations for effective analysis of neuronal structure.
One interesting application of neuronal structure analysis is the investigation of the neural mechanisms for encoding and decoding the spatial information about profitable flowers conveyed by the honeybee waggle dance (von Frisch, 1967). Honeybees use a unique temporal pattern of vibration pulses, generated by wing beats, to convey spatial information to their hive mates. The airborne vibrations are received by sensory neurons located in Johnston’s organ of the antenna of the receivers. These sensory neurons mainly project their neurites into the dorsal lobes (DLs) of the brain (Ai et al., 2007). Among several types of interneurons that are sensitive to vibratory stimuli applied to the antenna (Ai et al., 2009, 2017), DL-Int-1 is a major local interneuron arborizing in the DL (Ai et al., 2009). This neuron shows tonic inhibitory responses during a train of vibratory stimuli applied to the antenna with the same temporal structure as that received during the waggle dance, suggesting that DL-Int-1 is a candidate for encoding the distance to the flower location indicated by the waggle dancer (Ai et al., 2017). Its soma is located close to the dorsal central body in the posterior protocerebral lobe (PPL), and a primary neurite extends into the DL. In the DL, two thick branches, called the dorsal branch (DB) and ventral branch (VB), emerge from the primary neurite (Ai et al., 2009). These two branches overlap densely in the DL (Ai, 2010), which makes it difficult to separate them by any automatic segmentation process. To segment the morphology of this interneuron, we developed and applied a practical scheme that combines manual extraction and mask filtering of neuronal branches with automatic segmentation using SIGEN (Minemoto et al., 2009). In this study, we evaluated its performance by applying the new scheme to fruit fly interneurons and the honeybee DL-Int-1, and found that it is useful for segmenting neurons in which several arborizations overlap in the same region.
Materials and Methods
Neuronal Segmentation and its Evaluation
Our scheme for segmentation of CLSM images starts with deconvolution in Fiji (RRID: SCR_002285; Figure 1A). We used the “Iterative Deconvolve 3D” plugin along with a point spread function (PSF) generated by the “Diffraction PSF 3D” plugin with default parameter values (Dougherty, 2005). This is an effective process for decreasing blurring and distortion in CLSM images. Next, we conducted neuron segmentation with SIGEN (RRID: SCR_016284), which was applied to the deconvolved images. Segmentation results were obtained in SWC format, which is widely used as a standard data format for describing neuronal structure (Ascoli et al., 2007).
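The deconvolution itself is performed with the Fiji plugins named above. To illustrate the underlying idea, the following is a minimal sketch of iterative (Richardson-Lucy) deconvolution of a 3D stack with a synthetic Gaussian PSF, written in Python with NumPy/SciPy. The function names, the Gaussian PSF approximation and the iteration count are our own assumptions for illustration; they are not part of the Fiji plugins or of SIGEN.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(shape=(9, 9, 9), sigma=(2.0, 1.0, 1.0)):
    """Synthetic 3D Gaussian PSF (z, y, x); an illustrative stand-in for 'Diffraction PSF 3D'."""
    grids = np.meshgrid(*[np.arange(n) - n // 2 for n in shape], indexing="ij")
    psf = np.exp(-0.5 * sum((g / s) ** 2 for g, s in zip(grids, sigma)))
    return psf / psf.sum()

def richardson_lucy(stack, psf, n_iter=30):
    """Iterative deconvolution of a 3D float stack with a known PSF."""
    estimate = np.full_like(stack, stack.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = stack / np.maximum(blurred, 1e-12)   # avoid division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# usage: deconvolved = richardson_lucy(raw_stack.astype(float), gaussian_psf())
```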
Figure 1. (A) Our basic neuron segmentation scheme from a confocal laser scanning microscope (CLSM) image. The CLSM image was deconvolved by image processing software. Neuronal branches were extracted by the neuron segmentation software SIGEN and stored in SWC format. (B) Our revised segmentation scheme incorporating a process for manually separating overlapping neurites. After the CLSM image was deconvolved by image processing software, a mask image was generated by tracing partial neuronal branches on the deconvolved image data. A partial branch image was obtained by applying the AND image operation between the deconvolved image and the mask image. The image of the remaining neuronal branches was generated by the exclusive-or (EOR) image operation between the deconvolved image and the partial branch image. The neuronal segmentation software was applied to these two branch images independently. The whole neuron structure was reconstructed by combining the two reconstruction results.
To evaluate the performance of our scheme based on SIGEN, we compared its segmentation results with those of other available segmentation tools. The BigNeuron project provides a large collection of image data containing singly stained neurons along with corresponding gold standard segmentation results. The gold standard results were manually segmented from the image data and serve as reference morphologies when comparing different algorithms. We used 40 samples from the package “p_changed7_janelia_flylight_part1,” which includes fly brain neurons of various shapes imaged by CLSM, as in our experiments. Parameter values for segmentation by SIGEN were as follows. Only fragmented segments with volumes larger than 30 voxels were included in the segmentation (VT = 30). Fragmented segments whose distance from the main branch, after rounding to the nearest voxel, was less than 30 voxels (DT = 30) were connected to the reconstructed neuron. Segmentation results were evaluated using the DIADEM metric (Gillette et al., 2011), which was developed and provided by the DIADEM Challenge project for evaluating neuronal segmentation results by comparison with gold standard reconstructions. The DIADEM metric ranges from 0 to 1 depending on the degree of matching of several features, such as node positions and branching state, with a value of 1 indicating a perfect match.
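As a rough illustration of how a volume threshold such as VT acts on a binarized stack, the sketch below removes connected components smaller than VT voxels using SciPy. It does not reproduce SIGEN's internal fragment-connection logic (the DT step), and the function and variable names are our own.

```python
import numpy as np
from scipy import ndimage

def remove_small_fragments(binary_stack, vt=30):
    """Keep only connected components with at least `vt` voxels (the VT threshold)."""
    labels, n = ndimage.label(binary_stack)
    sizes = ndimage.sum(binary_stack, labels, index=np.arange(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= vt          # label 0 is background and is always dropped
    return keep[labels]

# usage: cleaned = remove_small_fragments(deconvolved > threshold, vt=30)
```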
Metric values for a given algorithm can vary widely depending on the morphological complexity of the input image data. In particular, when multiple neuronal branches spread their dendrites into the same region, it can be extremely difficult to accurately extract their structural connectivity by automatic segmentation. To apply our segmentation software to such complicated neuronal structures, we propose a revised scheme incorporating a process for manually separating overlapping neurites (Figure 1B). In this revised scheme, we traced the neurites manually to create a mask image for separating branches extending into the same region. For manual tracing, we used the 3D segmentation software ITK-SNAP (Yushkevich et al., 2006; RRID:SCR_002010). The manual tracing operation started from a clearly separated neuronal branch point. Neuronal segments were connected to this branch point and its terminals based on characteristics of the image, such as the direction of intensity changes in a voxel cluster and the connectivity and bifurcation pattern from the start point to the periphery. Since this process requires a certain level of skill, it introduces the possibility of human error. However, manual tracing of partial neuronal branches often provides more reliable separation than automatic tracing. A partial neuron image containing one of the overlapping branches was then obtained by applying the logical product (AND) image operation between the manually traced branch image and the deconvolved neuron image. The image of the other overlapping branch was obtained by the exclusive-or (EOR) image operation between the deconvolved neuron image and the previously obtained partial neuron image. Since the EOR operation yields the background value (0) when the voxel values of the two images are identical, the result is an image in which the previously extracted branches are removed from the original. Where more neurites are densely concentrated in one region, the same procedure can be applied recursively, separating the neurites into individual partial structures at each step.
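The AND and EOR operations described above can be expressed directly as voxel-wise array operations. The following is a minimal sketch, assuming the deconvolved stack is a NumPy intensity array and the manually traced mask (e.g., exported from ITK-SNAP) is a boolean array of the same shape; the function and variable names are illustrative and not part of SIGEN or ITK-SNAP.

```python
import numpy as np

def split_overlapping_branches(deconvolved, branch_mask):
    """Separate one traced branch (AND) and the remainder (EOR) of a neuron image.

    deconvolved : 3D intensity array of the whole neuron
    branch_mask : 3D boolean array marking one set of branches (manual trace)
    """
    # AND: keep intensity only inside the traced mask
    branch_img = np.where(branch_mask, deconvolved, 0)
    # EOR: voxels identical in both images become background (0); the rest keep their intensity
    remainder = np.where(branch_img == deconvolved, 0, deconvolved)
    return branch_img, remainder

# usage: vb_img, db_img = split_overlapping_branches(deconvolved, vb_mask)
```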
The whole branching structure was obtained by combining the segmentation results of the partial neurites. Finally, the segmented neuron structure was stored by the software in SWC and VTK formats, so that the results could be displayed and edited with other tools such as Vaa3D (Peng et al., 2010; RRID:SCR_002609), NeuTube (Feng et al., 2015), or ParaView (Ayachit, 2015; RRID:SCR_002516).
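In SWC format, each line holds a node index, a type code, x/y/z coordinates, a radius and a parent index (−1 for the root). Combining the reconstructions of the two partial neurites therefore amounts to re-indexing the nodes of one file and attaching its root to the appropriate node of the other. The sketch below shows this under the simplifying assumption that the attachment node in the first reconstruction is known; it is an illustration, not SIGEN's own merging routine.

```python
def read_swc(path):
    """Return a list of SWC rows: [id, type, x, y, z, radius, parent]."""
    rows = []
    with open(path) as f:
        for line in f:
            if line.strip() and not line.startswith("#"):
                n, t, x, y, z, r, p = line.split()[:7]
                rows.append([int(n), int(t), float(x), float(y),
                             float(z), float(r), int(p)])
    return rows

def merge_swc(base, part, attach_id):
    """Append `part` to `base`, re-indexing its nodes and attaching its root to node `attach_id`."""
    offset = max(row[0] for row in base)
    merged = [row[:] for row in base]
    for n, t, x, y, z, r, p in part:
        new_parent = attach_id if p == -1 else p + offset
        merged.append([n + offset, t, x, y, z, r, new_parent])
    return merged

# usage: whole = merge_swc(read_swc("vb.swc"), read_swc("db.swc"), attach_id=1)
```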
Acquiring the CLSM Image of DL-Int-1 and its Segmentation
Neuron images of DL-Int-1 in the honeybee brain were obtained experimentally. The details of the intracellular recording and staining procedures are described in Ai et al. (2009, 2017); we describe them here briefly. Honeybees (Apis mellifera L.) were reared in hives on the Fukuoka University campus. Forager worker bees, identified by pollen loads on their hind legs, were caught at the hive entrance and used in this study. In addition, newly emerged juveniles were isolated in a cage when they hatched from brood cells. Their ages were recorded in days, and bees less than 3 days old were used in our experiments. For the experiment, a bee was anesthetized by cooling and mounted in an acrylic chamber. The bee was fed 1 M sucrose solution and kept overnight in the dark with high humidity at 20°C. The head of the bee was fixed with wax. The frontal surface of the brain was exposed by cutting a small rectangular window between the compound eyes. The glands and tracheal sheaths on top of the brain were removed. The mouthparts, including the mandibles, were cut off to expose and remove the esophagus. Small droplets of honeybee physiological saline were applied to wash away the residue of the esophagus and to enhance electrical contact with a platinum ground electrode placed in the head capsule next to the brain.
Glass electrodes were filled at the tip with 3% Lucifer Yellow CH dilithium salt (L0529, Sigma-Aldrich, Tokyo, Japan) dissolved in 100 mM KCl, with Dextran Tetramethylrhodamine (3000 MW, anionic, lysine fixable; D3308, Thermo Fisher), or with Alexa 647 hydrazide (A20502, Thermo Fisher, Tokyo, Japan), yielding DC resistances in the range of 150–300 MΩ. The electrode was inserted into a region of the DL after the neuronal sheath and a small area of the brain’s neurilemma had been scratched. Electrical signals were amplified with an amplifier (MEZ 8301, Nihon Kohden, Tokyo, Japan) and displayed on an oscilloscope. After identifying DL-Int-1 by its unique response pattern to vibration stimuli applied to the antenna, the fluorescent dye was injected iontophoretically into the neuron.
Thereafter, the brains were removed from the head capsule, fixed in 4% paraformaldehyde for 4 h at room temperature, rinsed in phosphate buffer solution, dehydrated, and cleared in methyl salicylate for subsequent observation. The cleared specimens containing intracellularly stained neurons were viewed from the posterior side of the brain under a CLSM (LSM 510, Carl Zeiss, Jena, Germany) with a Zeiss Plan-Apochromat 25×/0.8 NA oil objective (working distance 0.57 mm). Alexa 647 was excited by the 633-nm line of a HeNe laser, Dextran Tetramethylrhodamine by the 543-nm line of a HeNe laser, and Lucifer Yellow by the 488-nm line of an argon laser. More than 250 optical sections were taken at 1 μm intervals throughout the entire brain depth of each specimen. The image resolution of each section was 1024 × 1024 pixels, with each pixel covering 0.36 μm × 0.36 μm.
Results
Evaluation of Semiautomatic Segmentation Scheme
To confirm the performance of our basic segmentation tool SIGEN, we applied it to samples provided by the BigNeuron project. Neuron images with gold standard segmentation data can be downloaded from the project site. We used samples in the subset “p_changed7_janelia_flylight_part1” of the gold166 package. A neuron with complex structure was reconstructed well by SIGEN, resulting in a DIADEM metric value of 0.850 (Figure 2A). Even from a low-contrast image, neuronal structure was reconstructed with a relatively high DIADEM metric value of 0.597 (Figure 2B). In contrast, the metric value was relatively low for a neuron with simpler structure (Figure 2C). In this case, although thick and long branches were reconstructed well, SIGEN had difficulty tracing small, dense neurites located in the lower right part of the image. However, experts also had difficulty manually tracing these dense arborizations, as seen from the gold standard result. The average DIADEM metric value of SIGEN for the 40 samples was 0.717 ± 0.163. The higher average value compared with other software and the small variance indicate that neuronal morphology can be reconstructed stably by SIGEN (Figure 3). This package contains images of insect neurons of various shapes. Because SIGEN was able to handle such neuron images successfully, the software is considered suitable for the reconstruction of neuronal morphology.
Figure 2. Three examples of segmentation by our original software, SIGEN, on samples provided by the BigNeuron project. Top row: 2D projected image data of the neurons. Middle row: the manually segmented gold standard results provided by BigNeuron. Bottom row: segmentation results by SIGEN. The difference between the gold standard and the automatic segmentation result was measured by the DIADEM metric, whose value ranges from 0 (completely unmatched) to 1 (perfectly matched). A neuron with a complex branching structure was reconstructed well, as shown in sample (A; DIADEM value: 0.850). Even from a low-contrast image, neuronal structure was reconstructed relatively well (B; DIADEM value: 0.597). In contrast, the neuron shown in sample (C) was reconstructed with a low DIADEM metric value of 0.458, due to the presence of many tiny neurites that were difficult to extract, a difficulty also apparent in the gold standard result.
Figure 3. The average values of the DIADEM metric for 40 samples in the package “p_changed7_janelia_flylight_part1” of BigNeuron. Error bars indicate standard deviation, and the numbers in parentheses at the top of the graph are the number of samples for which no DIADEM value was obtained for the segmentation result. Abbreviations are segmentation software names: SIG, SIGEN; AAu, axis analyzer updated; FSu, fastmarching spanningtree updated; NeuT, neuTube; NTu, neuTube updated; ADV, Advantra; ADu, Advantra updated; STR, smartTracing; LCM, LCMboost; LCMu, LCMboost updated; SNK, snake; XYZ, All-path-pruning; MST, MOST; APP1, All-path-pruning1; APP2, All-path-pruning2; APP3, All-path-pruning3; XY3D, XY_3D_TreMap; GPST, NeuroGPSTree; Tu170, Tubularity model_S MST tracing th170; Tu200, Tubularity model S MST tracing th 200; NST, NeuroStalker; MST, MST Tracing; NCh, NeuronChaser updated.
In the image set used above, some samples have multiple neurites mixed in one region. In such cases, all software approaches failed to produce good reconstructions. For example, the neuron “GMR_57C10_AD_01-Two_recombinase_flipouts_A-f-A-20111108_1_C4-right_optic_lobe.v3draw.extract_3” has a primary neurite extending from the cell body, with neurites spreading in a certain region (arrow in Figure 4A). Furthermore, a secondary neurite from the dendrite runs along the primary neurite (Figures 4A,B). It is important to capture the extension of such a neuron accurately; in the manually extracted gold standard, the primary neurite folded near the secondary neurite was extracted accurately (Figure 4C). However, in the reconstruction generated by our automatic segmentation scheme with SIGEN (Figure 1A), intermediate and terminal points of the neurites were erroneously connected (compare Figures 4C,D, yellow arrows). The structure was significantly different from the gold standard, resulting in a DIADEM metric value of 0.488. This misinterpretation of the neurite form was not limited to SIGEN: the DIADEM metric values for the other automatic segmentation tools were also low, with a maximum of 0.544 and an average of 0.151. By applying our revised segmentation scheme (Figure 1B), neurites elongating in different directions were separated, extracted and then recombined (shown in red and green in Figure 4B). This made it possible to accurately extract the major cell structure, as shown by the large improvement of the DIADEM metric value from 0.488 to 0.825 (Figure 4E). Therefore, in cases where multiple dendrites extend into the same area, our proposed method is more effective than all of the compared automatic segmentation methods. Grasping the principal connection state of the neurites is important for constructing highly accurate morphological models and for characterizing neurons. For complex branching structures, it is necessary to acquire knowledge about the connectivity in such regions through detailed analysis, for example by tracing locally acquired high-resolution images. This information can be used to separate the major branches and is essential for understanding the structure of the whole neuron.
Figure 4. An example of segmentation of a neuron with several spatially overlapping arborizations. (A) Projection image of the neuron. The primary neurite from the soma extends and arborizes in the region indicated by the yellow arrow. A secondary neurite runs parallel to the primary neurite. (B) The two neurites (the soma and the primary neurite in red, the secondary neurite in green) were separated by manual operation in our revised scheme. (C) The gold standard segmentation. (D) Segmentation result of our basic scheme (Figure 1A). (E) Segmentation result of our revised scheme (Figure 1B). In the result of the basic scheme in (D), there are segmentation errors at major connection points of the neurites (yellow arrows in D), most of which were rectified by the revised scheme in (E).
Segmentation of a Vibration Sensitive Interneuron in Honeybee Brain
In this study, we focused on the morphology of the DL-Int-1 arborizations of the DB and VB in the DL (n2 and n3 in Figure 5C, respectively). To obtain an image of the whole arborization, separately captured CLSM image stacks were stitched in 3D space using Fiji (Preibisch et al., 2009; Schindelin et al., 2012).
Figure 5. Segmentation of a vibration-sensitive interneuron in the honeybee brain, DL-Int-1. (A) The whole morphology of DL-Int-1 in the honeybee brain. The soma (arrowhead) is located below the calyx of the mushroom body (MB). A long primary neurite extends ventrally through the posterior protocerebral lobe (PPL) and arborizes in the dorsal lobe (DL). (B) Typical electrophysiological response pattern of DL-Int-1. A tonic inhibitory response was observed during a continuous 265 Hz vibration stimulus to the antenna. The upper trace shows the action potentials of DL-Int-1 (bottom trace: vibration stimulus). (C) Arborization of the neurites of DL-Int-1 in the dorsal lobe. n1 is the point at which the primary neurite bifurcates into the dorsal branch (DB) and ventral branch (VB), which are difficult to separate spatially. The DB starts from branching point n2, and the VB from branching point n3.
A neurite terminal was automatically selected by SIGEN as the start segment (root point) of segmentation. After obtaining the neuronal structure with our segmentation tool, we moved the root point to the branching node of the neurite toward the soma (n1 in Figure 5C). The shapes of the neuronal branches were extracted well, but several connections among branch segments were incorrect, even at major branching points connecting thick branches (white arrow in Figure 6A). To reconstruct the major neuronal structure correctly, we applied our revised segmentation scheme (Figure 1B). We generated a mask image for extracting the VB (Figure 6B) from the whole neuron image. The manually traced mask image of the VB was applied with the AND image operation to the deconvolved image to obtain the VB image (Figure 6C). The DB image was extracted by the EOR image operation between the deconvolved image and the VB image (Figure 6D). Segmentation and reconstruction of the neuron morphology were performed on the VB and DB images independently. The whole neuronal structure was reconstructed by connecting these two reconstruction results (Figure 6E). Although the overall shapes of the neuronal branches obtained by the basic (Figure 1A) and revised (Figure 1B) schemes were similar, the branch connections and the distances from the root (start) point differed considerably between the methods (Figure 7). As shown in the enlarged view, the thick VB neurite bifurcates into two neurites; one branch extends to the dorsal side, and the other extends to the ventral side and branches again. This structure could be extracted correctly by the revised scheme (Figure 7B), because the whole structure had been grasped in advance, whereas it could not be extracted by SIGEN alone and was disconnected just after the branching point (Figure 7A). Using our revised segmentation scheme, we obtained reconstruction results from eight foragers (F; Figure 8A) and seven age-controlled juveniles (J; Figure 8B). Detailed analyses based on morphometric features, such as branching patterns, branch segment length and diameter, could be addressed in future research to assess morphological differences between ages.
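Moving the root point of a reconstruction, as done above for node n1, amounts to reversing the parent links along the path from the new root back to the old one. The following is a minimal sketch of this operation on an SWC-style parent table; it illustrates the idea only and is not the routine used inside SIGEN.

```python
def reroot(parent, new_root):
    """Re-root a tree given as {node_id: parent_id} (parent_id == -1 marks the root).

    Returns a new parent table in which `new_root` becomes the root and every
    parent link on the path back to the old root is reversed.
    """
    # walk from the new root up to the old root
    path = [new_root]
    while parent[path[-1]] != -1:
        path.append(parent[path[-1]])
    rerooted = dict(parent)
    rerooted[new_root] = -1
    # reverse each link along the path: the former parent now points to its child
    for child, par in zip(path[:-1], path[1:]):
        rerooted[par] = child
    return rerooted

# usage (hypothetical node ids): reroot({1: -1, 2: 1, 3: 2, 4: 2}, new_root=3)
# -> {1: 2, 2: 3, 3: -1, 4: 2}
```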
Figure 6. Reconstruction of the neurite arborization of DL-Int-1 using our revised segmentation scheme. (A) 2D projection image of DL-Int-1 in the dorsal lobe. The white arrow indicates the point at which SIGEN could not extract the correct connection with our basic scheme. (B) 2D projection image of a manually traced 3D mask filter for separating the VB and DB. (C) The VB extraction result obtained by applying the mask filter (B) to the whole arborization image (A). (D) The DB obtained by the EOR image operation between the whole arborization (A) and the VB (C). (E) Skeleton traces of the reconstruction overlaid on the original 2D-projected image. Green and red skeleton traces show the VB and DB, respectively.
Figure 7. Comparison of distances from the root point in the segmentation results of our basic (A) and revised (B) schemes. Distances from the root point are indicated by pseudocolors. The enlarged views show locations where the connections differ between the basic and revised schemes (white arrows).
Figure 8. Segmentation results of DL-Int-1 for adult workers. (A) Foragers (F). (B) Age-controlled juveniles (J). The numbers following F and J indicate the sample index. In (B), the numbers in parentheses indicate the age in days.
Discussion
We applied the segmentation software SIGEN to CLSM images to reconstruct morphological models of neurons. It achieved high scores in benchmark tests on the BigNeuron samples, and its segmentation results demonstrated its reliability in extracting the major structure of a neuron without human bias. Various approaches to developing automatic and semi-automatic neuron segmentation software are being pursued in fields including neuromorphometric research and neural computation (Lu et al., 2009; Sümbül et al., 2016; Li et al., 2017).
In this study, we also developed a segmentation scheme for neurons with complex arborization patterns that performed better than the existing approaches we compared. Preprocessing the CLSM images was useful for separating the neuron image from the background. The main difficulty in extracting structures like that of DL-Int-1 arises from multiple arborizations extending and overlapping in the same region, which makes it difficult to identify and connect them from fragmented images. We applied state-of-the-art automatic segmentation tools, such as APP2 and SmartTracing (Chen et al., 2015), to these neurons; the reconstruction results contained many erroneous branches caused by noise and the background brain image. Although our revised scheme requires a user to conduct manual tracing to apply a mask filter, it produced more accurate segmentation results for structurally complex neurons. Manual tracing can introduce a problem of reproducibility, but at present it provides a more accurate result than fully automated techniques. Moreover, in our scheme the manual step is represented by the mask filter, which can be saved as objective documentation of the intervention.
Regarding the extraction of neurites, the staining method and image quality have a major influence; it has been shown that more accurate morphology can be obtained from optical microscopy images of specimens treated with DAB (Elston, 2007; Elston and Fujita, 2014). Comparison and evaluation against extraction results from images acquired by confocal laser microscopy or two-photon microscopy are also important. Furthermore, the accuracy of morphological extraction depends strongly on the morphology of the neurons themselves. Owing to anisotropic resolution (the resolution in the Z direction is usually poorer), extraction accuracy is often low in a particular direction. Even in such cases, it is effective to acquire detailed connectivity information by manual segmentation and to reflect the result in the extraction of the whole neuron. In this respect, combining manual operation with automatic segmentation can be effective.
The DL-Int-1 interneuron is considered to play a key role in the primary auditory center for vibration signal processing. In this study, we obtained more than 15 reconstructions of DL-Int-1 from foragers and newly emerged juveniles. Future research could apply detailed morphometric analyses to the segmentation results and evaluate similarities and differences in neuronal morphology with age.
Conclusion
Automatic and semiautomatic segmentation techniques have become very useful for morphometric analysis of neurons. In this study, we combined manual masking and filtering with existing software for semi-automated dendritic segmentation of interneurons with complex structures. Efforts to develop segmentation techniques will benefit from new algorithms for image processing and machine learning in the future. However, this development is ongoing, because segmentation results depend on various factors such as neuron shape and image quality; there is therefore a continuing need for better automatic segmentation software tools. Until such tools become available, we consider it best to respond flexibly with a combination of automatic segmentation software and manual operations, as proposed here. Using the procedure developed here, we can proceed to analyze age- and labor-dependent morphometric changes of a critical interneuron involved in deciphering dance communication in the honeybee.
Author Contributions
HA, TW and HI designed this work. KK and HA performed the experiments to acquire neuron image data. HI and AK conducted the segmentations. HI drafted the manuscript, and all authors reviewed and approved the final version of the manuscript.
Funding
This research was supported by the Ministry of Education, Culture, Sports, Science and Technology of Japan through Grants-in-Aid for Scientific Research (C) (Grant Nos. 25330342, 17K00414 and 18K160345) and a Grant-in-Aid for Challenging Exploratory Research (15K14569), by a grant from the Central Research Institute of Fukuoka University (Grant No. 171031), and by the Strategic International Cooperative Program of the Japan Science and Technology Agency (JST) and the Federal Ministry of Education and Research of Germany (BMBF, Grant 01GQ1116).
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
We thank Dr. Philipp Rautenberg for his valuable advice on this study and Mr. Sho Iizuka for his efforts on maintaining and revising of our segmentation software.
References
Acciai, L., Soda, P., and Iannello, G. (2016). Automated neuron tracing methods: an updated account. Neuroinformatics 14, 353–367. doi: 10.1007/s12021-016-9310-0
Ai, H. (2010). Vibration-processing interneurons in the honeybee brain. Front. Syst. Neurosci. 3:19. doi: 10.3389/neuro.06.019.2009
Ai, H., Kai, K., Kumaraswamy, A., Ikeno, H., and Wachtler, T. (2017). Interneurons in the honeybee primary auditory center responding to waggle dance-like vibration pulses. J. Neurosci. 37, 10624–10635. doi: 10.1523/JNEUROSCI.0044-17.2017
Ai, H., Nishino, H., and Itoh, T. (2007). Topographic organization of sensory afferents of Johnston’s organ in the honeybee brain. J. Comp. Neurol. 502, 1030–1046. doi: 10.1002/cne.21341
Ai, H., Rybak, J., Menzel, R., and Itoh, T. (2009). Response characteristics of vibration-sensitive interneurons related to Johnston’s organ in the honeybee, Apis mellifera. J. Comp. Neurol. 515, 145–160. doi: 10.1002/cne.22042
Ascoli, G. A., Donohue, D. E., and Halavi, M. (2007). NeuroMorpho.Org: a central resource for neuronal morphologies. J. Neurosci. 27, 9247–9251. doi: 10.1523/JNEUROSCI.2055-07.2007
Ayachit, U. (2015). The ParaView Guide: A Parallel Visualization Application. Carrboro, NC: Kitware Inc. ISBN 978-1930934306.
Benavides-Piccione, R., Hamzei-Sichani, F., Ballesteros-Yáñez, I., DeFelipe, J., and Yuste, R. (2006). Dendritic size of pyramidal neurons differs among mouse cortical regions. Cereb. Cortex 16, 990–1001. doi: 10.1093/cercor/bhj041
Berning, M., Boergens, K. M., and Helmstaedter, M. (2015). SegEM: efficient image analysis for high-resolution connectomics. Neuron 87, 1193–1206. doi: 10.1016/j.neuron.2015.09.003
Bianchi, S., Stimpson, C. D., Duka, T., Larsen, M. D., Janssen, W. G. M., Collins, Z., et al. (2013). Synaptogenesis and development of pyramidal neuron dendritic morphology in the chimpanzee neocortex resembles humans. Proc. Natl. Acad. Sci. U S A 110, 10395–10401. doi: 10.1073/pnas.1301224110
Brown, K. M., Barrionuevo, G., Canty, A. J., De Paola, V., Hirsch, J. A., Jefferis, G. S., et al. (2011). The DIADEM data sets: representative light microscopy images of neuronal morphology to advance automation of digital reconstructions. Neuroinformatics 9, 143–157. doi: 10.1007/s12021-010-9095-5
Chen, H., Xiao, H., Liu, T., and Peng, H. (2015). SmartTracing: self-learning-based Neuron reconstruction. Brain Inform. 2, 135–144. doi: 10.1007/s40708-015-0018-y
Dougherty, R. (2005). “Extensions of DAMAS and benefits and limitations of deconvolution in beamforming,” in 11th AIAA/CEAS Aeroacoustics Conference (26th AIAA Aeroacoustics Conference), (Monterey, CA), 2036–2048.
Elston, G. N. (2003). Cortex, cognition and the cell: new insights into the pyramidal neuron and prefrontal function. Cereb. Cortex 13, 1124–1138. doi: 10.1093/cercor/bhg093
Elston, G. N. (2007). “Specialization of the Neocortical pyramidal cell during primate evolution,” in Evolution of Nervous Systems Volume IV, eds J. H. Kaas and T. D. Preuss (Oxford: Academic Press), 191–242.
Elston, G., and Fujita, I. (2014). Pyramidal cell development: postnatal spinogenesis, dendritic growth, axon growth and electrophysiology. Front. Neuroanat. 8:78. doi: 10.3389/fnana.2014.00078
Feng, L., Zhao, T., and Kim, J. (2015). neuTube 1.0: a new design for efficient neuron reconstruction software based on the SWC format. eNeuro 2:ENEURO.0049-14.2014. doi: 10.1523/eneuro.0049-14.2014
Gillette, T., Brown, K. M., and Ascoli, G. A. (2011). The DIADEM metric: comparing multiple reconstructions of the same neuron. Neuroinformatics 9, 233–245. doi: 10.1007/s12021-011-9117-y
Grueber, W. B., Yang, C. H., Ye, B., and Jan, Y. N. (2005). The development of neuronal morphology in insects. Curr. Biol. 15, 730–738. doi: 10.1016/j.cub.2005.08.023
Halavi, M., Polavaram, S., Donohue, D. E., Hamilton, G., Hoyt, J., Smith, K. P., et al. (2008). NeuroMorpho.Org implementation of digital neuroscience: dense coverage and integration with the NIF. Neuroinformatics 6, 241–252. doi: 10.1007/s12021-008-9030-1
Ikeno, H., Kazawa, T., Namiki, S., Miyamoto, D., Sato, Y., Haupt, S. S., et al. (2012). Development of a scheme and tools to construct a standard moth brain for neural network simulations. Comput. Intell. Neurosci. 2012:795291. doi: 10.1155/2012/795291
Jacobs, B., and Scheibel, A. B. (2002). “Regional dendritic variation in primate cortical pyramidal cells,” in Cortical Areas: Unity and Diversity, eds A. Schüz and R. Miller (London: Taylor and Francis), 111–131.
Jan, Y.-N., and Jan, L.-Y. (2003). The control of dendrite development. Neuron 40, 229–242. doi: 10.1016/s0896-6273(03)00631-7
Jones, C., Liu, T., Cohan, N. W., Ellisman, M., and Tasdizen, T. (2015). Efficient semi-automatic 3D segmentation for neuron tracing in electron microscopy images. J. Neurosci. Methods 246, 13–21. doi: 10.1016/j.jneumeth.2015.03.005
Lang, S., Drouvelis, P., Tafaj, E., Bastian, P., and Sakmann, B. (2011). Fast extraction of neuron morphologies from large-scale SBFSEM image stacks. J. Comp. Neurosci. 31, 533–545. doi: 10.1007/s10827-011-0316-1
Li, R., Zeng, T., Peng, H., and Ji, S. (2017). Deep learning segmentation of optical microscopy images improves 3D neuron reconstruction. IEEE Trans. Med. Imaging 36, 1533–1541. doi: 10.1109/tmi.2017.2679713
Lu, J., Fiala, J. C., and Lichtman, J. W. (2009). Semi-automated reconstruction of neural processes from large numbers of fluorescence images. PLoS One 4:e5655. doi: 10.1371/journal.pone.0005655
Luebke, J. I. (2017). Pyramidal neurons are not generalizable building blocks of cortical networks. Front. Neuroanat. 11:11. doi: 10.3389/fnana.2017.00011
Luebke, J. I., Medalla, M., Amatrudo, J. M., Weaver, C. M., Crimins, J. L., Hunt, B., et al. (2015). Age-related changes to layer 3 pyramidal cells in the rhesus monkey visual cortex. Cereb. Cortex 25, 1454–1468. doi: 10.1093/cercor/bht336
Magliaro, C., Callara, A. L., Vanello, N., and Ahluwalia, A. (2017). A manual segmentation tool for three-dimensional neuron datasets. Front. Neuroinform. 11:36. doi: 10.3389/fninf.2017.00036
Minemoto, T., Saitoh, A., Ikeno, H., Isokawa, T., Kamiura, N., Matsui, N., et al. (2009). “SIGEN: system for reconstructing three-dimensional structure of insect neurons,” in Proceedings of Asia Simulation Conference, CD-ROM (Shiga, Japan), 1–6.
Peng, H., Hawrylycz, M., Roskams, J., Hill, S., Spruston, N., Meijering, E., et al. (2015). BigNeuron: large-scale 3D neuron reconstruction from optical microscopy images. Neuron 87, 252–256. doi: 10.1016/j.neuron.2015.06.036
Peng, H., Ruan, Z., Long, F., Simpson, J. H., and Myers, E. W. (2010). VAA3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets. Nat. Biotechnol. 28, 348–353. doi: 10.1038/nbt.1612
Preibisch, S., Saalfeld, S., and Tomancak, P. (2009). Globally optimal stitching of tiled 3D microscopic image acquisitions. Bioinformatics 25, 1463–1465. doi: 10.1093/bioinformatics/btp184
Schindelin, J., Arganda-Carreras, I., Frise, E., Kaynig, V., Longair, M., Pietzsch, T., et al. (2012). Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682. doi: 10.1038/nmeth.2019
Spruston, N. (2008). Pyramidal neurons: dendritic structure and synaptic integration. Nat. Rev. Neurosci. 9, 206–221. doi: 10.1038/nrn2286
Sümbül, U., Roossien, D. Jr., Chen, F., Barry, N., Boyden, E. S., Cai, D., et al. (2016). “Automated scalable segmentation of neurons from multispectral images,” in NIPS 2016 (Barcelona, Spain), 206–221.
Yamasaki, T., Isokawa, T., Matsui, N., Ikeno, H., and Kanzaki, R. (2006). Reconstruction and simulation for three-dimensional morphological structure of insect neurons. Neurocomputing 69, 1043–1047. doi: 10.1016/j.neucom.2005.12.042
Yasunaga, K., Kanamori, T., Morikawa, R., Suzuki, E., and Emoto, K. (2010). Dendrite reshaping of adult Drosophila sensory neurons requires matrix metalloproteinase-mediated modification of the basement membranes. Dev. Cell 18, 621–632. doi: 10.1016/j.devcel.2010.02.010
Keywords: complex neuron segmentation, reconstruction, image processing, insect brain, confocal laser scanning microscopic image
Citation: Ikeno H, Kumaraswamy A, Kai K, Wachtler T and Ai H (2018) A Segmentation Scheme for Complex Neuronal Arbors and Application to Vibration Sensitive Neurons in the Honeybee Brain. Front. Neuroinform. 12:61. doi: 10.3389/fninf.2018.00061
Received: 16 May 2018; Accepted: 29 August 2018;
Published: 26 September 2018.
Edited by:
Hanchuan Peng, Allen Institute for Brain Science, United States
Reviewed by:
Joachim H. R. Lübke, Forschungszentrum Jülich, Helmholtz-Gemeinschaft Deutscher Forschungszentren (HZ), Germany
Guy Elston, Centre for Cognitive Neuroscience, Australia
Copyright © 2018 Ikeno, Kumaraswamy, Kai, Wachtler and Ai. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Hidetoshi Ikeno, [email protected]