This is a method for training a multi-slice U-Net for multi-class segmentation of cryo-electron tomograms using a portion of one tomogram as training input. We describe how to infer this network to other tomograms and how to extract segmentations for further analyses, such as subtomogram averaging and filament tracing.
Cryo-electron tomography (cryo-ET) allows researchers to image cells in their native, hydrated state at the highest resolution currently possible. The technique has several limitations, however, that make analyzing the data it generates time-intensive and difficult. Hand segmenting a single tomogram can take from hours to days, but a microscope can easily generate 50 or more tomograms a day. Current deep learning segmentation programs for cryo-ET do exist, but are limited to segmenting one structure at a time. Here, multi-slice U-Net convolutional neural networks are trained and applied to automatically segment multiple structures simultaneously within cryo-tomograms. With proper preprocessing, these networks can be robustly inferred to many tomograms without the need for training individual networks for each tomogram. This workflow dramatically improves the speed with which cryo-electron tomograms can be analyzed by cutting segmentation time down to under 30 min in most cases. Further, segmentations can be used to improve the accuracy of filament tracing within a cellular context and to rapidly extract coordinates for subtomogram averaging.
Hardware and software developments in the past decade have resulted in a "resolution revolution" for cryo-electron microscopy (cryo-EM)1,2. With better and faster detectors3, software to automate data collection4,5, and signal boosting advances such as phase plates6, collecting large amounts of high-resolution cryo-EM data is relatively straightforward.
Cryo-ET delivers unprecedented insight into cellular ultrastructure in a native, hydrated state7,8,9,10. The primary limitation is sample thickness, but with the adoption of methods such as focused ion beam (FIB) milling, where thick cellular and tissue samples are thinned for tomography11, the horizon for what can be imaged with cryo-ET is constantly expanding. The newest microscopes are capable of producing well over 50 tomograms a day, and this rate is only projected to increase due to the development of rapid data collection schemes12,13. Analyzing the vast amounts of data produced by cryo-ET remains a bottleneck for this imaging modality.
Quantitative analysis of tomographic information requires that it first be annotated. Traditionally, this requires hand segmentation by an expert, which is time-consuming; depending on the molecular complexity contained within the cryo-tomogram, it can take hours to days of dedicated attention. Artificial neural networks are an appealing solution to this problem since they can be trained to do the bulk of the segmentation work in a fraction of the time. Convolutional neural networks (CNNs) are especially suited to computer vision tasks14 and have recently been adapted for the analysis of cryo-electron tomograms15,16,17.
Traditional CNNs require many thousands of annotated training samples, which is not often possible for biological image analysis tasks. Hence, the U-Net architecture has excelled in this space18 because it relies on data augmentation to successfully train the network, minimizing the dependency on large training sets. For instance, a U-Net architecture can be trained with only a few slices of a single tomogram (four or five slices) and robustly inferred to other tomograms without retraining. This protocol provides a step-by-step guide for training U-Net neural network architectures to segment electron cryo-tomograms within Dragonfly 2022.119.
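To make the role of data augmentation concrete, the short Python/NumPy sketch below expands four hand-annotated 2D slices into several hundred training pairs using random rotations, flips, and intensity noise. This is a conceptual illustration only; Dragonfly performs its own augmentation internally, and the array sizes, class count, and augmentation parameters shown here are arbitrary placeholders.

```python
# Conceptual sketch of data augmentation (not Dragonfly's internal routine):
# a handful of annotated slices is expanded into many training pairs.
import numpy as np

def augment_slice(image, label, rng):
    """Return one randomly augmented copy of an annotated 2D slice."""
    k = int(rng.integers(0, 4))                       # random 90-degree rotation
    image, label = np.rot90(image, k), np.rot90(label, k)
    if rng.random() < 0.5:                            # random horizontal flip
        image, label = np.fliplr(image), np.fliplr(label)
    image = image + rng.normal(0, 0.05, image.shape)  # mild intensity noise (image only)
    return image, label

rng = np.random.default_rng(0)
# Four hand-segmented slices: grayscale image plus integer class labels (placeholders)
slices = [(np.random.rand(256, 256), np.random.randint(0, 5, (256, 256)))
          for _ in range(4)]
# Expand them into 400 augmented training pairs
training_set = [augment_slice(img, lab, rng) for img, lab in slices for _ in range(100)]
```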
Dragonfly is commercially developed software for deep learning-based 3D image segmentation and analysis, and it is freely available for academic use (some geographical restrictions apply). It has an advanced graphical interface that allows a non-expert to take full advantage of the power of deep learning for both semantic segmentation and image denoising. This protocol demonstrates how to preprocess and annotate cryo-electron tomograms within Dragonfly for training artificial neural networks, which can then be inferred to rapidly segment large datasets. It further discusses and briefly demonstrates how to use segmented data for downstream analyses such as filament tracing and coordinate extraction for sub-tomogram averaging.
NOTE: Dragonfly 2022.1 requires a high-performance workstation. System recommendations are included in the Table of Materials along with the hardware of the workstation used for this protocol. All tomograms used in this protocol are binned 4x, from a pixel size of 3.3 Å/pixel to 13.2 Å/pixel. Samples used in the representative results were obtained from a company (see the Table of Materials) that follows animal care guidelines that align with this institution's ethical standards. The tomogram used in this protocol and the multi-ROI that was generated as training input have been included as a bundled dataset in Supplemental File 1 (which can be found at https://datadryad.org/stash/dataset/doi:10.5061/dryad.rxwdbrvct) so that users can follow along with the same data if they wish. Dragonfly also hosts an open-access database called the Infinite Toolbox where users can share trained networks.
1. Setup
2. Image import
3. Preprocessing (Figure 1.1)
4. Create training data (Figure 1.2)
5. Using the segmentation wizard for iterative training (Figure 1.3)
6. Apply the network (Figure 1.4)
7. Segmentation manipulation and cleanup
8. Generating coordinates for sub-tomogram averaging from the ROI
9. Watershed transform
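As a rough illustration of the ideas behind steps 8 and 9, the Python sketch below splits a binary class mask with a watershed transform and reports object centroids as candidate particle coordinates for sub-tomogram averaging. This is not the Dragonfly procedure itself: scipy and scikit-image stand in for the software's built-in tools, and the seed spacing and file names are hypothetical.

```python
# Hedged sketch: watershed-based splitting of a binary mask (step 9) followed by
# centroid extraction for sub-tomogram averaging (step 8). scipy/scikit-image
# are stand-ins for Dragonfly's built-in tools.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def mask_to_coordinates(mask, min_distance=5):
    """Return an (N, 3) array of z, y, x centroids from a binary 3D mask."""
    distance = ndi.distance_transform_edt(mask)
    # Seed the watershed with local maxima of the distance map so that
    # touching objects are separated into individual labels.
    peaks = peak_local_max(distance, labels=mask, min_distance=min_distance)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=mask)
    # One centroid per separated object = one candidate particle position.
    return np.array(ndi.center_of_mass(mask, labels, range(1, labels.max() + 1)))

# Hypothetical usage with a binary mask exported from the segmentation:
# mask = mrcfile.read("class_mask.mrc") > 0
# coords = mask_to_coordinates(mask)
# np.savetxt("particles.txt", coords[:, ::-1], fmt="%.1f")  # x, y, z order
```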
Figure 1: Workflow. 1) Preprocess the training tomogram by calibrating the intensity scale and filtering the dataset. 2) Create the training data by hand-segmenting a small portion of a tomogram with all appropriate labels the user wishes to identify. 3) Using the filtered tomogram as the input and the hand segmentation as the training output, a five-layer, multi-slice U-Net is trained in the segmentation wizard. 4) The trained network can be applied to the full tomogram to annotate it and a 3D rendering can be generated from each segmented class.
Following the protocol, a five-slice U-Net was trained on a single tomogram (Figure 2A) to identify five classes: Membrane, Microtubules, Actin, Fiducial markers, and Background. The network was iteratively trained a total of three times, and then applied to the tomogram to fully segment and annotate it (Figure 2B,C). Minimal cleanup was performed using steps 7.1 and 7.2. The next three tomograms of interest (Figure 2D,G,J) were loaded into the software for preprocessing. Prior to image import, one of the tomograms (Figure 2J) required pixel size adjustment from 17.22 Å/px to 13.3 Å/px, as it was collected on a different microscope at a slightly different magnification. The IMOD program squeezevol was used for resizing with the following command:
'squeezevol -f 0.772 inputfile.mrc outputfile.mrc'
In this command, -f refers to the factor by which to alter the pixel size (in this case, 13.3/17.22 ≈ 0.772). After import, all three inference targets were preprocessed according to steps 3.2 and 3.3, and then the five-slice U-Net was applied. Minimal cleanup was again performed. The final segmentations are displayed in Figure 2.
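When many tomograms need rescaling, the same command can be scripted. The sketch below computes the factor from the two pixel sizes quoted above and calls squeezevol through Python's subprocess module; the file names are placeholders, and IMOD is assumed to be on the system PATH.

```python
# Minimal batch-rescaling sketch; file names are placeholders and IMOD's
# squeezevol is assumed to be on the PATH.
import subprocess

train_pixel, target_pixel = 13.3, 17.22      # Å/px of the training data and the new data
factor = train_pixel / target_pixel          # 13.3 / 17.22 ≈ 0.772

for name in ["tomo_01.mrc", "tomo_02.mrc"]:  # hypothetical inference targets
    out = name.replace(".mrc", "_rescaled.mrc")
    subprocess.run(["squeezevol", "-f", f"{factor:.3f}", name, out], check=True)
```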
Microtubule segmentations from each tomogram were exported as binary TIF files (step 7.4), converted to MRC with the IMOD tif2mrc program, and then used for cylinder correlation and filament tracing. Binary segmentations of filaments result in much more robust filament tracing than tracing directly over tomograms. Coordinate maps from filament tracing (Figure 3) will be used for further analysis, such as nearest-neighbor measurements (filament packing) and helical sub-tomogram averaging along single filaments to determine microtubule orientation.
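The export-and-convert step can likewise be batched. The following sketch converts every exported binary TIF in a folder to MRC with IMOD's tif2mrc; the directory layout and file-name pattern are hypothetical.

```python
# Batch conversion of exported binary TIF masks to MRC with IMOD's tif2mrc.
# The directory and file-name pattern are hypothetical.
import subprocess
from pathlib import Path

for tif in Path("exports").glob("*_microtubules.tif"):
    mrc = tif.with_suffix(".mrc")
    subprocess.run(["tif2mrc", str(tif), str(mrc)], check=True)
```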
Unsuccessful or inadequately trained networks are easy to identify. A failed network will be unable to segment any structures at all, whereas an inadequately trained network will typically segment some structures correctly but produce a significant number of false positives and false negatives. These networks can be corrected and iteratively trained to improve their performance. The segmentation wizard automatically calculates a model's Dice similarity coefficient (called score in the SegWiz) after it is trained. This statistic gives an estimate of the similarity between the training data and the U-Net segmentation. Dragonfly 2022.1 also has a built-in tool to evaluate a model's performance that can be accessed in the Artificial Intelligence tab at the top of the interface (see documentation for usage).
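For reference, the Dice similarity coefficient reported by the Segmentation Wizard can also be computed outside the software for any pair of label volumes. The NumPy sketch below evaluates it per foreground class; the variable names and the assumption that class 0 is background are illustrative.

```python
# Per-class Dice coefficient: Dice = 2|A ∩ B| / (|A| + |B|).
# Assumes integer label volumes with class 0 as background.
import numpy as np

def dice_per_class(truth, prediction, n_classes):
    """Return a dict mapping each foreground class to its Dice score."""
    scores = {}
    for c in range(1, n_classes):
        a, b = truth == c, prediction == c
        denom = a.sum() + b.sum()
        scores[c] = 2.0 * np.logical_and(a, b).sum() / denom if denom else float("nan")
    return scores
```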
Figure 2: Inference. (A-C) Original training tomogram of a DIV 5 hippocampal rat neuron, collected in 2019 on a Titan Krios. This is a backprojected reconstruction with CTF correction in IMOD. (A) The yellow box represents the region where hand segmentation was performed for training input. (B) 2D segmentation from the U-Net after training is complete. (C) 3D rendering of the segmented regions showing membrane (blue), microtubules (green), and actin (red). (D-F) DIV 5 hippocampal rat neuron from the same session as the training tomogram. (E) 2D segmentation from the U-Net with no additional training and quick cleanup. Membrane (blue), microtubules (green), actin (red), fiducials (pink). (F) 3D rendering of the segmented regions. (G-I) DIV 5 hippocampal rat neuron from the 2019 session. (H) 2D segmentation from the U-Net with quick cleanup and (I) 3D rendering. (J-L) DIV 5 hippocampal rat neuron, collected in 2021 on a different Titan Krios at a different magnification. Pixel size has been changed with the IMOD program squeezevol to match the training tomogram. (K) 2D segmentation from the U-Net with quick cleanup, demonstrating robust inference across datasets with proper preprocessing, and (L) 3D rendering of the segmentation. Scale bars = 100 nm. Abbreviations: DIV = days in vitro; CTF = contrast transfer function.
Figure 3: Filament tracing improvement. (A) Tomogram of a DIV 4 rat hippocampal neuron, collected on a Titan Krios. (B) Correlation map generated from cylinder correlation over actin filaments. (C) Filament tracing of actin using the intensities of the actin filaments in the correlation map to define parameters. Tracing captures the membrane and microtubules, as well as noise, while trying to trace just actin. (D) U-Net segmentation of the tomogram. Membrane highlighted in blue, microtubules in red, ribosomes in orange, TRiC in purple, and actin in green. (E) Actin segmentation extracted as a binary mask for filament tracing. (F) Correlation map generated from cylinder correlation with the same parameters as in (B). (G) Significantly improved filament tracing of just the actin filaments from the tomogram. Abbreviation: DIV = days in vitro.
Supplemental File 1: The tomogram used in this protocol and the multi-ROI that was generated as training input are included as a bundled dataset (Training.ORSObject). See https://datadryad.org/stash/dataset/doi:10.5061/dryad.rxwdbrvct.
This protocol lays out a procedure for using the Dragonfly 2022.1 software to train a multi-class U-Net from a single tomogram and for inferring that network to other tomograms, which need not be from the same dataset. Training is relatively quick (it can be as fast as 3-5 min per epoch or as slow as a few hours, depending entirely on the network being trained and the hardware used), and retraining a network to improve its learning is intuitive. As long as the preprocessing steps are carried out for every tomogram, inference is typically robust.
Consistent preprocessing is the most critical step for deep learning inference. There are many imaging filters in the software and the user can experiment to determine which filters work best for particular datasets; note that whatever filtering is used on the training tomogram must be applied in the same way to the inference tomograms. Care must also be taken to provide the network with accurate and sufficient training information. It is vital that all features segmented within the training slices are segmented as carefully and precisely as possible.
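The principle of identical preprocessing can be captured in a single routine that is reused verbatim for the training tomogram and every inference target. The sketch below is only a stand-in: the z-score normalization and Gaussian filter are placeholders for whichever intensity calibration and filters the user settles on within Dragonfly.

```python
# Stand-in preprocessing routine: the key point is that the *same* function,
# with the *same* parameters, is applied to training and inference tomograms.
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(volume, sigma=1.0):
    """Calibrate the intensity scale, then filter; reuse unchanged for every tomogram."""
    volume = (volume - volume.mean()) / volume.std()   # common intensity scale
    return gaussian_filter(volume, sigma=sigma)        # identical filtering

# filtered_train = preprocess(training_tomogram)
# filtered_new = preprocess(inference_tomogram)        # same parameters, no exceptions
```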
Image segmentation is facilitated by a sophisticated, commercial-grade user interface. It provides all the necessary tools for hand segmentation and allows voxels to be simply reassigned from one class to another prior to training and retraining. The user can hand-segment voxels within the full context of the tomogram, with multiple views and the ability to rotate the volume freely. Additionally, the software provides the ability to use multi-class networks, which tend to perform better16 and are faster than segmenting with multiple single-class networks.
There are, of course, limitations to a neural network's capabilities. Cryo-ET data are, by nature, very noisy and limited in angular sampling, which leads to orientation-specific distortions in identical objects21. Training relies on an expert to hand-segment structures accurately, and a successful network is only as good (or as bad) as the training data it is given. Image filtering to boost signal is helpful for the trainer, but there are still many cases where accurately identifying all pixels of a given structure is difficult. It is, therefore, important that great care is taken when creating the training segmentation so that the network has the best information possible to learn during training.
This workflow can be easily modified for each user's preference. While it is essential that all tomograms be preprocessed in exactly the same manner, it is not necessary to use the exact filters used in the protocol. The software has numerous image filtering options, and it is recommended to optimize these for the user's particular data before setting out on a large segmentation project spanning many tomograms. There are also quite a few network architectures available to use: a multi-slice U-Net has been found to work best for the data from this lab, but another user might find that another architecture (such as a 3D U-Net or a Sensor 3D) works better. The segmentation wizard provides a convenient interface for comparing the performance of multiple networks using the same training data.
Tools like the ones presented here will make hand segmentation of full tomograms a task of the past. With well-trained neural networks that are robustly inferable, it is entirely feasible to create a workflow where tomographic data is reconstructed, processed, and fully segmented as quickly as the microscope can collect it.
This study was supported by the Penn State College of Medicine and the Department of Biochemistry and Molecular Biology, as well as Tobacco Settlement Fund (TSF) grant 4100079742-EXT. The CryoEM and CryoET Core (RRID:SCR_021178) services and instruments used in this project were funded, in part, by the Pennsylvania State University College of Medicine via the Office of the Vice Dean of Research and Graduate Students and the Pennsylvania Department of Health using Tobacco Settlement Funds (CURE). The content is solely the responsibility of the authors and does not necessarily represent the official views of the University or College of Medicine. The Pennsylvania Department of Health specifically disclaims responsibility for any analyses, interpretations, or conclusions.
Name | Company | Catalog Number | Comments
Dragonfly 2022.1 | Object Research Systems | | https://www.theobjects.com/dragonfly/index.html
E18 Rat Dissociated Hippocampus | Transnetyx Tissue | KTSDEDHP | https://tissue.transnetyx.com/faqs
IMOD | University of Colorado | | https://bio3d.colorado.edu/imod/
Intel® Xeon® Gold 6124 CPU 3.2 GHz | Intel | | https://www.intel.com/content/www/us/en/products/sku/120493/intel-xeon-gold-6134-processor-24-75m-cache-3-20-ghz/specifications.html
NVIDIA Quadro P4000 | NVIDIA | | https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/productspage/quadro/quadro-desktop/quadro-pascal-p4000-data-sheet-a4-nvidia-704358-r2-web.pdf
Windows 10 Enterprise 2016 | Microsoft | | https://www.microsoft.com/en-us/evalcenter/evaluate-windows-10-enterprise
Workstation Minimum Requirements | Object Research Systems | | https://theobjects.com/dragonfly/system-requirements.html