7.2K Views
•
10:25 min
•
November 11th, 2022
0:05
Introduction
0:45
Image Import and Preprocessing
2:45
Creating Training Data
5:13
Using the Segmentation Wizard for Iterative Training
6:52
Apply the Network
7:36
Segmentation Manipulation and Cleanup
8:16
Results: Analysis of the Tomogram of DIV 5 Hippocampal Rat Neuron
9:40
Conclusion
Transcript
What's most exciting about our protocol is that it allows any lab to get up and running with these deep learning protocols for segmenting large amounts of cryo-ET data. The main advantage of our workflow is that Dragonfly provides a professional-grade computational environment for training multi-class neural networks for segmentation, which can dramatically speed up your segmentation process. Image segmentation takes time to learn, so make sure to test the tools and learn the key bindings to streamline your workflow over time.
After setting up the software, proceed with the image import by going to File, and selecting Import Image Files. Then click Add, navigate to the image file, click Open, and select Next, followed by Finish. Create a custom intensity scale by going to Utilities and selecting the Dimension Unit Manager.
Click plus on the lower left to create a new dimension unit. Choose a high and low intensity feature in all the tomograms of interest. Give the unit a name and an abbreviation and save the custom dimension unit.
To calibrate images to the custom intensity scale, right click the dataset in the Properties column on the right-hand side of the screen and select Calibrate Intensity Scale. Then, go to the main tab on the left side of the screen and scroll down to the probe section. Using the circular probe tool with an appropriate diameter, click a few places in the background region of the tomogram and record the average number in the raw intensity column.
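These recorded averages drive a two-point linear calibration: once a second reference (such as the fiducial markers) is recorded, every raw voxel value can be mapped onto the custom unit scale. A minimal sketch of that mapping — not Dragonfly's internal code, and the 0-to-1 target values here are an assumed convention:

```python
def calibrate_intensity(raw_values, raw_background, raw_fiducial,
                        unit_background=0.0, unit_fiducial=1.0):
    """Linearly map raw tomogram intensities onto a custom unit scale.

    raw_background / raw_fiducial are the average raw intensities
    recorded with the probe tool; unit_background / unit_fiducial are
    the values those features should take on the custom scale (0 and 1
    here are an assumed convention, not Dragonfly's internals).
    """
    scale = (unit_fiducial - unit_background) / (raw_fiducial - raw_background)
    return [unit_background + (v - raw_background) * scale for v in raw_values]

# Example: background averages 100, fiducials average 300.
calibrated = calibrate_intensity([100, 200, 300], 100.0, 300.0)
# background maps near 0, the midpoint near 0.5, fiducials near 1
```

Because the same two reference features are probed in every tomogram, this mapping is what makes intensities comparable across datasets for inference later on.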
Repeat for fiducial markers and click Calibrate. If necessary, adjust the contrast to make structures visible again with the area tool in the window leveling section of the main tab. On the left side of the main tab, scroll down to the image processing panel.
Click Advanced and wait for a new window to open. From the Properties panel, select the data set to be filtered and make it visible by clicking the eye icon to the left of the data set. Next, use the dropdown menu from the operations panel to select histogram equalization for the first operation.
Select Add Operation, click Gaussian, and change the kernel dimension to 3D. Add a third operation, select Unsharp, and keep the output for this final operation. Apply the equalization to all slices and let the filtering run, then close the image processing window to return to the main interface.
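Of the three filtering operations, histogram equalization is the one that most directly standardizes intensities across tomograms. A textbook version of the operation in pure Python, for illustration only — Dragonfly's configurable filter may differ in detail:

```python
def equalize_histogram(pixels, levels=256):
    """Classic histogram equalization for integer pixel values in [0, levels).

    Maps each grey level through the normalized cumulative histogram so
    the output intensities spread more evenly over the full range. This
    is the textbook form of the operation, not Dragonfly's implementation.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first nonzero CDF entry
    n = len(pixels)
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) if n > cdf_min else 0
           for c in cdf]
    return [lut[p] for p in pixels]
```

In the real workflow this runs slice by slice on the calibrated volume before the Gaussian and unsharp steps.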
To identify the training area, go to the data properties panel and hide the unfiltered dataset by clicking the eye icon to the left of it. Then, show the newly filtered dataset. Using the filtered dataset, identify a subregion of the tomogram that contains all the features of interest.
Now create a box around the region-of-interest by scrolling down to the Shapes category and selecting Create a box on the left side of the main tab. While on the four-view panel, use the different 2D planes to help guide or drag the edges of the box to enclose only the region-of-interest in all dimensions. In the data list, select the box region and change the border color for easier viewing by clicking the gray square next to the eye symbol.
To create a multi-ROI, select the Segmentation tab, click New on the left side, and check Create as multi-ROI. Ensure that the number of classes corresponds to the number of features of interest plus a background class. Name the multi-ROI Training Data and ensure that the geometry corresponds to the dataset before clicking OK. Then, scroll through the data until within the bounds of the boxed region.
Select the Multi-ROI in the Properties menu on the right. Double click the first blank class name in the multi-ROI to name it. To paint with the 2D brush, scroll down to 2D tools in the Segmentation tab on the left and select a circular brush.
Then select Adaptive Gaussian or Local Otsu from the dropdown menu, paint by holding left-control plus click, and erase by holding left-shift plus click. Once all structures have been labeled, right click the background class in the multi-ROI and select Add All Unlabeled Voxels to Class. Next, create a new single-class ROI named Mask.
Ensure the geometry is set to the filtered dataset, then click Apply. In the Properties tab on the right, right click the box, select Add to ROI, and add it to the Mask ROI. To trim the training data using the mask, go to the Properties tab and select both the Training Data multi-ROI and the Mask ROI by holding control and clicking on each.
Next, click Intersect beneath the data properties list in the Boolean Operations section. Name the new dataset Trimmed Training Input and ensure the geometry corresponds to the filtered dataset before clicking OK. Import the training data into the Segmentation Wizard by right clicking the filtered dataset in the Properties tab and selecting the Segmentation Wizard option. Once a new window opens, look for the Input tab on the right side.
Click Import Frames From a Multi-ROI and select the Trimmed Training Input. To generate a new neural network model, click the plus button on the right side in the Models tab. Select U-Net from the list, select 2.5D with five slices for the input dimension, and then click Generate.
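In a 2.5D configuration with five slices, each frame is presented to the U-Net together with its neighboring slices as extra input channels, so the network sees local context along z while still predicting a 2D segmentation. A sketch of how such an input window can be assembled — the edge clamping here is an assumption, not necessarily Dragonfly's boundary handling:

```python
def five_slice_window(volume, z):
    """Return the 5-slice neighborhood around slice z for a 2.5D network.

    `volume` is a list of 2D slices (z-ordered). Indices past either end
    are clamped to the nearest existing slice; clamping is an assumption
    here, not necessarily how Dragonfly pads the volume boundary.
    """
    last = len(volume) - 1
    return [volume[min(max(z + dz, 0), last)] for dz in (-2, -1, 0, 1, 2)]

# Example with string stand-ins for 2D slices:
vol = ["s0", "s1", "s2", "s3"]
# five_slice_window(vol, 0) -> ["s0", "s0", "s0", "s1", "s2"]
```

Stacking the five slices as channels is what lets a 2D architecture exploit 3D context without the memory cost of a full 3D U-Net.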
To train the network, click Train on the bottom right of the Segmentation Wizard window. Once the U-Net training is complete, use the trained network to segment new frames: create a new frame and click Predict. Then click the up arrow in the upper right of the predicted frame to transfer the segmentation to the real frame.
To correct the prediction, control-click two classes to change the segmented pixels of one to the other. With both classes selected, the brush paints only pixels already belonging to either class. Correct the segmentation in at least five new frames.
For iterative training, click the Train button again and allow the network to train for another 30 to 40 epochs. At this point, stop the training and start another round of training. When satisfied with the network's performance, publish the network by exiting the Segmentation Wizard.
A dialog box pops up asking which models to publish. Select the successful network, name it, then publish it to make the network available for use outside the Segmentation Wizard. To apply the network to the training tomogram, first select the filtered dataset in the Properties panel.
In the Segmentation panel on the left, scroll down to the Segment with AI section. Make sure that the correct dataset is selected. Choose the recently published model in the dropdown menu, then click Segment, followed by All Slices.
To apply the network to an inference dataset, import the new tomogram and pre-process it as demonstrated earlier. In the Segmentation panel, go to the Segment with AI section. Ensure that the newly filtered tomogram is the selected dataset, choose the previously trained model, click Segment, and select All Slices.
Quickly clean up noise by first choosing one of the classes that contains both segmented noise and the feature of interest. Then, right click, select Process Islands, choose Remove by Voxel Count, and select a voxel size. Start with small counts and gradually increase the count to remove most of the noise.
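Remove by Voxel Count deletes connected components ("islands") of a class that fall below the chosen size. A minimal 2D illustration of the idea, using 4-connectivity on a binary mask — Dragonfly operates on the full 3D volume:

```python
from collections import deque

def remove_small_islands(mask, min_count):
    """Zero out 4-connected components of a binary 2D mask that have
    fewer than min_count pixels. A 2D stand-in for 3D island removal."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [row[:] for row in mask]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one component, collecting its pixels.
                comp, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) < min_count:  # too small: treat as noise
                    for cy, cx in comp:
                        out[cy][cx] = 0
    return out
```

Raising `min_count` gradually, as the transcript suggests, removes progressively larger specks while leaving the feature of interest intact.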
For segmentation correction, control-click two classes to paint only pixels belonging to those classes. Control-click plus drag with the segmentation tools to change pixels of the second class to the first, and shift-click plus drag to accomplish the opposite. A five-slice U-Net was trained on a single tomogram, and hand-segmentation was performed for the training input.
2D segmentation from the U-Net after complete training is shown. 3D rendering of the segmented regions shows membrane, microtubules, and actin filaments. DIV 5 hippocampal rat neurons from the same session as the training tomogram are shown.
2D segmentation from the U-Net with no additional training and quick cleanup displayed membrane, microtubules, actin, and fiducials. 3D rendering of the segmented regions was also visualized. A DIV 5 hippocampal rat neuron from an earlier session was also observed.
2D segmentation from the U-Net with quick cleanup and 3D rendering was also performed. A DIV 5 hippocampal rat neuron was also observed on a different Titan Krios at a different magnification. The pixel size was changed with the IMOD program squeezevol to match the training tomogram.
2D segmentation from the U-Net with quick cleanup demonstrates robust inference across datasets with proper pre-processing, and 3D rendering of the segmentation was performed. Pre-processing the data identically and calibrating the histograms are the most important steps for proper network inference. Once the location of molecules is known within the tomogram, it's trivial within Dragonfly to calculate quantitative measurements or to extract coordinates and feed those into a subtomogram averaging pipeline.
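For example, picking coordinates for subtomogram averaging can be taken as the centroids of the segmented particles. A minimal sketch, assuming each particle's voxel coordinates have already been grouped (e.g., by the same island labeling used for cleanup):

```python
def particle_centroids(particles):
    """Average the (z, y, x) voxel coordinates of each segmented particle
    to get picking coordinates for a subtomogram-averaging pipeline.

    `particles` is a list of voxel-coordinate lists, one per particle —
    for instance, the connected components of one class of the
    segmentation. This is an illustration, not Dragonfly's exporter.
    """
    centroids = []
    for voxels in particles:
        n = len(voxels)
        z = sum(v[0] for v in voxels) / n
        y = sum(v[1] for v in voxels) / n
        x = sum(v[2] for v in voxels) / n
        centroids.append((z, y, x))
    return centroids

# Example: a 2-voxel "particle" centered between its voxels.
# particle_centroids([[(0, 0, 0), (0, 2, 2)]]) -> [(0.0, 1.0, 1.0)]
```

Centroid lists like this, scaled by the pixel size, are the kind of coordinate table an averaging package expects as particle picks.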
We're only starting to see the effects of deep learning on cryo-ET image analysis, but this is just one more area where it shows really immense promise.
This is a method for training a multi-slice U-Net for multi-class segmentation of cryo-electron tomograms, using a portion of one tomogram as training input. We describe how to apply this network for inference on other tomograms and how to extract segmentations for further analyses, such as subtomogram averaging and filament tracing.
Copyright © 2024 MyJoVE Corporation. All rights reserved