Summary

The present protocol describes an efficient multi-organ segmentation method called Swin-PSAxialNet, which achieves higher accuracy than previous segmentation methods. The key steps of the procedure are dataset collection, environment configuration, data preprocessing, model training and comparison, and ablation experiments.

Abstract

Abdominal multi-organ segmentation is one of the most important topics in medical image analysis, playing a key role in supporting clinical workflows such as disease diagnosis and treatment planning. In this study, an efficient multi-organ segmentation method called Swin-PSAxialNet, based on the nnU-Net architecture, is proposed. It was designed specifically for the precise segmentation of 11 abdominal organs in CT images. The proposed network makes the following improvements over nnU-Net. Firstly, space-to-depth (SPD) modules and parameter-shared axial attention (PSAA) feature extraction blocks were introduced, enhancing 3D image feature extraction. Secondly, a multi-scale image fusion approach was employed to capture detailed information and spatial features, improving the extraction of subtle and edge features. Lastly, a parameter-sharing method was introduced to reduce the model's computational cost and shorten training time. The proposed network achieves an average Dice coefficient of 0.93342 on the segmentation task involving 11 organs. Experimental results indicate the notable superiority of Swin-PSAxialNet over previous mainstream segmentation methods. The method achieves excellent accuracy at low computational cost in segmenting major abdominal organs.
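To make the space-to-depth idea concrete, the following sketch, a generic NumPy re-implementation rather than the authors' code, rearranges each 2 x 2 x 2 spatial block of a 3D feature volume into the channel dimension, so spatial resolution is halved without discarding voxel information; the (C, D, H, W) layout and the block size of 2 are assumptions.

    import numpy as np

    def space_to_depth_3d(x, block=2):
        """Fold each block^3 spatial patch of a (C, D, H, W) volume into channels.

        Returns an array of shape (C * block**3, D//block, H//block, W//block).
        """
        c, d, h, w = x.shape
        assert d % block == 0 and h % block == 0 and w % block == 0
        # Split each spatial axis into (coarse, within-block) pairs.
        x = x.reshape(c, d // block, block, h // block, block, w // block, block)
        # Move the three within-block axes next to the channel axis.
        x = x.transpose(0, 2, 4, 6, 1, 3, 5)
        return x.reshape(c * block**3, d // block, h // block, w // block)

    # A single-channel 8^3 volume becomes 8 channels at 4^3 resolution.
    vol = np.random.rand(1, 8, 8, 8)
    print(space_to_depth_3d(vol).shape)  # (8, 4, 4, 4)

In a network, such a rearrangement is typically followed by a convolution that mixes the folded channels, trading spatial resolution for channel depth rather than discarding information as strided convolution or pooling would.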

Introduction

Contemporary clinical intervention, including the diagnosis of diseases, the formulation of treatment plans, and the tracking of treatment outcomes, relies on the accurate segmentation of medical images1. However, the complex structural relationships among abdominal organs2 make accurate segmentation of multiple abdominal organs a challenging task3. Over the past few decades, flourishing developments in medical imaging and computer vision have presented both new opportunities and challenges in the field of abdominal multi-organ segmentation. Advanced Magnetic Resonance Imaging (MR....

Protocol

The present protocol was approved by the Ethics Committee of Nantong University. It involves the intelligent assessment and research of acquired non-invasive or minimally invasive multimodal data, including human medical images, limb movements, and vascular imaging, utilizing artificial intelligence technology. Figure 3 depicts the overall flowchart of multi-organ segmentation. All the necessary weblinks are provided in the Table of Materials.

Representative Results

This protocol employs two metrics to evaluate the model: the Dice Similarity Coefficient (DSC) and the 95% Hausdorff Distance (HD95). DSC measures the volumetric overlap between the predicted segmentation and the ground truth, while HD95 measures the distance between the predicted and ground-truth segmentation boundaries, discarding the largest 5% of boundary distances as outliers. The definition of DSC26 is as follows:

DSC = 2|P ∩ G| / (|P| + |G|)

where P denotes the set of predicted segmentation voxels and G the set of ground-truth voxels.
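As an illustration, a minimal NumPy/SciPy sketch of both metrics on boolean masks follows. This is a generic re-implementation, not the evaluation code used in the study; it reports HD95 in voxel units and ignores anisotropic voxel spacing.

    import numpy as np
    from scipy import ndimage

    def dice(pred, gt):
        """DSC = 2|P ∩ G| / (|P| + |G|) for non-empty boolean masks."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

    def hd95(pred, gt):
        """95th-percentile symmetric surface distance, in voxels."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        # Surface voxels: the mask minus its one-voxel erosion.
        surf_p = pred ^ ndimage.binary_erosion(pred)
        surf_g = gt ^ ndimage.binary_erosion(gt)
        # Distance from every voxel to the nearest surface voxel of the other mask.
        dist_to_g = ndimage.distance_transform_edt(~surf_g)
        dist_to_p = ndimage.distance_transform_edt(~surf_p)
        d = np.concatenate([dist_to_g[surf_p], dist_to_p[surf_g]])
        return np.percentile(d, 95)

    # Example: two offset 8^3 cubes inside a 16^3 volume.
    p = np.zeros((16, 16, 16), dtype=bool); p[4:12, 4:12, 4:12] = True
    g = np.zeros((16, 16, 16), dtype=bool); g[5:13, 5:13, 5:13] = True
    print(f"DSC = {dice(p, g):.4f}, HD95 = {hd95(p, g):.2f} voxels")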

Discussion

The segmentation of abdominal organs is a complicated task. Compared to other internal structures of the human body, such as the brain or heart, abdominal organs are more challenging to segment because of their low contrast and large shape variation in CT images27,28. Swin-PSAxialNet is proposed here to address this difficult problem.

In the data collection step, this study downloaded 200 images from the AMOS2022 official website

Acknowledgements

This study was supported by the '333' Engineering Project of Jiangsu Province ([2022]21-003), the Wuxi Health Commission General Program (M202205), and the Wuxi Science and Technology Development Fund (Y20212002-1), whose contributions have been invaluable to the success of this work. The authors thank all the research assistants and study participants for their support.


Materials

Name | Company | Catalog Number | Comments
AMOS2022 dataset | None | None | Dataset for network training and testing. Weblink: https://pan.baidu.com/s/1x2ZW5FiZtVap0er55Wk4VQ?pwd=xhpb
ASUS mainframe | ASUS | None | https://www.asusparts.eu/en/asus-13020-01910200
CUDA version 11.7 | NVIDIA | None | https://developer.nvidia.com/cuda-11-7-0-download-archive
NVIDIA GeForce RTX 3090 | NVIDIA | None | https://www.nvidia.com/en-in/geforce/graphics-cards/30-series/rtx-3090-3090ti/
PaddlePaddle environment | Baidu | None | Environment preparation for network training. Weblink: https://www.paddlepaddle.org.cn/
PaddleSeg | Baidu | None | The baseline used: https://github.com/PaddlePaddle/PaddleSeg
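Before training, the PaddlePaddle/CUDA installation listed above can be sanity-checked with a short snippet (a generic check, not part of the published protocol):

    import paddle

    # Runs PaddlePaddle's built-in installation self-test.
    paddle.utils.run_check()
    print("Compiled with CUDA:", paddle.device.is_compiled_with_cuda())
    print("Current device:", paddle.device.get_device())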

References

  1. Liu, X., Song, L., Liu, S., Zhang, Y. A review of deep-learning-based medical image segmentation methods. Sustainability. 13 (3), 1224 (2021).
  2. Popilock, R., Sandrasagaren, K., Harris, L., Kaser, K. A. CT artifact recognition for the nuclear....
