This protocol fuses CT and MRI images of human anatomical structures, leveraging the respective advantages of both types of imaging. This is a significant innovation in the field of medical imaging. In the fused model, doctors can view both the bone structure from CT and the soft tissue structures from MRI.
Additionally, the 3D model can be used for precise 3D navigation of surgical robots. This technology is applicable to almost all scenarios that require multimodal fusion, such as ultrasonic image fusion. The 3D fusion model is also of great significance for preoperative planning and postoperative evaluation.
When using this technology, you will have insight from multimodal imaging simultaneously. Perspectives from different dimensions will unfold synchronously, and the diagnostic and therapeutic process will evolve accordingly. To begin, set up the data sources from the CT machine station.
Open the syngo CT 2012B software to receive data from the scanning protocol SpineRoutine_1. Use a slice thickness of one millimeter with a matrix size of 512 pixels by 512 pixels, in which the pixel spacing is 0.3320 millimeters. The actual size of the 3D volume achieved is 512 by 512 by 204 voxels.
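As a quick arithmetic check on these acquisition parameters, the physical extent of the reconstructed volume follows directly from the matrix size, pixel spacing, and slice thickness (a small worked sketch, not part of the protocol's software):

```python
# Geometry stated in the protocol: 512 x 512 matrix, 0.3320 mm pixel
# spacing, 1 mm slice thickness, 204 slices.
rows, cols, n_slices = 512, 512, 204
pixel_spacing_mm = 0.3320
slice_thickness_mm = 1.0

in_plane_extent_mm = rows * pixel_spacing_mm        # 169.984 mm per side
axial_extent_mm = n_slices * slice_thickness_mm     # 204.0 mm of coverage
print(in_plane_extent_mm, axial_extent_mm)
```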
Call the Dicom2Mat subprocess in the MATLAB workspace to obtain the 3D volume from the DICOM files stored in the HRCT data folder. View each slice within the 3D volume through the graphical user interface, or GUI. Then visualize the intensity distribution of the vertebrae HRCT data with the hist function.
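The idea behind a Dicom2Mat-style step can be sketched as stacking per-slice 2D arrays into one 3D volume, then summarizing the intensity distribution with a histogram. The slices below are synthetic placeholders standing in for data read from the DICOM files; the function name is illustrative, not the protocol's actual code:

```python
import numpy as np

# Stack individual 2D slices into a single 3D volume (rows, cols, n_slices),
# analogous to what the Dicom2Mat subprocess produces from the DICOM folder.
def slices_to_volume(slices):
    return np.stack(slices, axis=-1)

# Synthetic 512x512 slices standing in for the 204 HRCT slices.
slices = [np.full((512, 512), k, dtype=np.int16) for k in range(204)]
volume = slices_to_volume(slices)
print(volume.shape)  # (512, 512, 204)

# A numeric counterpart of the intensity-distribution visualization:
# a histogram over all voxel intensities in the volume.
counts, edges = np.histogram(volume, bins=16)
```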
Call the noise clean subprocess to remove signal noise produced by the device under the HRCT data file path. Then use the vertebrae function subprocess under the same path to obtain the vertebrae model, which is also a 3D volume, but containing only the bone structure. Use the high-pass filter parameters with the intensity range from 190 to 1,656.
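The bone-extraction step amounts to intensity thresholding: voxels inside the stated vertebrae range are kept and everything else is zeroed. This is a hedged sketch of that idea; the function name and the synthetic volume are illustrative assumptions, not the protocol's code:

```python
import numpy as np

# Keep only voxels whose intensity falls in [lo, hi] (190 to 1,656 for
# vertebrae per the protocol), zeroing all other voxels.
def extract_by_intensity(volume, lo, hi):
    mask = (volume >= lo) & (volume <= hi)
    return np.where(mask, volume, 0)

# Synthetic stand-in for the cleaned HRCT volume.
rng = np.random.default_rng(0)
hrct = rng.integers(-1000, 2001, size=(16, 16, 8))
vertebrae = extract_by_intensity(hrct, 190, 1656)
```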
Use the Dicom2Mat subprocess on both the Dixon-In and Dixon-W sequences to obtain their 3D volumes. Visualize each individual slice that constitutes a 3D volume; this visualization becomes available once the Dicom2Mat subprocess has completed. Use the spinal nerve function to reconstruct the spinal nerve model with high-pass filter parameters and the intensity range from 180 to 643.
Because the nerve signals in the Dixon-W sequence are very high, filter out points with low intensity to extract the spinal nerve 3D volume. Once the spinal nerve subprocess is finished, check the generated model in the GUI. Copy the three 3D volumes to the file path of the project.
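The nerve extraction can be sketched the same way as the bone extraction, but on the water-weighted volume and with the nerve range of 180 to 643: a mask selects the bright nerve voxels and low-intensity points are discarded. Names and the synthetic volume below are assumptions for illustration:

```python
import numpy as np

# Boolean mask of candidate nerve voxels in the Dixon-W (water) volume:
# nerve tissue is bright there, so low-intensity voxels are excluded.
def nerve_mask(dixon_w, lo=180, hi=643):
    return (dixon_w >= lo) & (dixon_w <= hi)

# Synthetic stand-in for the Dixon-W 3D volume.
rng = np.random.default_rng(1)
dixon_w = rng.integers(0, 1001, size=(16, 16, 8))
nerve = np.where(nerve_mask(dixon_w), dixon_w, 0)
```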
The models from HRCT and Dixon-In include the same vertebrae structure, and the models from Dixon-In and Dixon-W share the same coordinates. Pass the three model file names to the vertebrae fusion subprocess as input to generate the fusion model.
If slight errors are observed in the fusion from a clinical perspective, use the vertebrae fusion function to fine-tune the fusion coordinates by adding coordinate parameters in all directions to the same function. This process involves parameter adjustments across the six dimensions of the coordinate directions.
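Since the models share one coordinate system, the fusion and fine-tuning idea can be sketched as a voxelwise combination with a small alignment offset. Only the three translational dimensions are shown here; the protocol's six-dimensional adjustment also covers rotations, which are omitted. The function name and the max-combination rule are illustrative assumptions:

```python
import numpy as np

# Combine the bone and nerve volumes voxelwise, applying a small integer
# translational offset to the nerve volume to correct slight misalignment.
def fuse(bone, nerve, offset=(0, 0, 0)):
    shifted = np.roll(nerve, shift=offset, axis=(0, 1, 2))
    return np.maximum(bone, shifted)  # keep the brighter structure per voxel

# Tiny synthetic volumes: a bone block and a single bright nerve voxel.
bone = np.zeros((8, 8, 8), dtype=np.int16)
bone[2:6, 2:6, 2:6] = 1000
nerve = np.zeros_like(bone)
nerve[3, 3, 3] = 1500
fused = fuse(bone, nerve, offset=(1, 0, 0))
```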
Make a separate folder in the project directory for the output of the fusion model. Export the fusion model as DICOM format sequences under the file path of the fusion directory, using the mat2dicom algorithm with the fusion model as input. Open the DICOM file sequence exported previously in Materialise Mimics version 20 to perform the export operation. Navigate to the export menu under the file tab and select the VRML format.
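The export step is essentially the inverse of the earlier volume-building step: the fused 3D volume is split back into one 2D array per slice, each of which would then be serialized as a DICOM file. This sketch shows only the slicing; the DICOM serialization itself is omitted, and the names are assumptions rather than the protocol's actual mat2dicom code:

```python
import numpy as np

# Split a 3D volume into per-slice 2D arrays, one per DICOM file to write.
def volume_to_slices(volume):
    return [volume[:, :, k] for k in range(volume.shape[2])]

# Synthetic stand-in for the fused model volume.
fused = np.zeros((512, 512, 204), dtype=np.int16)
dicom_slices = volume_to_slices(fused)
print(len(dicom_slices))  # 204
```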
The file path for the export can be freely customized according to the user's requirements. As transparent colorful 3D printing is a professional service, compress and pack the VRML files and send them to the service provider. The multimodal fusion model of CT and MRI is used for preoperative planning and training in Selective Dorsal Rhizotomy or SDR.
The GUI of slices in the volume from the HRCT data is shown in this figure. Through this GUI, surgeons can view the spine structure contained in all the CT data. The graphical image shown here represents the intensity distribution of vertebrae HRCT data.
This quantitative information helps to determine the filtering range of the vertebrae structure. The 3D printing model for Selective Dorsal Rhizotomy or SDR planning and training is shown in this image. Different colored dyes are used to distinguish the structures, such as bones and nerves.
The spinal nerve structure is dyed yellow, and the laminae of the L4 and L5 segments in the corresponding operation area are distinguished by red and blue staining. The bone structure is printed using a transparent resin material, which allows doctors to observe the nerve structure under the lamina through the bone. Convenient, intuitive multimodal fusion technology is bound to bring various new applications, as doctors can obtain information from different dimensions in one model.
Medical imaging-based diagnosis and treatment and surgical navigation are the main battlegrounds for multimodal fusion technology in the field of medical imaging.