The aim of this retrospective case study was to validate the accuracy of protractor-based angle measurements in a virtual reality (VR) environment. This was a blinded study in which the surgeon's placement in VR was compared to the actual positioning used in surgery. Currently, optimal visualization of brain aneurysms and other intracranial vascular malformations for treatment and planning requires 3D rotational angiograms taken at the beginning of the procedure.
This exposes the patient to radiation and increases procedural times. Segmentation in virtual reality boosts surgeon confidence and increases understanding of complex surgical cases. The human body exists in three-dimensional space, so viewing it in 3D is far more indicative of pathological anatomy than viewing a 2D CT or MRI.
These findings impact research and medicine by reducing the planning time needed to determine the imaging equipment angle in the OR. Currently, patients must be scanned with a 360-degree spin so that the C-arm position can be determined. We believe we can reduce the time the patient is sedated, and their radiation exposure, by offering suggestions for C-arm positioning prior to surgery. Segmentation is the act of recreating 3D models of patient-specific anatomy from medical images.
It is the most critical step, yet it is resource-heavy in both software requirements and human expertise. As such, our primary research focus is automating segmentation, utilizing machine-learning techniques to lower the barrier to entry for deploying this technology at scale. To begin, obtain a 3D Digital Imaging and Communications in Medicine (DICOM) dataset and import it into the segmentation software.
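For illustration, a minimal sketch of this import step using the open-source pydicom and NumPy libraries; the segmentation software performs the equivalent internally, and the folder path here is hypothetical.

# Minimal sketch: stack a DICOM series into a 3D Hounsfield-unit volume.
# Assumes pydicom and numpy are installed; the folder path is hypothetical.
from pathlib import Path

import numpy as np
import pydicom

def load_dicom_volume(folder):
    """Read all DICOM slices in `folder` and return a volume in HU."""
    slices = [pydicom.dcmread(p) for p in Path(folder).glob("*.dcm")]
    # Order slices along the scan axis (z position of each slice).
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    # Convert raw pixel values to Hounsfield units.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept

hu_volume = load_dicom_volume("patient_scan/")  # hypothetical path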
To create a rough mask of the target anatomy, go to the Segment tab and select the New Mask tool. To set the upper and lower threshold boundaries, click and drag them to capture as much of the relevant target anatomy as possible. Alternatively, input the desired Hounsfield unit (HU) values and click OK to finalize the rough mask.
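The same thresholding operation, sketched in NumPy on the volume from the previous snippet; the 226-3071 HU window is a common default for bone and contrast-enhanced vasculature, not a value taken from this protocol.

# Minimal sketch of the rough-mask thresholding step.
import numpy as np

def threshold_mask(hu_volume, lower=226.0, upper=3071.0):
    """Return a boolean mask of voxels whose HU values fall in [lower, upper]."""
    return (hu_volume >= lower) & (hu_volume <= upper)

rough_mask = threshold_mask(hu_volume)  # hu_volume from the previous sketch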
In the Segment tab, select Region Grow to isolate all voxels of the mask directly connected to the selected voxel, and use Edit Mask to add or remove voxels in both the 2D and 3D windows. Using Multiple Slice Edit, add or remove voxels through interpolation between slices that are farther apart, and use Smart Fill followed by Fill Holes to close holes of a defined size within the mask. After a physician has verified the accuracy of the 3D models, convert the mask to 3D parts and export them as an STL file.
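A rough open-source equivalent of the region-growing, hole-filling, and STL-export steps, sketched with SciPy, scikit-image, and trimesh; the seed coordinates and output path are hypothetical, and the protocol performs these steps in the segmentation software's GUI.

# Minimal sketch: region grow from a seed voxel, fill holes, mesh, export STL.
import numpy as np
from scipy import ndimage
from skimage import measure
import trimesh

def region_grow(mask, seed):
    """Keep only the connected component of `mask` that contains `seed`.
    The seed must lie inside the mask, or the background is returned."""
    labels, _ = ndimage.label(mask)
    return labels == labels[seed]

def mask_to_stl(mask, path, spacing=(1.0, 1.0, 1.0)):
    """Fill internal holes, mesh the mask with marching cubes, write an STL."""
    filled = ndimage.binary_fill_holes(mask)
    verts, faces, _, _ = measure.marching_cubes(
        filled.astype(np.uint8), level=0.5, spacing=spacing
    )
    trimesh.Trimesh(vertices=verts, faces=faces).export(path)

grown = region_grow(rough_mask, seed=(120, 256, 256))  # hypothetical seed voxel
mask_to_stl(grown, "aneurysm.stl")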
Import all the anatomy files through File > Import > STL. Using the Move and Rotate tools, align the anatomy with the world origin. After aligning the patient's nose with the perpendicular axis running through the ears and the top of the skull, activate the orthographic views from the top-right corner widget of the Blender interface.
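The same import-and-align step can be scripted through Blender's Python API (bpy) from the Scripting workspace; the file path is hypothetical, and the protocol performs this interactively with the Move and Rotate tools.

# Minimal sketch: import the segmented STL and snap it to the world origin.
import bpy

bpy.ops.import_mesh.stl(filepath="aneurysm.stl")  # bpy.ops.wm.stl_import in Blender 4.x
anatomy = bpy.context.selected_objects[0]

# Place the model at the world origin; fine rotation (aligning the nose with
# the axis through the ears and skull top) is still done interactively.
anatomy.location = (0.0, 0.0, 0.0)
anatomy.rotation_euler = (0.0, 0.0, 0.0)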
To acquire C-arm angles in virtual reality, import the protractor.stl file. Scale the protractor small for ease of measurement, sizing it to sit just outside the anatomy for aneurysms.
Align zero degrees on the protractor with the patient's nose, orient the gap in the protractor arms toward the patient's feet, and export the scene as a .glb file (a scripted sketch of this export follows below). To begin the VR session, prepare the patient-specific segmented anatomy and the VR model. Import the finalized model as a glTF file in the Lesson Creation menu, and click Confirm in the pop-up stating that the file type is not fully supported.
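Returning to the Blender step, the .glb export above can be sketched through the same Python API; the output filename is hypothetical, and interactively this corresponds to File > Export > glTF 2.0.

# Minimal sketch: export the aligned scene as a single binary .glb file.
import bpy

bpy.ops.export_scene.gltf(
    filepath="case_with_protractor.glb",  # hypothetical output path
    export_format="GLB",                  # single-file binary glTF
)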
After pressing the joystick down, go to the Transparency menu and hide all models except the target aneurysm anatomy. Position the physician in VR and familiarize them with the anatomy lesson. Start the recording in the VR space and let the surgeon rotate the target anatomy to find the preferred angles for both anteroposterior and lateral fluoroscopy.
Then press the Stop button. Place an image mimicking the background of a fluoroscopy image behind the model in the virtual space. The surgeon will appear as a set of floating glasses and two controllers in VR. Place the camera in line with the surgeon's view, pointing roughly at the center of the target anatomy.
Then capture a 2D snapshot in the desired position and repeat for each preferred angle. To acquire the C-arm angles, pause the recorded lesson when the surgeon declares a preferred viewing angle. Click the track-back button to open the quick menu and select the protractor on/off checkbox.
Using the controller's Grab button, select and manipulate a pointer or straightedge so that it lies in line with the surgeon's viewpoint and passes through the origin of the protractor. Step back from the model and view the angles from the orthographic viewpoints corresponding to the C-arm motions. Acquire the anteroposterior angles from the sagittal and axial planes, followed by the lateral angles from the coronal and axial planes.
The axial plane corresponds to the right and left angles of the C-arm, while the sagittal and coronal planes correspond to the cranial and caudal angles. The surgeon-declared anteroposterior angles in VR were similar to the angle measurements taken during surgery and to the actual surgical fluoroscopy image, whereas discrepancies were observed in the lateral views for the same case.
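As a point of reference, the plane-to-angle mapping described above can be sketched in a few lines, under an assumed patient coordinate convention (x toward the patient's left, y posterior, z cranial); this convention, the function, and the example vector are illustrative assumptions, not values from the protocol.

# Minimal sketch: convert a viewing direction into C-arm angles. The
# axial-plane projection yields the right/left angle, and the
# sagittal-plane projection yields the cranial/caudal angle.
import numpy as np

def c_arm_angles(view_dir):
    """Return (right_left_deg, cranial_caudal_deg) for a viewing vector."""
    x, y, z = view_dir / np.linalg.norm(view_dir)
    right_left = np.degrees(np.arctan2(x, y))      # rotation in the axial plane
    cranial_caudal = np.degrees(np.arctan2(z, y))  # angulation in the sagittal plane
    return right_left, cranial_caudal

# Example: a view roughly 20 degrees toward the patient's left, 10 degrees cranial.
print(c_arm_angles(np.array([0.34, 0.93, 0.17])))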