Summary

Here, we describe a protocol for constructing a heart model from scratch based on computed tomography data and for presenting it to medical students, using three-dimensional (3D) printing and mixed reality technology, to support anatomy learning.

Abstract

Mixed reality technology and three-dimensional (3D) printing are becoming increasingly common in the field of medicine. During the COVID-19 pandemic and immediately after the restrictions had been eased, many innovations were implemented in the teaching of future doctors. There was also interest in immersive techniques and 3D printing technology in anatomy teaching; however, these are not yet common implementations. In 2023, 3D prints and holograms in mixed reality technology were prepared for classes focused on the structure of the heart. They were used to teach students, who, with the support of engineers, could learn about the detailed structure of the heart and familiarize themselves with new technologies that support the traditional model of learning on human cadavers. Students found this possibility to be highly valuable. The article presents the process of preparing materials for classes and further implementation possibilities. The authors see an opportunity for the development of the presented technologies in student teaching at various levels of education and a justification for their increasingly widespread implementation.

Introduction

Three-dimensional (3D) printing technology and mixed reality are increasingly common technological achievements used in medicine. More applications are being found not only in the daily clinical practice of many specialists from various fields but also in the teaching of residents and future doctors, i.e., medical students1,2,3,4,5,6.

3D printing technology is often used to print anatomical models, offered mainly by commercial entities, but the growing interest of students in this type of preparation for learning is an impulse for introducing innovations in the departments of anatomy at medical universities7. Preparations can be created based on data from anatomical atlases, drawings, and engravings, but also based on imaging studies such as computed tomography or magnetic resonance imaging1,8,9. It is possible to print anatomical preparations on a 3D printer at various scales, and colors, markers, and other variations can be used to increase the accessibility of the teaching material10,11. Despite the increased availability of materials, medical students in Poland do not have wide access to this type of preparation; the declared willingness to supplement the current, classic teaching model based on human cadaver preparations with new technologies has not yet been fully implemented.

Mixed reality technology is the integration of the virtual world with the real world. Thanks to goggles that enable the visualization of previously prepared holograms, these can be "superimposed" on surrounding objects in the real world12. Holograms can be manipulated in space, e.g., enlarged, reduced, or rotated, making the viewed image better visualized, more accessible, and more useful. Mixed reality is increasingly used by operators in surgical disciplines, e.g., cardiac surgery3,13, orthopedics14,15,16,17, and oncology18. Increasingly, especially in the period after the COVID-19 pandemic, didacticians in the field of basic medical sciences have become interested in new technologies, including mixed reality, in order to implement them into the education of future doctors19,20,21. Academic teachers teaching normal anatomy are also finding room for introducing mixed reality in their field22,23,24,25,26. Creating holograms requires an imaging study, most often computed tomography, which is rendered and processed by engineers using dedicated software into a holographic version that can be viewed with goggles.

We decided to create useful materials for students to learn the anatomy of the human heart as part of anatomy classes in the first year of medical studies. For this purpose, an angio-CT scan of the heart was used, made available by the Department of Cardiology after prior complete anonymization of the data. Working in two teams, we created holograms and 3D prints, which were then made available to students as part of a pilot lesson. Students rated the accessibility and accuracy of the materials very highly; a detailed study on this topic will be presented later, as the results are currently being evaluated.

Here, we show the entire process, from creating the models from computed tomography data to presenting the ready-made models implemented in teaching practice.

Protocol

The protocol follows the guidelines of the Human Research Ethics Committee of the Medical University of Silesia. The patient's imaging data were used after complete anonymization.

1. 3D Printing - Segmentation and reconstruction of the 3D heart model

  1. Image upload and preprocessing
    1. Open 3D Slicer 5.6.0 and navigate to the Data module27.
    2. Click Add Data and select the patient-specific CT images in DICOM format. Ensure that the images are uploaded in the correct orientation.
    3. Assess the quality of the images by inspecting axial, sagittal, and coronal views in the Slice Viewer. Verify sufficient contrast to distinguish between the myocardium and the heart chambers.
    4. If contrast is insufficient, adjust the Window/Level settings to enhance tissue differentiation using the Volumes Module. Set the Window to 350 HU and the Level to 40 HU as a starting point, and modify if necessary.
    5. Confirm the visibility of anatomical regions of interest (ROI), including the myocardium and internal heart chambers.
  2. Threshold-based segmentation
    1. Navigate to the Segment Editor module and click Add to create a new segmentation.
    2. Select Threshold from the segmentation tools. Set the Lower Threshold to 100 HU and the Upper Threshold to 300 HU to isolate soft tissues.
      NOTE: These values may vary depending on image quality and patient-specific characteristics.
    3. Adjust the threshold range manually to refine the ROI by dragging the sliders or inputting values until the myocardium and heart chambers are clearly isolated. Use visual inspection in the axial, sagittal, and coronal views to ensure proper selection.
    4. Confirm that all relevant anatomical areas are captured. If needed, switch to the Paint tool to manually add or remove areas of the segmentation that were not properly captured by thresholding.
    5. Click Apply to finalize the segmentation for the threshold-based selection (Figure 1). A scripted equivalent of the loading and thresholding steps is sketched at the end of this section.
  3. Manual slice-by-slice correction
    1. Using the Scissors or Erase tools in the Segment Editor, manually inspect each slice of the CT dataset. Correct any inaccuracies, such as those caused by artifacts or poor contrast, by removing or adding segmented regions as necessary.
    2. For each slice, focus on precisely identifying the myocardium and internal heart chambers. If ambiguities arise, consult a medical professional or anatomical reference to ensure accuracy.
    3. Separate the heart into two distinct segments: one for the myocardium and one for the internal chambers. Use the Create New Segment button to differentiate these structures.
    4. Continue the slice-by-slice inspection and correction until all slices across the axial, sagittal, and coronal planes are corrected and segmented.
  4. Post-processing and model export
    1. Import the exported STL files into MeshMixer (Figure 2, referred to as prototype design software).
      1. Begin by eliminating small artifacts and ensuring the model's uniformity by selecting Edit > Make Solid.
      2. In the pop-up window, choose Solid Type as Accuracy to retain the precise details of the segmentation. Adjust the Solid Accuracy slider to a value between 0.8 and 1.0 for optimal fidelity.
    2. After solidifying the model, proceed to manual artifact removal. Use the Erase & Fill tool to reconstruct any disrupted surface areas. This tool can be accessed under Select > Modify > Erase & Fill.
    3. Click and drag to select problematic areas, then use the Fill option to restore surface continuity. Ensure that the filled regions blend smoothly with the surrounding geometry.
    4. For general surface refinement, use the Select tool to highlight specific areas of the model that require smoothing. Once selected, navigate to Modify > Smooth and apply the tool iteratively.
    5. Adjust the Smooth Strength slider between 10 and 50%, depending on the severity of the surface irregularities. Be cautious to maintain anatomical accuracy while smoothing. Use Shift + Left Click to deselect areas that do not require modification.
    6. Once smoothing is complete, use the Inspector tool to automatically identify and fill any remaining holes in the mesh. Check the model visually to ensure no major artifacts or surface irregularities exist.
    7. To integrate the myocardium and internal heart chambers into one cohesive model, apply Boolean operations. Go to Edit > Boolean Union and select the two separate parts (myocardium and chambers) to merge them.
    8. Ensure the operation successfully joins the structures without creating internal holes or overlaps. Inspect the intersections and adjust as needed by manually refining the merge areas using Erase & Fill or Smooth (Figure 3).
    9. Once the model has been unified and refined, export the final STL file by selecting Export > STL for 3D printing preparation. An optional programmatic check of the exported mesh is sketched at the end of this section.
  5. Model preparation for 3D printing
    1. Material selection and printer settings
      1. Use acrylonitrile butadiene styrene (ABS) filament, which allows for easy post-processing, such as acetone smoothing.
        NOTE: ABS is sensitive to temperature fluctuations, so ensure a stable environment during printing.
      2. Opt for an enclosed 3D printer for better temperature control.
    2. Printer and slicer settings
      1. Printer model: Use the appropriate printer. Here, a Creality Ender 3 with a custom-made metal enclosure was used.
      2. Filament material: Use ABS.
      3. Configure the following settings in Cura or similar slicing software.
        Nozzle Diameter: 0.5 mm
        Nozzle Temperature: ~240 °C (adjust based on filament brand)
        Bed Temperature: ~100 °C
        Layer Height: 0.24 mm
        Print Speed: ~100 mm/s (reduce to 50-60 mm/s for higher quality)
        Infill Density: 25% (to balance strength and material usage)
        Supports: Enable automatic supports (e.g., tree supports)
        Cooling Fan: Turn off to prevent warping
        Adhesion Aids: Use a brim or raft to improve bed adhesion
      4. Ensure calibration of the printer and adjust settings based on (a) Printer-specific tolerances, (b) Properties of the ABS filament, and (c) the desired trade-off between print speed and surface quality.
    3. Support structures and post-processing
      1. Support structures: Generate supports in the slicing software using built-in tools (e.g., Cura) to stabilize overhanging features during printing. Verify that supports do not interfere with delicate anatomical details.
      2. Support removal: Allow the printed model to cool completely to prevent damage during support removal. Remove supports carefully, using needle-nose pliers for more extensive supports. For smaller or delicate areas, gently remove supports by hand.
      3. Surface finishing: Inspect the printed model for rough areas, especially where supports were attached. Smooth these areas using fine-grit sandpaper (e.g., 200-400 grit) and small files for precise detailing, aiming for a clean, continuous surface to enhance anatomical accuracy.
      4. Advanced post-processing (Optional): If a polished finish is required, prepare a vapor-smoothing chamber with acetone and expose the model to acetone vapors for ~9 min (perform this step in a well-ventilated area with appropriate safety precautions [e.g., gloves, goggles]), and let the model dry completely before handling.
  6. Pausing points.
    1. Pause the protocol after each slice correction in step 1.3.1 by saving the project in 3D Slicer. Resume the segmentation later without loss of data.
    2. In step 1.4.1, after exporting the STL files, the post-processing steps can be paused if required, as they do not need to be performed in one continuous session.
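
For readers comfortable with scripting, the loading, window/level, and thresholding steps above (steps 1.1-1.2), together with the STL export used in step 1.4.1, can also be reproduced in the 3D Slicer Python console. The following is a minimal sketch only: the file path, output folder, and threshold values are placeholders, and the calls follow general Slicer 5.x scripting conventions rather than the exact workflow described above.

# A scripted sketch of steps 1.1-1.2 for the 3D Slicer Python console (Slicer 5.x).
# Paths, folder names, and thresholds are placeholders; for DICOM series, import
# the study through the DICOM module first and use the resulting volume node.
import slicer

# Step 1.1: load the CT volume and set a cardiac window/level (350/40 HU)
ctVolume = slicer.util.loadVolume("/path/to/heart_ct.nrrd")  # placeholder path
displayNode = ctVolume.GetDisplayNode()
displayNode.SetAutoWindowLevel(False)
displayNode.SetWindowLevel(350, 40)

# Step 1.2: threshold-based segmentation between 100 and 300 HU
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(ctVolume)
segmentId = segmentationNode.GetSegmentation().AddEmptySegment("heart")

segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setSourceVolumeNode(ctVolume)
segmentEditorNode.SetSelectedSegmentID(segmentId)

segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold", "100")
effect.setParameter("MaximumThreshold", "300")
effect.self().onApply()

# Export the segmentation as STL files for post-processing (used in step 1.4.1)
slicer.vtkSlicerSegmentationsModuleLogic.ExportSegmentsClosedSurfaceRepresentationToFiles(
    "/path/to/output_folder", segmentationNode)  # placeholder output folder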
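
The mesh cleanup in steps 1.4-1.5 is performed interactively in the prototype design and slicing software. As an optional, purely illustrative supplement (not part of the workflow described above), the exported STL can be given a quick programmatic sanity check with the open-source trimesh Python library before slicing; file names below are placeholders.

# Optional sanity check of the exported STL before slicing (illustrative only).
import trimesh

mesh = trimesh.load("heart_model.stl")          # placeholder file name
print("watertight:", mesh.is_watertight)        # a printable model should be watertight
print("bounding box (mm):", mesh.bounding_box.extents)

if not mesh.is_watertight:
    mesh.fill_holes()                           # try to close small holes
    trimesh.repair.fix_normals(mesh)            # make face normals consistent

if mesh.is_watertight:
    # Volume is only meaningful for a closed mesh; useful for estimating filament usage.
    print("enclosed volume (mm^3):", mesh.volume)

mesh.export("heart_model_checked.stl")          # placeholder output name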

2. Mixed reality

NOTE: Process the heart CT DICOM files into a holographic representation using CarnaLife Holo (referred to as mixed reality software).

  1. Prepare the hardware.
    1. Turn On the laptop and plug it into a power outlet. Turn the mixed reality headset On.
    2. Connect the router to the laptop.
  2. Load the CT image into the mixed reality headset from acquired CT DICOM files.
    1. Open the mixed reality software and log in (Figure 4).
    2. Select the appropriate folder with CT scans. Select the correct series of CT data (Figure 5).
    3. Check the IP address displayed when the headset is switched on and enter it in the designated place in the mixed reality software.
    4. Click on the Connect button to see the visualization on the mixed-reality headset.
  3. Segment the heart structure with a manual segmentation tool using the Scissors option (Figure 6). With it, mark the areas to be removed from the CT data reconstruction by left-clicking and dragging.
    1. Finish marking the cutting region by clicking the left mouse button and then confirming the cut in the pop-up.
  4. Choose a predefined preset (color visualization parameters) suitable for heart structure visualization from a list of available presets by clicking on its name: CT CARDIAC HOLLOW.
    1. If needed, adjust the visualization by changing the window using right click and hold while moving the cursor in the 3D View.
  5. Load 3D surface models of left and right ventricles and atriums.
    1. Click on the 3D Models section in the mixed reality software. Click on the Load Models button.
    2. Navigate to the folder with surface models. Select all four files and confirm by clicking Open. Adjust the colors of visualized models (Figure 7).
      1. Click on the Pencil icon on the 3D models list. Click the Aspect tab on the visible pop-up.
      2. Click on the white square next to the Color label. Select a suitable color with the Color Picker pop-up. Confirm by clicking the OK button. Left-click on 3D View.
      3. Repeat all steps for the remaining surface models.
  6. Create annotations of anatomical structures on 2D views by utilizing three 2D views (axial, sagittal, and coronal) to place the annotation point in the appropriate spot.
    1. Click on the Annotate section in the software.
    2. On the right side of the app window (in the default app layout) are three 2D views of reconstructed data.
      1. Go through slices by clicking Single- or Double-Arrow icons next to the slider on the right-hand side of every 2D View.
      2. Change the slice by clicking and holding the left Shift button while scrolling with the mouse wheel.
      3. Change the slice by dragging blue, red, or green lines (2D plane representations).
    3. After setting up the correct slice on the selected 2D View, zoom with the mouse wheel and place the annotation point by left clicking. Annotation will be created in the clicked spot.
    4. Go back to the Annotate section and click the Pencil icon on the annotation in the annotations list with the corresponding ID number.
    5. In the lower part of the pop-up, enter the text of the annotation, e.g., "Left ventricle".
    6. Adjust colors, thickness, and sizes of annotation in this pop-up. Go back to the 2D View with the placed annotation.
    7. Grab and move the annotation label outside of the 2D plane to a suitable place.
    8. Repeat all steps for all anatomical structures that need to be annotated.
  7. Load visualization state to obtain saved annotations of anatomical structures in visualization.
    1. Click the Load File icon next to the Floppy Disc icon in the upper right corner of the 3D View. In the pop-up, click the Folder icon, navigate to the directory with the saved visualization state file, and click Select folder.
    2. If the folder is selected correctly and a valid file exists for this particular dataset, a list of applicable visualization state files that the user can load will replace the No files found disclaimer.
    3. Left-click on a suitable visualization state to select it and confirm by clicking the Load button. After loading, the status of the loaded visualization state will be displayed.
  8. To see the prepared visualization in the holographic space, put on the headset and use the voice command Locate here to bring the 3D holographic CT scan reconstruction in front of the eyes. Adjust it using voice commands, e.g., Rotate, Zoom, and Cut Smart, combined with hand gestures (Figure 8).
  9. Use the Cut Smart voice command to apply and adjust the cutting plane perpendicular to the line of sight.
  10. Move and rotate the head to translate the movement and orientation to the applied cutting plane. Come closer to the hologram to move the cutting plane deeper into the holographic reconstruction; rotate the head 90° clockwise to rotate the cutting plane 90° clockwise, and so on.
  11. Perform these movements to see the inner parts of the heart structure in the holographic visualization, together with the previously loaded surface models and annotations of anatomical structures.

Results

The segmentation and 3D reconstruction protocol yielded two primary outputs for anatomy training: a 3D printed heart model and a 3D MR visualization of the heart. These results, which utilize patient-specific CT data, provide complementary tools for students to engage in hands-on and immersive learning experiences.

The 3D printed heart model allows students to physically interact with a tangible representation of cardiac anatomy. This model presents distinct external features, such as the myocardium, as well as internal structures, including the chambers and valves. In successful experiments, the anatomical accuracy was high, with well-defined features and minimal artifacts after post-processing. Figure 9 shows a fully processed 3D printed model with clear differentiation between the myocardium and internal chambers. In cases where the contrast in the CT images was suboptimal, segmentation errors led to inaccuracies in the model, such as irregular chamber sizes or incomplete valve structures. These issues were often correctable with manual intervention, including additional smoothing and artifact removal, as highlighted in Figure 10.

In contrast, the 3D mixed reality (MR) visualization offers a dynamic and interactive experience in which students can explore the heart in virtual space. The MR environment provides real-time interaction, including rotation, zoom, and sectioning through different anatomical planes, allowing for a more detailed understanding of complex structures like the coronary arteries or septal walls. Successful implementations of MR visualization presented highly accurate representations of both the external and internal anatomy. However, suboptimal visualizations (e.g., where the segmentation was flawed) led to distorted views of internal structures, affecting the MR model's realism and teaching effectiveness (Figure 11). For complex anatomical structures, the segmentation approach alone might not be sufficient. Thanks to volumetric rendering, it is possible to visualize different densities (represented by Hounsfield units) that are important for understanding the anatomy (Figure 12).

The techniques offer robust, complementary tools that enhance the learning experience by providing accurate and manipulable models, although their success depends on the quality of segmentation and reconstruction in the initial steps of the protocol. Overall, these results demonstrate the protocol's effectiveness in creating precise heart models from patient-specific CT data.

A preliminary study was conducted to evaluate students' perceptions of mixed reality technology in anatomy education, specifically in learning the structure of the heart. The study involved 106 students who, under the supervision of engineers, were able to use holograms for learning purposes. At the end of the session, they were asked: "Did mixed reality technology help you better understand the topic—the structure of the heart?" All respondents (100%) answered "yes." Students' knowledge was assessed immediately after the session through a short written test requiring them to describe three anatomical structures related to the heart's morphology. The average score was 2.037 out of a maximum of 3 (Table 1).

Figure 1: CT segmentation of heart. Axial (top left), coronal (bottom left), sagittal (bottom right), and 3D (top right) views of CT segmentation in the 3D Slicer software.

Figure 2: Post-processing. Views of segmentation 3D models in the prototype design software.

Figure 3: After post-processing. Views of segmentation 3D models in the prototype design software.

Figure 4: View of the mixed reality software. Application start screen. Clear and accessible login panel.

Figure 5: Selecting the correct series in the mixed reality software. Selection of available computed tomography images for holographic visualization.

Figure 6: Scissors option for cutting out parts of visualization in the mixed reality software. A tool that allows one to adjust the hologram to the user's needs in real time.

Figure 7: Adjusting colors of the holographic visualization in the mixed reality software. Adding colors to the visualization increases the accessibility and clarity of holograms.

Figure 8: Visualizations in holographic space created with the mixed reality software. A three-dimensional hologram with highlighted colors and computed tomography markers to aid orientation in space.

Figure 9: "X-ray" preview after post-processing and Boolean operation. View of 3D models in the prototype design software. Fully processed 3D printed model with clear differentiation between the myocardium and internal chambers.

Figure 10: Final 3D printed part preview after cutting the model in a four-chamber projection. View of 3D models in the prototype design software. Additional smoothing and artifact removal.

Figure 11: Visualization of CT data in the mixed reality software. Surface rendering represents the result of over-segmentation.

Figure 12: Exemplary visualization of CT data in the mixed reality software. Volume rendering, which visualizes different densities.

Total number of students (n) | 106
Number of students who used holograms for learning purposes (n) | 106
Number of students answering "YES" to the question "Did mixed reality technology help you better understand the topic—the structure of the heart?" (n) | 106
Number of students answering "NO" to the question "Did mixed reality technology help you better understand the topic—the structure of the heart?" (n) | 0
Minimum score | 0
Maximum score | 3
Average score of the students who took a short written test to describe three anatomical structures related to the heart's morphology | 2.037
Total score | 3

Table 1: Preliminary data of the study.

Discussion

Modern anatomy teaching is based primarily on classic, proven methods known for hundreds of years. Human cadavers are the basis for teaching future doctors, and anatomists emphasize their role not only in understanding the structures of the human body but also in shaping ethical attitudes28,29. Developing technology is expanding not only into everyday clinical procedures but also into teaching, hence the attempts to implement 3D printing7,30,31,32 and mixed reality in anatomy teaching33,34,35,36. Currently, the work of doctors is largely based on modern solutions, equipment, and broadly understood digitization, and the share of automation, robotization, and innovative solutions will continue to grow, in line with a trend that has been ongoing for years.

Supplementing classic forms of education with 3D printing, classes using mixed reality, or ultrasound can have a very positive impact on the preparation of future doctors for the profession, not only because of the opportunity to acquire more knowledge and compare visualizations in various types of imaging techniques, but also because of contact with new technologies, becoming familiar with their use, and giving an impulse to think about new applications, especially in the area of interest37.

Preparing models with 3D printing technology, as well as holograms with mixed reality technology, requires greater-than-standard commitment: planning their creation and gaining proficiency in conducting classes with them. It should be added that these are expensive solutions, especially mixed reality, which requires devices capable of displaying holograms (goggles) and engineering support, including an application and its operation. 3D printing, due to its greater popularity and lower costs38, is easier to implement, but it requires planning the purchase of a printer and filament, if the anatomy department would like to create its own models from scratch, as well as software for preparing print-ready models from DICOM imaging studies.

CarnaLife Holo enables users to upload both CT data and segmentation results, providing a unique approach rarely applied in the MR domain. Current state-of-the-art techniques typically visualize 3D models using surface rendering based on STL or OBJ files39,40. Consequently, users can only access segmentation results, with limited ability to directly view original data. This can pose challenges when analyzing small structures or pathologies, such as calcifications, where segmentation precision is critical.

Through raw data visualization (volume rendering), users can evaluate structures not only by geometry but also by analyzing the distribution of Hounsfield units (density) within the structure. Automatic heart segmentation, a common technique that facilitates the tedious task of manual segmentation, has its limitations41. It is constrained by the number of structures it can segment, especially in the presence of pathologies, and requires high-performance hardware for efficient processing.

To address these challenges, a combination of two visualization methods, volume rendering and surface rendering, has been proposed. This hybrid approach allows simultaneous visualization of segmented structures and the distribution of values within the analyzed data, offering users a more comprehensive tool for data interpretation.
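
As an illustration of how such a hybrid view can be assembled outside of the mixed reality software, the sketch below combines GPU volume rendering of the raw CT with surface rendering of one segmented structure in plain VTK (Python). It is a schematic example with assumed file paths and transfer-function values, not the implementation used in CarnaLife Holo.

# Hybrid visualization sketch: CT volume rendering + segmented surface (VTK, Python).
import vtk

# Volume rendering of the raw CT series (Hounsfield-unit transfer functions)
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("path/to/ct_dicom")          # placeholder DICOM folder

color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(-100, 0.0, 0.0, 0.0)               # air/fat: dark
color.AddRGBPoint(100, 0.8, 0.2, 0.2)                # soft tissue / myocardium
color.AddRGBPoint(400, 1.0, 0.9, 0.7)                # contrast-filled chambers

opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(-100, 0.0)
opacity.AddPoint(100, 0.15)
opacity.AddPoint(400, 0.6)

volumeProperty = vtk.vtkVolumeProperty()
volumeProperty.SetColor(color)
volumeProperty.SetScalarOpacity(opacity)
volumeProperty.ShadeOn()

volumeMapper = vtk.vtkGPUVolumeRayCastMapper()
volumeMapper.SetInputConnection(reader.GetOutputPort())

volume = vtk.vtkVolume()
volume.SetMapper(volumeMapper)
volume.SetProperty(volumeProperty)

# Surface rendering of one segmented chamber exported as STL
stlReader = vtk.vtkSTLReader()
stlReader.SetFileName("left_ventricle.stl")          # placeholder segmentation result
surfaceMapper = vtk.vtkPolyDataMapper()
surfaceMapper.SetInputConnection(stlReader.GetOutputPort())
surfaceActor = vtk.vtkActor()
surfaceActor.SetMapper(surfaceMapper)
surfaceActor.GetProperty().SetColor(0.2, 0.4, 1.0)
surfaceActor.GetProperty().SetOpacity(0.5)           # semi-transparent over the volume

# Render both in the same scene
renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
renderer.AddActor(surfaceActor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()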

In the case of heart anatomy, creating a 3D model is complicated because the standard automatic tools in the program are insufficient to extract heart tissue from a full image, owing to the heterogeneity of size, shape, and position of anatomical structures, the presence of artifacts, and blurred boundaries (low contrast) between adjacent tissues. Therefore, in addition to threshold segmentation, segmentation supervised by a physician in a slice-by-slice manner should be performed. The next stage is the adaptation of the model to 3D printing, which includes further removal of distortions resulting from noise during image acquisition. After printing, the surface of the models is gently dissolved with acetone to obtain a smoother finish. The use of ready-made models by students is simple, analogous to viewing and discussing human cadaver preparations. In the case of mixed reality, training in the use of the technology is required each time: correct fitting of the goggles to the head, as well as voice and gesture control. Due to the limited available equipment, it is not possible for a larger number of students to participate at the same time. In order to increase the accessibility of the imaged material, markers of specific anatomical structures were used to facilitate faster discussion of the preparations (holograms).

Mastering the segmentation and 3D reconstruction process in 3D Slicer can be challenging for beginners, as it involves learning multiple functionalities and workflows. Developing proficiency typically requires significant practice and experience. In our observations, achieving confidence with the software demanded approximately 20-30 h of dedicated work, which included segmenting at least 5-7 distinct heart models. 3D Slicer is an open-source platform that benefits from a robust online community. It offers extensive troubleshooting resources, problem-solving forums, and a wealth of tutorials and use cases. These resources facilitate the learning process by providing accessible guidance. Additionally, utilizing tools such as large language models (LLMs), including ChatGPT or Gemini, can further enhance understanding of the software and its features. During the learning phase, access to a mentor or supervisor experienced in medical imaging and anatomy proves highly advantageous. Immediate feedback on segmentation strategies and accuracy accelerates skill development and ensures that anatomical precision is maintained. Beginners should anticipate that initial attempts may be time-consuming and prone to errors. However, consistent practice makes segmentation and refinement processes significantly more intuitive and efficient. It is essential to approach this learning curve with patience, as steady engagement with the tool substantially improves speed and accuracy.

The critical steps of the presented protocol were the proper segmentation and extraction of heart tissue from the imaging study in order to create a three-dimensional model that is useful for 3D printing and mixed reality technologies.

The heart anatomy lesson using 3D printing and mixed reality technology was very well received by students, and the vast majority found the technological support useful, as it allowed for a better understanding of the topic discussed. According to the authors, new technologies should support the existing, classic didactic solutions and be increasingly widely used.

Disclosures

Maciej Stanuch, Marcel Pikuła, Oskar Trybus, and Andrzej Skalski are MedApp S.A. employees. MedApp S.A. is the company that manufactures the CarnaLife Holo solution.

Acknowledgements

The study was carried out as part of non-commercial cooperation.

Materials

Name | Company | Catalog Number | Comments
3D Slicer | The Slicer Community | https://www.slicer.org | Version 5.6.0
CarnaLife Holo | MedApp S.A. | https://carnalifeholo.com | 3D visualization software
Meshmixer | Autodesk Inc. | https://www.research.autodesk.com/projects/meshmixer/ | Prototype design software
Ender 3 | Creality | https://www.creality.com/products/ender-3-3d-printer | 3D printer
Cura | UltiMaker | https://ultimaker.com/software/ultimaker-cura/ | 3D printing software

References

  1. Marconi, S. et al. Value of 3D printing for the comprehension of surgical anatomy. Surg Endosc. 31, 4102-4110 (2017).
  2. Bernhard, J. C. et al. Personalized 3D printed model of kidney and tumor anatomy: a useful tool for patient education. World J Urol. 34 (3), 337-345 (2016).
  3. Gehrsitz, P. et al. Cinematic rendering in mixed-reality holograms: a new 3D preoperative planning tool in pediatric heart surgery. Front Cardiovasc Med. 8, 633611 (2021).
  4. Vatankhah, R. et al. 3D printed models for teaching orbital anatomy, anomalies and fractures. J Ophthalmic Vis Res. 16 (4), 611-619 (2021).
  5. O'Reilly, M. K. et al. Fabrication and assessment of 3D printed anatomical models of the lower limb for anatomical teaching and femoral vessel access training in medicine. Anat Sci Educ. 9 (1), 71-79 (2016).
  6. Garas, M. et al. 3D-Printed specimens as a valuable tool in anatomy education: A pilot study. Ann Anat. 219, 57-64 (2018).
  7. AbouHashem, Y. et al. The application of 3D printing in anatomy education. Med Educ Online. 20, 29847 (2016).
  8. Wu, A. M. et al. The addition of 3D printed models to enhance the teaching and learning of bone spatial anatomy and fractures for undergraduate students: a randomized controlled study. Ann Transl Med. 6 (20), 403 (2018).
  9. McMenamin, P. G. et al. The production of anatomical teaching resources using three-dimensional (3D) printing technology. Anat Sci Educ. 7 (6), 479-486 (2014).
  10. Tan, L. et al. Full color 3D printing of anatomical models. Clin Anat. 35 (5), 598-608 (2022).
  11. Garcia, J. et al. 3D printing materials and their use in medical education: a review of current technology and trends for the future. BMJ Simul Technol Enhanc Learn. 4 (1), 27-40 (2018).
  12. Milgram, P. et al. Augmented reality: A class of displays on the reality-virtuality continuum. Proceedings of the International Society for Optical Engineering. (SPIE 1994), Photonics for Industrial Applications; Boston, MA. The International Society for Optical Engineering, Bellingham, WA (1994).
  13. Brun, H. et al. Mixed reality holograms for heart surgery planning: first user experience in congenital heart disease. Eur Heart J Cardiovasc Imaging. 20 (8), 883-888 (2019).
  14. Lu, L. et al. Applications of mixed reality technology in orthopedics surgery: A pilot study. Front Bioeng Biotechnol. 22 (10), 740507 (2022).
  15. Condino, S. et al. How to build a patient-specific hybrid simulator for orthopaedic open surgery: benefits and limits of mixed-reality using the Microsoft HoloLens. J Healthc Eng. 2018, 5435097 (2018).
  16. Wu, X. et al. Mixed reality technology launches in orthopedic surgery for comprehensive preoperative management of complicated cervical fractures. Surg Innov. 25, 421-422 (2018).
  17. Łęgosz, P. et al. The use of mixed reality in custom-made revision hip arthroplasty: A first case report. J Vis Exp. 186, e63654 (2022).
  18. Wierzbicki, R. et al. 3D mixed-reality visualization of medical imaging data as a supporting tool for innovative, minimally invasive surgery for gastrointestinal tumors and systemic treatment as a new path in personalized treatment of advanced cancer diseases. J Cancer Res Clin Oncol. 148 (1), 237-243 (2022).
  19. Wish-Baratz, S. et al. Assessment of mixed-reality technology use in remote online anatomy education. JAMA Netw Open. 3 (9), e2016271 (2020).
  20. Owolabi, J., Bekele, A. Implementation of innovative educational technologies in teaching of anatomy and basic medical sciences during the COVID-19 pandemic in a developing country: The COVID-19 silver lining? Adv Med Educ Pract. 8 (12), 619-625 (2021).
  21. Xiao, J., Evans, D. J. R. Anatomy education beyond the Covid-19 pandemic: A changing pedagogy. Anat Sci Educ. 15 (6), 1138-1144 (2022).
  22. Robinson, B. L., Mitchell, T. R., Brenseke, B. M. Evaluating the use of mixed reality to teach gross and microscopic respiratory anatomy. Med Sci Educ. 30 (4), 1745-1748 (2020).
  23. Ruthberg, J. S. et al. Mixed reality as a time-efficient alternative to cadaveric dissection. Med Teach. 42, 896-901 (2020).
  24. Stojanovska, M. et al. Mixed reality anatomy using Microsoft HoloLens and cadaveric dissection: a comparative effectiveness study. Med Sci Educ. 30, 173-178 (2020).
  25. Zhang, L. et al. Using Microsoft HoloLens to improve memory recall in anatomy and physiology: a pilot study to examine the efficacy of using augmented reality in education. J Educ Tech Dev Exch. 12 (1), 17-31 (2020).
  26. Vergel, R. S. et al. Comparative evaluation of a virtual reality table and a HoloLens-based augmented reality system for anatomy training. IEEE Trans Hum Mach Syst. 50 (4), 337-348 (2020).
  27. Fedorov, A. et al. 3D slicer as an image computing platform for the quantitative imaging network. Magn Reson Imaging. 30 (9), 1323-1341 (2012).
  28. Boulware, L. E. et al. Whole body donation for medical science: a population-based study. Clin Anat. 17 (7), 570-577 (2004).
  29. Arráez-Aybar, L. A., Bueno-López, J. L., Moxham, B. J. Anatomists' views on human body dissection and donation: An international survey. Ann Anat. 196 (6), 376-386 (2014).
  30. Vaccarezza, M., Papa, V. 3D printing: a valuable resource in human anatomy education. Anat Sci Int. 90 (1), 64-65 (2015).
  31. Smith, C. F., Tollemache, N., Covill, D., Johnston, M. Take away body parts! An investigation into the use of 3D-printed anatomical models in undergraduate anatomy education. Anat Sci Educ. 11 (1), 44-53 (2018).
  32. Lim, K. H. et al. Use of 3D printed models in medical education: A randomized control trial comparing 3D prints versus cadaveric materials for learning external cardiac anatomy. Anat Sci Educ. 9 (3), 213-221 (2016).
  33. Richards, S. Student engagement using HoloLens mixed-reality technology in human anatomy laboratories for osteopathic medical students: an instructional model. Med Sci Educ. 33 (1), 223-231 (2023).
  34. Veer, V., Phelps, C., Moro, C. Incorporating mixed reality for knowledge retention in physiology, anatomy, pathology, and pharmacology interdisciplinary education: a randomized controlled trial. Med Sci Educ. 32 (6), 1579-1586 (2022).
  35. Romand, M. et al. Mixed and augmented reality tools in the medical anatomy curriculum. Stud Health Technol Inform. 270, 322-326 (2020).
  36. Birt, J. et al. Mobile mixed reality for experiential learning and simulation in medical and health sciences education. Information. 9 (2), 31 (2018).
  37. Kazoka, D., Pilmane, M., Edelmers, E. Facilitating student understanding through incorporating digital images and 3D-printed models in a human anatomy course. Educ Sci. 11 (8), 380 (2021).
  38. Shen, Z. et al. The process of 3D printed skull models for anatomy education. Comput Assist Surg (Abingdon). 24 (1), 121-130 (2019).
  39. Ye, W. et al. Mixed-reality hologram for diagnosis and surgical planning of double outlet of the right ventricle: a pilot study. Clin Radiol. 76 (3), 237.e1-237.e7 (2021).
  40. Bonanni, M. et al. Holographic mixed reality for planning transcatheter aortic valve replacement. Int J Cardiol. 412, 132330 (2024).
  41. Chen, L. et al. Automatic 3D left atrial strain extraction framework on cardiac computed tomography. Comput Methods Programs Biomed. 252, 108236 (2024).
