Summary

A computational framework was developed to automatically reconstruct virtual scenes comprising 3D avatars of soldiers using image and/or video data. The virtual scenes help estimate blast overpressure exposure during weapon training. The tool facilitates a realistic representation of human posture in blast exposure simulations in various scenarios.

Abstract

Military personnel involved in weapon training are subjected to repeated low-level blasts. The prevailing method of estimating blast loads involves wearable blast gauges. However, using wearable sensor data, blast loads to the head or other organs cannot be accurately estimated without knowledge of the service member's body posture. An image/video-augmented complementary experimental-computational platform for conducting safer weapon training was developed. This study describes the protocol for the automated generation of weapon training scenes from video data for blast exposure simulations. The blast scene extracted from the video data at the instant of weapon firing involves the service member body avatars, weapons, ground, and other structures. The computational protocol is used to reconstruct service members' positions and postures using this data. Service member body silhouettes extracted from image or video data are used to generate an anatomical skeleton and the key anthropometric data. These data are used to generate the 3D body surface avatars, segmented into individual body parts and geometrically transformed to match the extracted service member postures. The final virtual weapon training scene is used for 3D computational simulations of weapon blast wave loading on service members. The weapon training scene generator has been used to construct 3D anatomical avatars of individual service member bodies from images or videos in various orientations and postures. Results of the generation of a training scene from shoulder-mounted assault weapon system and mortar weapon system image data are presented. The Blast Overpressure (BOP) tool uses the virtual weapon training scene for 3D simulations of blast wave loading on the service member avatar bodies. This paper presents 3D computational simulations of blast wave propagation from weapon firing and corresponding blast loads on service members in training.

Introduction

During military training, service members and instructors are frequently exposed to low-level blasts with heavy and light weapons. Recent studies have shown that blast exposure could lead to decreased neurocognitive performance1,2 and alterations in blood biomarkers3,4,5,6. Repeated low-level blast exposure results in challenges in maintaining optimal performance and minimizing the risk of injury7,8. The conventional approach using wearable pressure sensors has drawbacks, particularly when it comes to precisely determining blast pressures on the head9. The known adverse effects of repeated low-level blast exposure on human performance (e.g., during training and in operational roles) exacerbate this problem. Congressional mandates (Sections 734 and 717) have stipulated the requirement for monitoring of blast exposure in training and combat and its inclusion in the service member's medical record10.

Wearable sensors can be used to monitor the blast overpressure during these combat training operations. However, these sensors are influenced by variables such as body posture, orientation, and distance from the blast source due to the complex nature of blast wave interactions with the human body9. The following factors affect pressure distribution and sensor measurements9:

Distance from the blast source: Pressure intensity varies with distance as the blast wave disperses and attenuates. Sensors closer to the blast record higher pressures, impacting data accuracy and consistency.

Body posture: Different postures expose various body surfaces to the blast, altering pressure distribution. For example, standing versus crouching results in different pressure readings9,11.

Orientation: The angle of the body relative to the blast source affects how the pressure wave interacts with the body, leading to discrepancies in readings9. Physics-based numerical simulations provide more accurate assessments by systematically accounting for these variables, offering a controlled and comprehensive analysis compared to wearable sensors, which are inherently influenced by these factors.
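The distance effect above is commonly expressed through the Hopkinson-Cranz scaled distance. The short sketch below illustrates the qualitative attenuation trend with a placeholder power-law fit; the constants `k` and `n` are illustrative assumptions only and are not the validated weapon blast kernels used by the BOP tool.

```python
def scaled_distance(r_m, charge_kg):
    """Hopkinson-Cranz scaled distance Z = R / W**(1/3), in m/kg^(1/3)."""
    return r_m / charge_kg ** (1.0 / 3.0)

def peak_overpressure_kpa(r_m, charge_kg, k=1772.0, n=1.9):
    """Illustrative power-law decay P = k * Z**(-n), in kPa.

    k and n are placeholder constants chosen for illustration; they do not
    represent any weapon-specific blast kernel.
    """
    return k * scaled_distance(r_m, charge_kg) ** -n
```

Doubling the standoff distance at a fixed charge mass reduces this illustrative peak overpressure by a factor of 2^n, consistent with the attenuation trend described above.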

In response to these challenges, there has been a concerted effort to develop more sophisticated tools. In this direction, the Blast Overpressure (BOP) tool was developed to estimate overpressure exposure under varying service member postures and positions around weapon systems. The BOP tool comprises two modules11: (a) the BOP tool SCENE module and (b) the BOP tool SITE module. These modules are used to estimate blast overpressure during weapon firing12. The BOP SCENE module estimates the blast overpressures experienced by individual service members or instructors participating in a training scenario, while the BOP SITE module reconstructs a bird's-eye view of the training course, depicting the blast overpressure zones generated by multiple firing stations. Figure 1 shows a snapshot of both modules. Currently, the BOP tool modules comprise weapon blast overpressure characteristics (equivalent blast source terms) for four DoD-defined Tier-1 weapon systems: the M107 .50 cal Special Application Sniper Rifle (SASR), the M136 Shoulder-Mounted Assault Weapon, the M120 Indirect Fire Mortar, and breaching charges. The term weapon blast kernel refers to an equivalent blast source term developed to replicate the same blast field surrounding a weapon system as that of the actual weapon. A more detailed description of the computational framework used for the development of the BOP tool is available for further reference11. The overpressure simulations are run using the CoBi-Blast solver engine, a multiscale, multiphysics tool for simulating blast overpressures whose blast modeling capabilities have been validated against experimental data from the literature12. The BOP tool is currently being integrated into the Range Managers Toolkit (RMTK) for use on different weapon training ranges. RMTK is a multi-service suite of desktop tools designed to meet the needs of range managers throughout the Army, Marine Corps, Air Force, and Navy by automating range operations, safety, and modernization processes.

Figure 1: Graphical user interface (GUI) for BOP tool SCENE module and BOP tool SITE module. The BOP SCENE module is designed to estimate the blast overpressures on service member and instructor body models, while the BOP SITE module is intended to provide an estimation of the overpressure contours on a plane that represents the training field. The user has the option to choose the height at which the plane is situated relative to the ground.

One limitation of the existing BOP tool SCENE module is its use of manually estimated data for building virtual service member body models, including their anthropometry, posture, and position. Manual generation of the virtual service members in the appropriate posture is labor-intensive and time-consuming11,12. The legacy BOP tool (legacy approach) uses a database of pre-configured postures to build the weapon training scene based on the image data (if available). Furthermore, since the postures are approximated manually through visual appraisal, correct postures may not be captured for a complex postural setting. As a result, this approach introduces inaccuracies in the estimated overpressure exposure for individual service members (as a change in posture can modify the overpressure exposure on more vulnerable regions). This paper presents improvements made to the existing computational framework to enable rapid and automatic generation of service member models using existing state-of-the-art pose estimation tools. It discusses the enhancement of the BOP tool, particularly emphasizing the development of a novel and rapid computational pipeline for reconstructing blast scenes from video and image data. In contrast to the legacy approach, the improved tool can also reconstruct detailed body models of service members and instructors at the moment of weapon firing, utilizing video data to create personalized avatars. These avatars accurately reflect the service members' posture. This work streamlines the process of generating blast scenes and facilitates a more rapid inclusion of blast scenes for additional weapon systems, significantly reducing the time and effort required for weapon training scene creation. Figure 2 shows a schematic of the enhanced computational framework discussed in this paper.

Figure 2: Schematic showing the overall process flowchart in the computational framework. The different steps include image/video data processing, virtual warfighter generation, blast scene reconstruction, and blast overpressure simulations.

The automated approach being implemented in the BOP tool represents a significant improvement in the computational tools available for estimating overpressure exposure during training and operations. This tool distinguishes itself through its rapid generation of personalized avatars and training scenarios, allowing for immersive blast overpressure simulations. This marks a significant departure from the traditional reliance on population-averaged human body models, offering a more precise and individualized approach.

Computational tools used in the automation process
The automation of virtual service member model generation is a multi-step process that leverages advanced computational tools to transform raw image or video data into detailed 3D representations. The entire process is automated but can be adapted to allow manual input of known measurements if needed.

3D pose estimation tools: At the core of the automation pipeline are the 3D pose estimation tools. These tools analyze the image data to identify the position and orientation of each joint in the service member's body, effectively creating a digital skeleton. The pipeline currently supports Mediapipe and MMPose, which offer Python APIs. However, the system is designed with flexibility in mind, allowing for the incorporation of other tools, such as depth cameras, provided they can output the necessary 3D joint and bone data.
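Conceptually, the output of such pose estimators is a set of 3D joint positions from which bone vectors and lengths can be derived. The following minimal sketch uses hypothetical landmark coordinates (not actual Mediapipe or MMPose output, whose landmark sets are longer and indexed rather than named) to show this step:

```python
# Mock of a pose estimator's 3D world landmarks, keyed by joint name.
# The coordinate values here are invented for illustration.
landmarks = {
    "left_shoulder": (-0.18, 1.45, 0.02),
    "left_elbow":    (-0.22, 1.18, 0.10),
    "left_wrist":    (-0.20, 0.95, 0.25),
}

def bone_vector(lm, parent, child):
    """Vector from parent joint to child joint."""
    px, py, pz = lm[parent]
    cx, cy, cz = lm[child]
    return (cx - px, cy - py, cz - pz)

def bone_length(lm, parent, child):
    """Euclidean length of the bone between two joints (meters)."""
    v = bone_vector(lm, parent, child)
    return sum(c * c for c in v) ** 0.5
```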

Anthropometric model generator (AMG): Once the 3D pose is estimated, the AMG comes into play. This tool utilizes pose data to create a 3D skin surface model that matches the service member's unique body dimensions. The AMG tool allows for either automated or manual input of anthropometric measurements, which are then linked to principal components within the tool to morph the 3D body mesh accordingly.
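The link between measurements and body shape can be sketched as a linear morph over principal components: the mesh is the mean shape plus a weighted sum of shape directions. The mean mesh and components below are toy placeholders standing in for the AMG's actual statistical model:

```python
# Toy flattened vertex coordinates; a real mesh has thousands of entries.
mean_mesh = [0.0, 1.0, 2.0]
components = [
    [0.1, 0.0, -0.1],   # hypothetical "stature" shape direction
    [0.0, 0.2, 0.0],    # hypothetical "chest girth" shape direction
]

def morph(mean, comps, coeffs):
    """Return mean + sum(coeff_i * component_i), vertex by vertex."""
    out = list(mean)
    for c, comp in zip(coeffs, comps):
        for i, v in enumerate(comp):
            out[i] += c * v
    return out
```

In the AMG, the coefficients would be derived from the subject's anthropometric measurements rather than supplied directly.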

OpenSim skeletal modeling: The next step involves the open-source OpenSim platform13, where a skeletal model is adjusted to align with the 3D pose data. Pose estimation tools do not enforce consistent bone lengths in the skeleton, which can lead to unrealistic asymmetry in the body. The use of an anatomically correct OpenSim skeleton produces a more realistic bone structure. Markers are placed on the OpenSim skeleton to correspond with the joint centers identified by the pose estimation tool. This skeletal model is then rigged to the 3D skin mesh using standard animation techniques.
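One simple way to restore the consistent bone lengths mentioned above is to average each left/right bone pair before fitting the skeleton. The helper below is a simplified illustration of that idea, not the OpenSim scaling procedure itself:

```python
def symmetrize(bone_lengths):
    """Average paired left_/right_ bone lengths to enforce symmetry.

    bone_lengths: dict mapping bone names (e.g., "left_femur") to lengths.
    Unpaired bones are left unchanged.
    """
    out = dict(bone_lengths)
    for name in bone_lengths:
        if name.startswith("left_"):
            right = "right_" + name[len("left_"):]
            if right in bone_lengths:
                avg = 0.5 * (bone_lengths[name] + bone_lengths[right])
                out[name] = out[right] = avg
    return out
```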

Inverse kinematics and Python scripting: To finalize the pose of the virtual service member, an inverse kinematics algorithm is employed. This algorithm adjusts the OpenSim skeleton model to best match the estimated 3D pose. The entire posing pipeline is fully automated and implemented in Python 3. Through the integration of these tools, the process of generating virtual service member models has been significantly expedited, reducing the time required from days to seconds or minutes. This advancement represents a leap forward in the simulation and analysis of weapon training scenarios, providing rapid reconstructions of specific scenarios documented using images or video.
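As a simplified illustration of inverse kinematics, the classic two-link planar arm can be solved in closed form with the law of cosines. The full pipeline instead solves a whole-body OpenSim problem, but the principle of choosing joint angles so the skeleton matches target positions is the same:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Joint angles (q1, q2) placing a two-link planar arm's tip at (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))  # clamp for unreachable targets
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def forward(q1, q2, l1, l2):
    """Forward kinematics: tip position for given joint angles."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y
```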

Protocol

The images and videos used in this study were not directly obtained by the authors from human subjects. One image was sourced from free resources on Wikimedia Commons, which is available under a public domain license. The other image was provided by collaborators at the Walter Reed Army Institute of Research (WRAIR). The data obtained from WRAIR was de-identified and was shared in accordance with their institutional guidelines. For the images provided by WRAIR, the protocol followed the guidelines of WRAIR's human research ethics committee, including obtaining all necessary approvals and consents.

1. Accessing the BOP Tool SCENE module

  1. Open the BOP Tool SCENE Module from the BOP Tool Interface by clicking the BOP Tool SCENE Module button.
  2. Click on Scenario Definition and then click on the Scenario Detail tab.

2. Reading and processing the image data

  1. Click the POSETOOL IMPORT button in the BOP tool SCENE module. A pop-up window opens, asking the user to select the relevant image or video.
  2. Navigate to the folder and select the relevant image/video using mouse operations.
  3. Click Open in the window pop-up once an image is selected.
    NOTE: This operation reads the image or video file, generates an image if a video is selected, runs the pose estimation algorithm in the background, generates the virtual service member models involved in the weapon training scene, and loads them into the BOP Tool SCENE module in their respective positions.

3. Configuring the weapon and shooter

  1. Choose a name for the scenario using the text box under Scenario Name. This is a user choice. For the scenario discussed here, the developers have chosen the scenario name BlastDemo1.
  2. Choose a custom name under the Name field under the Weapon Definition. This is, again, a user choice. For the scenario discussed here, the developers have chosen the weapon name 120Mortar.
  3. Select the appropriate weapon class, e.g., HEAVY MORTAR, from the list of options in the drop-down menu.
  4. Select the weapon (M120 in this case) from the list of choices in the drop-down. Upon selecting the weapon system, the corresponding blast kernel will be automatically loaded into the GUI under the Charges subtab.
    NOTE: The drop-down will have the different possible weapon systems specific to the weapon class chosen above.
  5. Select the Ammunition Shell (Full Range Training Round) for the chosen weapon system using the drop-down options.
  6. Select No Shooter under the Anthropometry, Posture, Helmet, and Protective armor fields under the Shooter Definition from the drop-down options. See Figure 3 for configured shooter and service members.
    NOTE: Since the shooter will be included in the scene imported through step 4, no shooter was selected under the Shooter Definition.

Figure 3: Imported blast scene from the image data. The blast scene is visible in the visualization window on the right-hand side of the GUI tool.

4. Configuring the service members

  1. Click on the Service Members tab. This tab controls the position and orientation of the service members imported from the image data; use the X, Y, Z, and Rotation fields to adjust them. For the avatar corresponding to the assistant gunner, the authors adjusted the Y position from 2.456 to 2 for a more reasonable assistant gunner position.
  2. Choose custom names for all the virtual service member models imported into the GUI using the Name field under the Service Members tab. S1, S2, S3, S4, and S5 are chosen here for all the automatically imported service member models.
    NOTE: The other fields are automatically filled. Users can modify them to refine the automatically estimated positions/postures.
  3. Use the Delete button to remove S2, S4, and S5. In the scenario demonstrated here, only the shooter and assistant gunner are present at the time of firing.
    NOTE: This was performed to demonstrate the custom user options to delete or add service member models as required to an existing scene.

5. Configuring the virtual sensors

  1. Navigate to the Sensors tab. Add a new virtual sensor by clicking on Add Sensor.
  2. Choose a custom name under the Name field. The developers have chosen V1 for demonstration in this paper.
  3. Select the type as Virtual by clicking the drop-down under the Sensor Type field.
  4. Choose the location of the sensor to be (-0.5, 2, 0.545) by editing the text boxes under the X, Y, and Z fields. For the scenario discussed here, the developers have created four different sensors at four different locations for demonstration purposes. The developers have chosen V2, V3, and V4 as the sensor names for the additional sensors.
  5. Repeat the steps from 5.1 to 5.4 to create additional sensors at (-0.5, 1, 0.545), (-0.5, 0.5, 0.545), and (-0.5, 0, 0.545). Leave the Rotation value as zero.
    NOTE: The user can also choose planar sensors during which the rotation can be used to adjust the orientation of the sensor with respect to the blast charge.
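Because the four virtual sensors in steps 5.1-5.5 share the same X and Z coordinates and differ only in height, their definitions can also be generated programmatically. The helper below is a hypothetical convenience script for planning the sensor layout, not part of the BOP tool GUI:

```python
def sensor_line(x, z, heights, prefix="V"):
    """Build (name, (x, y, z)) entries for sensors stacked along Y."""
    return [(f"{prefix}{i + 1}", (x, y, z)) for i, y in enumerate(heights)]

# The four sensor heights used in steps 5.4-5.5.
sensors = sensor_line(-0.5, 0.545, [2, 1, 0.5, 0])
```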

6. Saving and running the program

  1. Save the weapon training scene by clicking the Save Scenario button in the GUI at the top.
  2. Run the blast overpressure simulation for the M120 weapon training scene by clicking the Run Scenario button at the bottom. The simulation progress is shown using the progress bar at the bottom.

7. Visualizing the results

  1. Navigate to the Model View tab to examine the blast overpressure exposure simulation.
  2. Click the Current button to load the completed simulation into the visualization window.
  3. Once the simulation is loaded, visualize the simulation using the Play button at the bottom of the screen.
    1. The user can choose between the different options in the navigation bar to play and pause the simulation at different speeds. The user can control the interactive window using the mouse: right-click and drag to rotate the model, use the left button to translate the model, and the middle button to zoom in/out.
  4. After reviewing the simulation, navigate to the Blast Load Metrics tab by clicking on it.
  5. Click the Current button to load the overpressure plots at the virtual sensor locations.
    ​NOTE: This will load the overpressure information at the different sensor locations in different plots.
  6. Navigate to the Series to Display drop-down and check the box for the corresponding sensor to plot the overpressure recorded at that virtual sensor (see Figure 4).

Figure 4: Plot controls for overpressures at different virtual sensors over time. Users can choose the series to display by checking or unchecking the different sensors.

Representative Results

Automated reconstruction of virtual service members and blast scenes
Automated virtual service member body model and weapon training scene model generation was achieved through the BOP Tool automation capabilities. Figure 5 shows the virtual training scene generated from the image data. As can be observed here, the resultant scene was a good representation of the image data. The image used for demonstration in Figure 5 was obtained from Wikimedia Commons.

Figure 5: Virtual weapon training scene from the image data. The left-hand side picture shows the image data corresponding to the firing of an AT4 weapon, and the right-hand side shows the virtual automatically generated weapon training scene. This figure has been obtained from Wikimedia Commons.

Additionally, the approach was applied to reconstruct the M120 weapon firing scene. The image was collected by WRAIR as part of a pressure data collection effort for the M120 mortar weapon. Figure 6 below illustrates the reconstructed virtual weapon firing scene alongside the original image. A discrepancy was observed in the position of the assistant gunner in the virtual reconstruction. This can be corrected by adjusting the assistant gunner's position using the BOP GUI user options. Furthermore, the instructor's pelvic posture appeared inaccurate, likely due to obstruction from the stakes in the images. Integrating this approach with other depth imaging modalities would be beneficial in addressing these discrepancies.

Figure 6: Virtual weapon training scene from the image data. The left-hand side picture shows the image data corresponding to the firing of an M120 weapon, and the right-hand side shows the virtual automatically generated weapon training scene.

Validation of the automated scene generation approach
The human body model generator employed in this study underwent validation using the ANSUR II human body scan database14, which included anthropometric measurements from medical imaging data. An automated reconstruction method, which utilized this human body model generator, underwent qualitative validation with the data at hand. This validation process involved comparing the reconstructed models with experimental data (images) by overlaying them. Figure 7 presents a comparison between the 3D avatar models and the experimental data. However, a more thorough validation of this method is necessary and would require additional experimental data from the scene, including precise positions, postures, and orientations of the different service members involved in a training scene.

Figure 7: Qualitative comparison of the virtual human body model generated with the image. The left panel shows the original image, the middle panel shows the generated virtual body model, and the right panel shows the virtual model overlaid on the original image.

Representative blast overpressure simulations
After setting up the blast scene, the authors were able to proceed to the critical phase of running blast overpressure (BOP) simulations. These simulations enable understanding of the distribution of blast loads on the different service members involved in a weapon firing scene. Figure 8 presents the results of these BOP simulations during the M120 weapon firing event. The simulations provide a detailed visualization of the overpressure loads on the virtual service members in the scene at different instants over time. In conclusion, the results demonstrate not just the feasibility but also the effectiveness of the protocol in creating accurate and analytically useful reconstructions of weapon training scenarios, thereby paving the way for more advanced studies in military training safety and efficiency.

Figure 8: Blast overpressure exposure on the shooter. (A-D) The four panels show the model-predicted blast overpressure on the virtual service members involved in M120 mortar firing at different time instants. Panels (C) and (D) show the overpressure propagation due to ground reflection.

Discussion

The computational framework presented in this paper significantly accelerates the generation of blast weapon training scenes compared to the previously used manual methods based on visual appraisal. This approach demonstrates the framework's ability to rapidly capture and reconstruct diverse military postures.

Advantages of the current approach
The creation of virtual human body models in specific postures and positions is a challenging task, with limited tools available for this purpose. The conventional method employed in the legacy approach, specifically for the BOP tool, utilized CoBi-DYN14,15,16,17. This method involved manually creating virtual human body models with clothing, helmets, protective armor, and boots. The models were generated through visual approximation, lacking a systematic approach. In the legacy BOP tool, CoBi-DYN was used to create a database of human body models that was accessible during the Scenario Configuration step. Users would manually select an approximate postural configuration and position the models for a particular weapon system to run the BOP scenario. Although the blast scene reconstruction from the existing service member database (accessed from the drop-down in the BOP tool) was relatively fast, the initial creation of the virtual service member model database was time-consuming, taking around 16-24 h per scene due to the manual and approximate nature of the process. In contrast, the new approach leverages well-established pose estimation tools for more automated and rapid creation of virtual human body models. These tools automatically utilize video and image data to rapidly reconstruct blast scenes with virtual service member avatars, significantly reducing the time required to create the human body database. The novel approach involves a single button click to generate or reconstruct the full virtual scene from imaging data without the need for additional software. The entire process now takes approximately 5-6 s per scene (after reading the image data, as demonstrated in the video), a substantial improvement in efficiency over the legacy method.
This method is not intended to replace the original approach but rather to complement it by speeding up the generation of virtual service member models (which can be added to the service member body model database in the future). It simplifies the addition of new configurations with varying complexities and facilitates the integration of new weapon systems in the future, thereby enhancing the extensibility of the BOP Tool. Compared to the legacy approach, it is evident that the presented method offers a more streamlined and automated solution, reducing manual effort and time while improving the accuracy and systematic nature of virtual human body model creation. This highlights the strength and innovation of the described method in the field.

Validation of automated scene generation
A qualitative validation of the approach was conducted by overlaying the reconstructed scenarios on the image data. However, quantitative validation was not feasible due to the lack of available data on positioning and posture for these images. The authors recognize the importance of a more thorough validation and plan to address this in future work. To achieve this, the authors envision a comprehensive data collection effort to obtain precise positioning and posture information. This will enable us to perform a detailed quantitative validation, ultimately enhancing the robustness and accuracy of the approach.

Validation of the blast exposure simulations
The weapon blast kernels were developed and validated using pencil probe data collected during weapon firing. Further details on the weapon blast kernel and relevant exposures will be included in an upcoming publication. Some information in this regard is also available in the past publications11,12. This ongoing effort will contribute to improving the precision and effectiveness of the blast overpressure simulations, reinforcing the validity of the tool.

The approach presented here can also be utilized to generate virtual avatars of service members, which can subsequently be incorporated into the database for selection using the BOP tool. Although the current process does not automatically save the models to the database, the authors plan to include this feature in future versions of the BOP tool. Additionally, the authors possess in-house tools that allow for manual modification of posture once the automated posture and the 3D model have been generated from the image data. Currently, this capability exists independently and is not integrated into the BOP tool, as further development is required on the user interface. Nevertheless, this is a work in progress, and the authors intend to incorporate it into future versions of the BOP tool.

The blast load data could also be exported in the form of an ASCII text file, and further post-processing steps could be applied to investigate the blast dose patterns for different weapon systems in more detail. Currently, work is underway to develop advanced post-processing tools for output metrics such as impulse, intensity, and others that could help users understand and investigate more complex repeated blast loading during these scenarios. Furthermore, the tools are developed for computational efficiency and enable fast simulation speeds. Therefore, these tools allow us to run inverse optimization studies to determine the optimal posture and position during a weapon firing scenario. Such improvements enhance the applicability of the tool for optimizing the training protocols in different weapon training ranges. The blast dose estimations could also be used to develop more refined high-resolution macroscale computational models for different vulnerable anatomical regions such as the brain, lungs, and others18,19,20 and microscale injury models20,21. Here, the helmet and armor incorporated into the models are solely for visual representation and do not impact the blast dose in these simulations. This is due to the models being considered rigid structures, which means their inclusion does not modify or influence the results of the blast overpressure simulations.
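As a concrete example of one post-processing metric mentioned above, impulse can be computed as the time integral of overpressure over an exported time/pressure series using the trapezoidal rule:

```python
def impulse(t, p):
    """Trapezoidal integral of overpressure p (e.g., kPa) over time t (s).

    t and p are equal-length sequences from an exported sensor trace.
    """
    return sum(0.5 * (p[i] + p[i + 1]) * (t[i + 1] - t[i])
               for i in range(len(t) - 1))
```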

This paper leverages an existing open-source pose estimation tool for estimating human pose in weapon training scenarios. Please note that the framework developed and discussed here is posing-tool agnostic, i.e., an existing tool can be replaced with a new one as the state of the art advances. Overall, the testing demonstrated that while modern software tools are extremely powerful, image-based pose detection is a challenging task. However, several recommendations can be used to enhance pose detection performance. These tools perform best when the person of interest is in clear view of the camera. While this is not always possible during weapons training exercises, considering this when placing the camera can improve pose detection results. Furthermore, service members performing training exercises frequently wear camouflaged fatigues that blend into their surroundings. This makes the service members harder to detect for both the human eye and machine learning (ML) algorithms. Methods exist for enhancing the ability of ML algorithms to detect humans wearing camouflage23, but these are non-trivial to implement. Where possible, collecting images of weapons training in a location with a high-contrast background can improve pose detection.

Additionally, the conventional approach to estimating 3D quantities from images is to use photogrammetry techniques with multiple camera angles. Recording images/videos from multiple angles can also be used to improve pose estimation. Fusing pose estimates from multiple cameras is relatively straightforward24. Another photogrammetry technique that could improve results is to use a checkerboard or other object of known dimensions as a common reference point for each camera. A challenge with using multiple cameras is to time-synchronize them. Custom-trained machine learning models could be developed to detect characteristics such as a helmet instead of a face or to detect people wearing military-specific apparel and equipment (e.g., boots, vests, body armor). Existing pose estimation tools can be augmented using custom-trained ML models. Although this is time-consuming and tedious, it can significantly improve the performance of the pose estimation model.
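Fusing pose estimates from multiple cameras can be as simple as a confidence-weighted average once all estimates are expressed in a common world frame (i.e., after camera extrinsics have been applied). A minimal sketch for a single joint follows; the input format is an assumption for illustration:

```python
def fuse(estimates):
    """Confidence-weighted average of per-camera 3D estimates of one joint.

    estimates: list of ((x, y, z), confidence) pairs, all expressed in the
    same world coordinate frame.
    """
    wsum = sum(w for _, w in estimates)
    return tuple(sum(p[i] * w for p, w in estimates) / wsum
                 for i in range(3))
```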

In summary, this paper outlines the various components of the enhanced blast overpressure framework. The authors also recognize the potential for enhancing its applicability and efficacy through more seamless integration and full automation of the pipeline. Elements such as the generation of a scaled 3D body mesh with the AMG tool are in the process of being automated to reduce manual user input, and work is ongoing to integrate these capabilities into the BOP tool SCENE module. As this technology develops, it will be made accessible to all DoD stakeholders and labs. Furthermore, ongoing work includes the characterization and validation of weapon kernels for additional weapon systems. This continued effort to refine and validate the methodologies ensures that the tools remain at the forefront of technological advancement, contributing significantly to the safety and efficacy of military training exercises. Future publications will provide more details on these developments.

Disclosures

The authors have nothing to disclose.

Acknowledgements

The research is funded by the DoD Blast Injury Research Coordinating office under the MTEC Project Call MTEC-22-02-MPAI-082. The authors also acknowledge the contribution of Hamid Gharahi for weapon blast kernels and Zhijian J Chen for developing the modeling capabilities for weapon firing blast overpressure simulations. The views, opinions, and/or findings expressed in this presentation are those of the authors and do not reflect the official policy or position of the Department of the Army or the Department of Defense.

Materials

Name | Company | Catalog Number | Comments
Anthropometric Model Generator (AMG) | CFD Research | N/A | For generating 3D human body models with different anthropometric characteristics. The tool is DoD open source.
BOP Tool | CFD Research | N/A | For setting up blast scenes and overpressure simulations. The tool is DoD open source.
BOP Tool SCENE Module | CFD Research | N/A | For setting up blast scenes and overpressure simulations. The tool is DoD open source.
Mediapipe | Google | Version 0.9 | Open-source pose estimation library.
MMPose | OpenMMLab | Version 1.2 | Open-source pose estimation library.
OpenSim | Stanford University | Version 4.4 | Open-source musculoskeletal modeling and simulation platform.
Python 3 | Anaconda Inc | Version 3.8 | The open-source Individual Edition containing Python 3.8 and preinstalled packages for video processing and connecting the pose estimation tools.

References

  1. LaValle, C. R., et al. Neurocognitive performance deficits related to immediate and acute blast overpressure exposure. Front Neurol. 10, (2019).
  2. Kamimori, G. H., et al. Longitudinal investigation of neurotrauma serum biomarkers, behavioral characterization, and brain imaging in soldiers following repeated low-level blast exposure (New Zealand Breacher Study). Military Med. 183 (suppl_1), 28-33 (2018).
  3. Wang, Z., et al. Acute and chronic molecular signatures and associated symptoms of blast exposure in military breachers. J Neurotrauma. 37 (10), 1221-1232 (2020).
  4. Gill, J., et al. Moderate blast exposure results in increased IL-6 and TNFα in peripheral blood. Brain Behavior Immunity. 65, 90-94 (2017).
  5. Carr, W., et al. Ubiquitin carboxy-terminal hydrolase-L1 as a serum neurotrauma biomarker for exposure to occupational low-level blast. Front Neurol. 6, (2015).
  6. Boutté, A. M., et al. Brain-related proteins as serum biomarkers of acute, subconcussive blast overpressure exposure: A cohort study of military personnel. PLoS One. 14 (8), e0221036 (2019).
  7. Skotak, M., et al. Occupational blast wave exposure during multiday 0.50 caliber rifle course. Front Neurol. 10, (2019).
  8. Kamimori, G. H., Reilly, L. A., LaValle, C. R., Silva, U. B. O. D. Occupational overpressure exposure of breachers and military personnel. Shock Waves. 27 (6), 837-847 (2017).
  9. Misistia, A., et al. Sensor orientation and other factors which increase the blast overpressure reporting errors. PLoS One. 15 (10), e0240262 (2020).
  10. National Defense Authorization Act for Fiscal Year 2020. Wikipedia Available from: https://en.wikipedia.org/w/index.php?title=National_Defense_Authorization_Act_for_Fiscal_Year_2020&oldid=1183832580 (2023)
  11. Przekwas, A., et al. Fast-running tools for personalized monitoring of blast exposure in military training and operations. Military Med. 186 (Supplement_1), 529-536 (2021).
  12. Spencer, R. W., et al. Fiscal year 2018 National Defense Authorization Act, Section 734, Weapon systems line of inquiry: Overview and blast overpressure tool-A module for human body blast wave exposure for safer weapons training. Military Med. 188 (Supplement_6), 536-544 (2023).
  13. Delp, S. L., et al. OpenSim: open-source software to create and analyze dynamic simulations of movement. IEEE Trans Biomed Eng. 54 (11), 1940-1950 (2007).
  14. Zhou, X., Sun, K., Roos, P. E., Li, P., Corner, B. Anthropometry model generation based on ANSUR II database. Int J Digital Human. 1 (4), 321 (2016).
  15. Zhou, X., Przekwas, A. A fast and robust whole-body control algorithm for running. Int J Human Factors Modell Simulat. 2 (1-2), 127-148 (2011).
  16. Zhou, X., Whitley, P., Przekwas, A. A musculoskeletal fatigue model for prediction of aviator neck manoeuvring loadings. Int J Human Factors Modell Simulat. 4 (3-4), 191-219 (2014).
  17. Roos, P. E., Vasavada, A., Zheng, L., Zhou, X. Neck musculoskeletal model generation through anthropometric scaling. PLoS One. 15 (1), e0219954 (2020).
  18. Garimella, H. T., Kraft, R. H. Modeling the mechanics of axonal fiber tracts using the embedded finite element method. Int J Numer Method Biomed Eng. 33 (5), (2017).
  19. Garimella, H. T., Kraft, R. H., Przekwas, A. J. Do blast induced skull flexures result in axonal deformation. PLoS One. 13 (3), e0190881 (2018).
  20. Przekwas, A., et al. Biomechanics of blast TBI with time-resolved consecutive primary, secondary, and tertiary loads. Military Med. 184 (Suppl 1), 195-205 (2019).
  21. Gharahi, H., Garimella, H. T., Chen, Z. J., Gupta, R. K., Przekwas, A. Mathematical model of mechanobiology of acute and repeated synaptic injury and systemic biomarker kinetics. Front Cell Neurosci. 17, 1007062 (2023).
  22. Przekwas, A., Somayaji, M. R., Gupta, R. K. Synaptic mechanisms of blast-induced brain injury. Front Neurol. 7, 2 (2016).
  23. Liu, Y., Wang, C., Zhou, Y. Camouflaged people detection based on a semi-supervised search identification network. Def Technol. 21, 176-183 (2023).
  24. Real time 3D body pose estimation using MediaPipe. Available from: https://temugeb.github.io/python/computer_vision/2021/09/14/bodypose3d.html (2021)
