April 21st, 2023
0:04
Introduction
1:02
Preparing the Motion Capture System
1:28
Calibrating the Cameras
2:02
Creating and Preparing Stimulus Object
3:10
Co‐Registering Real and Mesh Model Versions of the Stimulus Object
4:25
Setting up Markers on the Hands
5:08
Acquiring a Single Trial and Labeling the Markers
6:26
Reconstructing the Skeletal Joint Poses
7:07
Generating Hand Mesh Reconstructions and Hand‐Object Contact Region Estimates
7:46
Results: Estimation of Contact Regions for Hand‐Object Interactions During Multi‐Digit Grasps
9:02
Conclusion
Transcript
Previous behavioral research on human grasping has been limited to highly constrained measurements in tightly controlled scenarios. Our protocol allows a much richer characterization of complex naturalistic grasping behavior. This technique yields detailed maps of hand-object contact surfaces during multi-digit grasping.
This allows us to investigate how humans grasp objects with an unprecedented level of sophistication. Accurate measurements of human grasping abilities are necessary to understand motor control, haptic perception, and human-computer interaction. These data can inform the design of robotic grippers and upper-limb prosthetics.
Demonstrating the procedure will be Kira Dehn, a graduate student completing her Master's thesis in my laboratory. To begin, position a workbench within a tracking volume imaged from multiple angles by motion tracking cameras arranged on a frame surrounding the workspace. Prepare reflective markers by attaching double-sided adhesive tape to the base of each marker.
Execute Qualisys Track Manager, or QTM, as an administrator. Place the L-shaped calibration object within the tracking volume. Within the QTM, click Calibrate in the Capture menu and wait for a calibration window to open.
Select the duration of the calibration and press Okay. Wave the calibration wand across the tracking volume for the duration of the calibration. Press the Export button and specify a file path to export the calibration as a text file.
Accept the calibration by pressing Okay. To create a stimulus object, construct a virtual 3D object model in the form of a polygon mesh. Use a 3D printer to construct a physical replica of the object model.
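Before printing, it can help to confirm that the mesh is printable. Below is a minimal sketch, assuming the Python package trimesh and hypothetical file names rather than the protocol's own tooling.

import trimesh

# Hypothetical file name for the polygon mesh of the stimulus object.
mesh = trimesh.load("stimulus_object.obj", force="mesh")

# Non-watertight meshes commonly fail to slice or print correctly.
print("watertight:", mesh.is_watertight)

# Export an STL copy, a format most slicer programs accept.
mesh.export("stimulus_object.stl")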
To prepare a stimulus object, attach four non-coplanar reflective markers to the surface of the real object. Place the object within the tracking volume. In the project repository, execute the indicated Python script.
Follow the instructions provided by the script to perform a one-second capture of the 3D position of the object markers. Select all the markers of the rigid body. Right-click and select Define Rigid Body, or 6DOF, then Current Frame.
Enter the name of the rigid body and press Okay. In the File menu, select Export to TSV. In the new window, check the 3D, 6D, and Skeleton boxes in the Data Type settings.
Check all the boxes in the General settings. Press Okay and then Save.
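To see what the exported trajectory file looks like to downstream code, here is a minimal sketch assuming pandas and hypothetical file and parameter names; the header row index is simply the line of the TSV that holds the column names, the same value entered in the marker header field below.

import pandas as pd

# Hypothetical name of the exported 3D trajectory file.
trajectory_file = "trial_trajectories.tsv"

# Assumption: QTM writes several metadata lines before the column names;
# adjust this row index to match your export.
header_row = 11

traj = pd.read_csv(trajectory_file, sep="\t", header=header_row)
print(traj.columns.tolist())  # per-marker X/Y/Z columns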
Open Blender and navigate to the Scripting workspace. Open the indicated file and press Run. Navigate to the Layout workspace and press N to toggle the sidebar. Within the sidebar, navigate to the Custom tab.
Select the .obj file to be co-registered and press the Load Object button. Select the trajectory file exported earlier and specify the names of the markers attached to the rigid object, separated by semicolons. In the marker header field, specify the line in the trajectory file containing the column names of the data.
Next, select the corresponding rigid body file with the 6D suffix and specify the name of the rigid body defined in the earlier step. In the 6D header field, specify the line in the rigid body file containing the column names of the data. Press Load Markers, then translate and rotate either the marker object or the stimulus mesh to align them.
Specify a mesh output file and press Run Co-Registration to output an .obj file containing the co-registered stimulus mesh.
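Conceptually, co-registration expresses the stimulus mesh in the rigid body's coordinate frame so that the tracked 6DOF pose can place it in the lab frame on every frame of a trial. Below is a minimal numpy sketch with placeholder values in place of the real per-frame data.

import numpy as np

# Placeholder inputs; in practice these come from the co-registered .obj file
# and one row of the exported 6D rigid-body data.
vertices = np.zeros((100, 3))  # co-registered mesh vertices, one row per vertex
R = np.eye(3)                  # per-frame 3x3 rotation of the rigid body
t = np.zeros(3)                # per-frame translation of the rigid body

# Rigid-body transform of every vertex into the lab coordinate frame.
vertices_lab = vertices @ R.T + t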
Next, attach 24 spherical reflective markers to different landmarks of the participant's hand using double-sided tape. Place the markers centrally on top of the fingertips and on the distal interphalangeal, proximal interphalangeal, and metacarpophalangeal joints of the index, middle, ring, and small fingers. For the thumb, position one marker each on the fingertip and the basal carpometacarpal joint, and a pair of markers each on the metacarpophalangeal and interphalangeal joints. Finally, place markers at the center of the wrist and on the scaphotrapeziotrapezoidal joint. Ask the participant to place their hand flat on the workbench with the palm facing downward and to close their eyes.
Place the stimulus object on the workbench in front of the participant. While the QTM is running, execute the indicated Python script in the project repository. Ask the participant to open their eyes and follow the instructions provided by the script to capture a single trial of the participant grasping the stimulus object.
Within the QTM, drag and drop the individual marker trajectories from the unidentified trajectories to the labeled trajectories and label them according to the naming convention. Select all the markers attached to the hand, right-click, and select Generate AIM Model from Selection. In the new window, select Create a New Model Based on Marker Connections from the Existing AIM Model and press the Next button.
Select the RH_FH model definition and press Open. Press Next, enter a name for the AIM model, and press Okay. Finally, press Finish to create an AIM model for the participant's hand to automatically identify markers in successive trials from the same participant.
Within the QTM, open the project settings by pressing the gear wheel icon. In the sidebar, navigate to Skeleton Solver and press Load to select a skeleton definition file. Adjust the scale factor to 100% and press Apply.
Navigate to TSV Export and check the 3D, 6D, and Skeleton boxes in the Data Type settings. Check all the boxes in the General settings. Press Apply and close the project settings.
Press the Reprocess icon, then check the Solve Skeletons and Export to TSV File boxes and press Okay. Open a command window in the project repository and activate the conda environment by executing the indicated command. Then execute the indicated command and follow the instructions provided by the script to generate, for each frame of the trial, a hand mesh reconstructing the current hand pose.
For hand-object contact region estimates, execute the indicated command and follow the instructions provided by the script, which estimates the contact regions by computing the intersection between the hand and object meshes.
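The script's intersection-based computation is not reproduced here, but as a rough illustration of the idea, the following sketch, assuming trimesh, watertight meshes, and hypothetical file names, flags object vertices that fall inside the reconstructed hand mesh for one frame.

import trimesh

# Hypothetical file names for a single frame of the trial.
hand = trimesh.load("hand_mesh_frame_0042.obj", force="mesh")
stimulus = trimesh.load("stimulus_posed_frame_0042.obj", force="mesh")

# The containment test requires watertight meshes; object vertices inside the
# hand mesh give a coarse estimate of the contact region.
inside = hand.contains(stimulus.vertices)
print(inside.sum(), "of", len(stimulus.vertices), "object vertices flagged as contact")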
In this study, the dynamics of the grasp were recorded using 24 spherical reflective markers attached to different landmarks of the hand. Modifications to the pre-trained deep hand-mesh decoder are shown here. First, as the network is not trained on specific participants, the generic ID-dependent mesh corrective provided with the pre-trained model is employed. Further, the ID-dependent skeleton corrective is derived using the QTM Skeleton Solver. Proportional scaling of the hand with the skeleton length is assumed, and the mesh thickness is uniformly scaled by a factor derived from the relative scaling of the skeleton.
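As a worked example of that scaling, with hypothetical numbers rather than values from the study, the uniform scale factor is the ratio of the participant's solved skeleton size to the template's.

# Hypothetical bone-length sums; the real values come from the QTM Skeleton Solver
# and from the template skeleton bundled with the pre-trained model.
template_skeleton_length = 0.185     # metres
participant_skeleton_length = 0.172  # metres

scale_factor = participant_skeleton_length / template_skeleton_length
print(f"uniform mesh scale factor: {scale_factor:.3f}")  # about 0.93 here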
The final 3D hand-mesh reconstruction of the current hand pose, in the same coordinate frame as the 3D-tracked object mesh, is shown. A video shows a hand with tracked points and the co-registered mesh, all moving side by side, during a single grasp of a 3D-printed cat figurine. Also shown is a single frame at the time of hand-to-object contact from a grasp of a 3D-printed croissant, together with the hand and object mesh reconstructions and the estimated contact regions on the surface of the croissant.
The object and the markers attached to it need to be properly co-registered. Aligning them thoroughly is important because deviations can have a large impact on the contact region estimates. In addition to contact surfaces, the procedure provides Euler angles for each finger joint.
These can be used to study how hand poses during multi-digit grasps unfold over time.
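For example, a per-joint rotation for one frame can be decomposed into Euler angles with scipy; this is a sketch with a placeholder rotation, and which axis corresponds to flexion, abduction, or axial rotation depends on the skeleton definition and is not specified here.

import numpy as np
from scipy.spatial.transform import Rotation

# Placeholder: one finger joint's rotation matrix for a single frame.
joint_rotation = Rotation.from_matrix(np.eye(3))

# Three Euler angles in degrees; the anatomical meaning of each axis depends
# on how the skeleton is defined.
angles_deg = joint_rotation.as_euler("xyz", degrees=True)
print(angles_deg)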
When we grasp an object, multiple regions of the fingers and hand typically make contact with the object's surface. Reconstructing such contact regions is challenging. Here, we present a method for approximately estimating the contact regions by combining marker-based motion capture with existing deep learning-based hand mesh reconstruction.