This method enables the capture of high-quality 3D videos for training and education, and it can be applied to most operating room settings with open surgery. With a robotized approach, we can quickly and accurately collect the data needed to evaluate camera positions.
The method can be used to rapidly test camera configurations for 3D vision applications, and also to investigate how different subjects perceive depth. To begin mounting the camera to the robot, first mount the lens to the camera, and then mount the camera to the adapter plate with three M2 screws.
Next, mount the circular mounting plate to the camera adapter plate with four M4 screws on the side opposite the camera. Mount each adapter plate to the robot wrist with four M2.5 screws; once all the cameras are mounted to the robot, observe that the resulting assemblies mirror each other.
For a table-mounted robot, attach the left camera to the right robot arm. Connect the USB cables to the cameras and the Ubuntu computer. On the robot touch display, press the Menu button and select Stereo 2 to start the robot application.
On the main screen of the robot application, press Go to start for 1,100 millimeters, and wait for the robot to move to the start position. Remove the protective lens caps from the cameras. Place a printed calibration grid at 1,100 millimeters from the camera sensors.
To correctly identify the corresponding squares, place a small screw nut or mark somewhere in the center of the grid. Start the recording application on the Ubuntu computer to open the interface, and then adjust the aperture and focus of each lens with the aperture and focus rings.
In the recording application, check Crosshair to visualize the crosshairs, and ensure that the crosshairs align with the calibration grid at the same position in both camera images.
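As an illustration only, the following minimal Python/OpenCV sketch overlays a centered crosshair on two USB camera streams so that the alignment with the calibration grid can be checked in both images. The camera indices and window names are assumptions, not part of the recording application.

```python
# Hypothetical crosshair overlay check, not the recording application itself.
import cv2

def draw_crosshair(frame, color=(0, 255, 0), thickness=1):
    """Draw a full-width and full-height crosshair through the image center."""
    h, w = frame.shape[:2]
    cx, cy = w // 2, h // 2
    cv2.line(frame, (cx, 0), (cx, h - 1), color, thickness)
    cv2.line(frame, (0, cy), (w - 1, cy), color, thickness)
    return frame

left_cam = cv2.VideoCapture(0)    # assumed device index of the left camera
right_cam = cv2.VideoCapture(1)   # assumed device index of the right camera

while True:
    ok_left, left = left_cam.read()
    ok_right, right = right_cam.read()
    if not (ok_left and ok_right):
        break
    cv2.imshow("left", draw_crosshair(left))
    cv2.imshow("right", draw_crosshair(right))
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

left_cam.release()
right_cam.release()
cv2.destroyAllWindows()
```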
After pressing the gear icon in the robot application, position the cameras relative to the patient. Adjust the X direction by pressing plus or minus for Hand distance from robot, and the Z direction by pressing plus or minus for Height, so that the surgical field is captured in the images. Adjust the Y direction by manually moving the robot or the patient. After pausing the surgery and informing the OR personnel that the experiment is starting, press Record in the recording application, and then press Run Experiment in the robot application.
When the experiment is finished, Done will appear on the touch display. Press Stop Recording in the recording application to stop the recording. Inform the OR personnel that the experiment has finished, and resume the surgery.
For evaluation, display the video in top-bottom 3D format with an active 3D projector. A correct evaluation video, with the right image placed at the top in top-bottom stereoscopic 3D, is shown here.
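If the left and right streams are recorded as separate files, a top-bottom video with the right image on top can be composed as in the sketch below. The file names, codec, and output-size handling are assumptions, not details from the protocol.

```python
# Hypothetical composition of a top-bottom stereoscopic video
# (right image on top), assuming two separately recorded files.
import cv2

right_cap = cv2.VideoCapture("right.mp4")  # placeholder file names
left_cap = cv2.VideoCapture("left.mp4")

writer = None
while True:
    ok_r, right = right_cap.read()
    ok_l, left = left_cap.read()
    if not (ok_r and ok_l):
        break
    h, w = right.shape[:2]
    half = h // 2
    # Squeeze each view to half height so the stacked frame keeps the original size.
    top = cv2.resize(right, (w, half))
    bottom = cv2.resize(left, (w, half))
    stacked = cv2.vconcat([top, bottom])
    if writer is None:
        fps = right_cap.get(cv2.CAP_PROP_FPS) or 30.0
        writer = cv2.VideoWriter("top_bottom_3d.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (w, 2 * half))
    writer.write(stacked)

if writer is not None:
    writer.release()
right_cap.release()
left_cap.release()
```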
A successful sequence should be sharp, in focus, and free of unsynchronized image frames, since unsynchronized video streams create blur. The convergence point should be centered horizontally, independent of the camera separation. With a very large separation between the right and left images, the brain cannot fuse them into a single 3D image.
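For intuition about why a very large separation prevents fusion, the following sketch estimates the on-screen horizontal disparity of a point in front of the convergence point for a few baselines. The focal length, depths, and comfortable-fusion threshold are assumed illustrative values, not measurements from this setup, and the formula is a small-angle approximation.

```python
# Illustrative disparity estimate for a converged stereo pair; all numbers
# below are assumed placeholder values, not measurements from this setup.

def disparity_px(baseline_mm, focal_px, depth_mm, convergence_mm):
    """Approximate horizontal disparity (in pixels) of a point at depth_mm
    when the cameras converge at convergence_mm (small-angle approximation)."""
    return focal_px * baseline_mm * (1.0 / depth_mm - 1.0 / convergence_mm)

focal_px = 1400.0        # assumed focal length in pixels
convergence_mm = 1100.0  # convergence set at the 1,100 mm working distance
fusion_limit_px = 60.0   # assumed comfortable on-screen disparity limit

for baseline_mm in (50.0, 100.0, 250.0):
    d = disparity_px(baseline_mm, focal_px,
                     depth_mm=900.0, convergence_mm=convergence_mm)
    status = "fusible" if abs(d) <= fusion_limit_px else "too large to fuse"
    print(f"baseline {baseline_mm:5.0f} mm -> disparity {d:6.1f} px ({status})")
```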
Several factors can cause the heart to be positioned off-center, such as the distance from the convergence point to the heart. For a correct configuration of the camera tool coordinate system, ensure that the crosshairs in the recording application point at the same object, that the calibration object is in the center position, and that the correct debayering format is used for the colors (see the sketch after this paragraph). 3D videos with changing baseline distances enable us to study how depth information affects anatomical understanding among medical staff.
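As an example of the debayering step, the sketch below converts a raw Bayer frame to BGR with OpenCV. The RGGB pattern, frame size, and file names are assumptions; the correct conversion code depends on the camera sensor, and a wrong choice produces false colors in the recorded video.

```python
# Hypothetical debayering example; pattern, resolution, and file names are
# assumptions that must match the actual camera sensor and recording format.
import cv2
import numpy as np

# Load one raw 8-bit Bayer frame (placeholder size and file name).
raw = np.fromfile("frame.raw", dtype=np.uint8).reshape(1080, 1920)

# Convert assuming an RGGB pattern; switch to the sensor's actual pattern
# (e.g. cv2.COLOR_BayerBG2BGR or cv2.COLOR_BayerGR2BGR) if colors look wrong.
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)
cv2.imwrite("frame_debayered.png", bgr)
```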