The overall goal of this procedure is to capture high-resolution 3D video at or above real-time speed. This is accomplished by first projecting sinusoidal fringe pattern images onto the subject at high speed using a digital light processing projector. Three shifted cosine patterns are projected in sequence to achieve high accuracy.
A camera is used to capture these images from another viewing angle. The second step is to compute the wrapped phase from each set of three fringe pattern images. This is accomplished using the arctangent function and the image intensity values.
Next, the phase is unwrapped to remove the two-pi discontinuities that result from the arctangent function. The final step is to retrieve the depth from the unwrapped phase of the subject. This is the difference between the unwrapped phase maps of the subject and the calibration plane, appropriately scaled and translated by constants found using a reference object.
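The wrapped-phase step can be sketched in code. The video prepares its processing in MATLAB; this is an equivalent Python/NumPy sketch of the standard three-step phase-shifting arctangent formula, using synthetic fringe images built from a known phase ramp in place of real camera data:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Three-step phase shifting: recover the wrapped phase in (-pi, pi]
    from three fringe images shifted in phase by 2*pi/3."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: build three shifted cosine images from a known phase ramp
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)
imgs = [128 + 100 * np.cos(phi + k * 2 * np.pi / 3) for k in (-1, 0, 1)]
recovered = wrapped_phase(*imgs)
print(np.allclose(recovered, phi))  # True
```

Because the arctangent only determines the phase up to multiples of two pi, the result is "wrapped" into one period, which is why the unwrapping step described above is needed.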
Ultimately, the resulting data frames can be displayed using graphics software. The main advantage of this technique over other existing methods, like laser scanning, is that it is capable of both high resolution and high speed. Because known sinusoidal patterns are projected onto the subject, a 3D data point can be retrieved for each pixel of the camera.
With a 576 by 576 camera, we can retrieve over 300,000 3D data points per frame. Though this method has potential medical applications, such as capturing the formation of facial expressions or the beating surface of a heart, it can also be applied to numerous other areas of study. It enables high-resolution facial motion capture for use in movies and video games, or an enhanced method of video conferencing.
It could also be used to detect defects in a manufacturing environment. Visual demonstration of this method is critical, as calibration and processing steps are difficult to learn because of the visual nature of the system and its measurements.
The simplest and easiest way to detect problems is with a trained visual examination. The first step is to generate the fringe patterns that will be projected. These are prepared in advance using an image programming environment; here, MATLAB.
This video will focus on the use of binary patterns. To produce a defocused binary pattern, use a dithering technique to generate sinusoidal patterns using only pure black and pure white pixels. Make three images of the pattern shifted in phase from each other by two pi over three, as called for by the three step phase shifting algorithm.
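The pattern-generation step can be sketched as follows. The video uses MATLAB; this is a Python/NumPy sketch using ordered (Bayer) dithering, one of several dithering techniques that work here, with an assumed 640 by 480 image size and 36-pixel fringe period:

```python
import numpy as np

def bayer_matrix(n):
    """Build a 2^n x 2^n ordered-dither threshold matrix, normalized to (0, 1)."""
    m = np.array([[0, 2], [3, 1]], dtype=float)
    for _ in range(n - 1):
        m = np.block([[4 * m, 4 * m + 2], [4 * m + 3, 4 * m + 1]])
    return (m + 0.5) / m.size

def dithered_fringe(width, height, period, shift):
    """Binary (0/1) image approximating 0.5 + 0.5*cos(2*pi*x/period + shift)
    using only pure black and pure white pixels."""
    x = np.arange(width)
    target = np.tile(0.5 + 0.5 * np.cos(2 * np.pi * x / period + shift),
                     (height, 1))
    tile = bayer_matrix(3)  # 8x8 threshold tile
    thresholds = np.tile(tile, (height // 8 + 1, width // 8 + 1))[:height, :width]
    return (target > thresholds).astype(np.uint8)

# Three patterns shifted by 2*pi/3, as the three-step algorithm requires
patterns = [dithered_fringe(640, 480, 36, k * 2 * np.pi / 3) for k in range(3)]
```

When such a pattern is projected through a defocused lens, the pure black and white pixels blur together into an approximately sinusoidal intensity profile.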
In this demonstration, two additional sets of three are produced for the multi-frequency technique, which can capture sharper changes in depth. Next, select a high-speed digital light processing projector with a monochromatic setting. Use the software provided with the projector to upload the images for phase shifting.
Now, choose a black-and-white CCD or CMOS camera with the correct capture rate for the system. Keep in mind that the camera will need to capture the entire set of fringe images for each video frame. To find the distance at which the projector should be placed from the object, move the projector relative to a large flat surface until the vertical and horizontal extent of the image is slightly larger than the object to be studied.
Measure the distance from the projector to the wall. Use the desired field of view at this distance, together with the camera sensor size, to find the focal length of the lens. The last configuration step is to determine the angular separation between the projector and the camera.
At a large angle between these components, triangulation between feature points is easy, but more features get lost in shadow. At a small angle, triangulation becomes difficult, increasing noise in the results. Typically, 10 to 15 degrees is a good compromise.
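The focal-length estimate above follows from the pinhole-camera similar-triangle relation. A minimal Python sketch, using made-up illustrative numbers (6.4 mm sensor width, 1000 mm working distance, 400 mm field of view; none of these values come from the video):

```python
def focal_length_mm(sensor_mm, working_distance_mm, fov_mm):
    """Pinhole approximation (working distance >> focal length):
    fov / distance = sensor / f, so f = sensor * distance / fov."""
    return sensor_mm * working_distance_mm / fov_mm

# Example: a 6.4 mm wide sensor, 1000 mm from the subject, 400 mm field of view
print(round(focal_length_mm(6.4, 1000.0, 400.0), 1))  # 16.0 (mm)
```

A lens with the nearest standard focal length would then be chosen.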
It is best to perform calibration just before data capture. For a binary defocusing system, defocus the projection lens until the patterns at the imaging plane resemble high quality sinusoids. This might require an iterative process of examining test data and adjusting the lens.
If the fringes blur together, the projector is too defocused. If dots are visible within the pattern, the projector is too focused. Now, place a flat whiteboard in the fields of view of both the camera and the projector.
Project the first of the fringe images onto the board, then capture it with the camera. Project and record the remaining fringe images in the same way. Save these fringe images for the data processing step, labeling them as the calibration plane.
Next, place an object of known dimensions in the system's field of view. Here, a rigid foam cube covered with squares of diffuse adhesive foam is used. Project the same series of fringe images onto the cube, capturing each with the camera. Save the captured images for the processing step, labeling them as the calibration cube.
To collect data, position the subject at the focal plane of the camera. Project fringe images onto the subject and capture them. High speed is typically required for correct motion capture; at high speed, the human eye can only see the fringes in temporal interference. Use the captured images to assist with adjustments to the camera aperture to optimize the light level: the fringe images should be as bright as possible, but not saturated.
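The exposure check can be automated with a simple heuristic. The threshold values below are assumptions for an 8-bit camera, not values from the video:

```python
import numpy as np

def exposure_ok(img, min_peak=200, saturated=255):
    """Heuristic brightness check for an 8-bit frame: the brightest fringe
    pixels should be near, but strictly below, the sensor maximum."""
    peak = int(img.max())
    return min_peak <= peak < saturated

# Synthetic frame whose bright fringe columns peak at 240: well exposed
frame = np.zeros((480, 640), dtype=np.uint8)
frame[:, ::2] = 240
print(exposure_ok(frame))                                      # True
print(exposure_ok(np.full((480, 640), 255, dtype=np.uint8)))   # False (saturated)
```

In practice this check would be run on live captures while adjusting the aperture.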
The next step is post-processing of the data. In the three-step phase-shifting algorithm, the phase is the argument of the cosine function that determines the position of a point within the sinusoidal pattern. An algorithm has been implemented to determine this phase at each point from the fringe images. The computed wrapped phase lies in the interval negative pi to pi.
Apply this algorithm to the calibration plane, the cube, and the subject data. Then unwrap the phase maps using another algorithm that adds or subtracts two pi at phase jumps. In the multi-frequency technique, the wrapped phase maps for each frequency are combined to yield a single unwrapped phase map. At this point, it is important to revisit the calibration step. Take a horizontal cross section from the center of the phase map of the calibration plane.
Remove its bulk profile to obtain a phase error estimate. If the projected pattern was too focused, the error will be large. Adjust the projector lens as needed to keep the error within the range of negative 0.1 to 0.1 radians. Next, a third algorithm calculates the depth of the calibration cube. This is the difference between the calibration cube and reference plane phase maps.
From this, a scale factor is determined. The depth of the subject is found by subtracting the phase map of the reference plane from that of the subject and applying the scale factor. The data can now be saved for visualization in MATLAB or other 3D graphics software.
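The unwrapping and phase-to-depth steps above can be sketched as follows. This Python/NumPy sketch uses a one-dimensional synthetic phase ramp and a hypothetical scale factor; the video's actual implementation operates on full 2D phase maps:

```python
import numpy as np

# Scanline unwrapping: np.unwrap adds or subtracts 2*pi wherever adjacent
# samples jump by more than pi, which is the rule described above. (A real
# system would typically use a more robust 2D unwrapper on the phase maps.)
true_phase = np.linspace(0.0, 6.0 * np.pi, 200)   # smooth synthetic ramp
wrapped = np.angle(np.exp(1j * true_phase))       # forced into (-pi, pi]
unwrapped = np.unwrap(wrapped)
print(np.allclose(unwrapped, true_phase))  # True

# Phase-to-depth: subtract the reference-plane phase map and apply the
# scale factor from the calibration cube (2.0 here is a made-up value).
def depth_from_phase(phase_subject, phase_reference, scale):
    return scale * (phase_subject - phase_reference)

reference = np.zeros((4, 4))          # flat reference plane phase map
subject = reference + 1.5             # subject raised by 1.5 rad everywhere
print(depth_from_phase(subject, reference, 2.0)[0, 0])  # 3.0
```

The calibration cube's known dimensions are what pin down the scale factor in the real procedure; here it is simply asserted.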
The technique allows real-time to high-speed three-dimensional imaging of a human face at a resolution high enough to reveal fine details. The set of three images on the left is the full face displayed in 2D texture overlay, shading and lighting, and wireframe modes. In the center is a wireframe closeup of the nose area.
Note the density of points. At right is a closeup view of the region around the eye. These images were produced using sinusoidal fringe patterns. Shown here is a 3D video of the formation of a smile.
The video was captured at 60 hertz with a resolution of 640 by 480; sinusoidal fringe patterns were used. It is possible to do live 3D video capture, processing, and rendering. In this video, the 3D measurements are displayed at 30 hertz on the computer screen.
As a last example of the capabilities of this method, this shows 3D video imaging of a live rabbit heart using binary defocusing. The heart rate was approximately 200 beats per minute. The 3D capture rate was 166 hertz with a resolution of 576 by 576.
High speed was necessary to prevent motion artifacts. Once mastered, calibration, data capture, and data processing can be done in a few hours. With processing software designed for speed and a multiprocessor computer, results can be displayed on the computer screen in real time.
This technique paved the way for researchers in the field of cardiac surface mechanics to investigate the dynamic surface geometry of a beating rabbit heart using high-resolution 3D video data. After watching this video, you should have a basic understanding of how to design and operate a high-resolution, high-speed 3D video system. In particular, you should be familiar with the concepts behind digital fringe projection with defocused binary patterns and the reference-plane calibration method.
You should also be able to recognize the difference between good and bad unwrapped phase maps.