26:43 min • July 29th, 2007
Hi, my name is Kevin W., from the Centre for the Integrative Study of Animal Behaviour here at Macquarie University in Sydney, Australia. In this video-based article, I'm going to be talking about the use of computer animations in animal behavior experiments. In particular, I'm going to be talking about how we actually fabricate one of these models.
Now, animations are widely popular in contemporary culture, but we don't see a lot of them in science or science-related research. Early attempts to build animations for science started with some very basic processes, which often involved slicing and scanning particular parts of an object or specimen, or used techniques based on biological motion, such as point-light markers, to match particular points on a body to an animation. Otherwise, if we wanted to make an animation, someone would have to build it from scratch.
Now, using animation has allowed us to study many things in animal behavior, such as mating, courtship, and, what I'll be looking at in particular here, visual communication. Using animation is a lot more sophisticated than traditional means such as live interactions, or invasive methods such as surgery. So in this article, I'm going to provide an overview of how we produce this particular model, and we're going to look at how the model is scanned.
We're going to look at how to add texture, UV mapping, bones, and weight shading, how we capture the stimulus for rotoscoping, and finally how the process is rendered until we get a complete sequence. There are eight major steps to creating the entire animation. The first step is to produce a 3D scan of the entire object.
This provides the basic shape of the object. Then we need to add the texture, which gives it a more realistic feel, and this texture is broken down into a UV map, which allows particular points of the texture to be placed exactly onto the object. Then, to manipulate the object, we add skelegons, which are then converted into bones.
Weight shading is then included to give the object an overall balanced perspective in its movement. We then need to capture the particular stimuli on which we can model the object's movement. We then rotoscope these movements on top of the images we've captured, and finally we render out the sequences into a readable format to be used for video playback. We acquired a taxidermic specimen to be used as our model.
Here we used the Konica Minolta VI-9i to reproduce a 3D object. The Konica Minolta uses digital photography and provides measurements of high accuracy by using a 3D algorithm to link photographic segments together. It produces the shape and dimensions of the model and converts the images into 3D digital data.
3D scanning takes particular segments of an actual object and places them into an object simulated for computer animation. This object is built by taking these segments and placing them in the right positions, which creates an object we can manipulate in animation software.
Here we provide a mock setup of how we photograph our object and then convert it into a 3D animated model. The object is first photographed from a variety of angles, and these photographed images are put into the correct orientation, which allows for the smoothing of connecting contours. This technique incorporates a photogrammetric system, which is used to achieve high detail and high accuracy of the object.
This system uses both coded markers and dimension-controlled scale bars to map the coordinates of reference markers. These coordinates constitute a 3D constellation that is used to accurately gauge the contours and distances between each photographed section. The data were collected using Raindrop Geomagic, which was used to acquire a single polygon mesh of the morphological shape of the model.
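To make "a single polygon mesh" concrete, here is a minimal Python sketch, with made-up coordinates, of the structure such a scan yields and of how two scanned segments can be fused; this is an illustration of the data structure, not Geomagic's actual file format.

```python
# A polygon mesh: a list of shared vertices plus polygons that index
# into it. Real scanner output is the same idea at far higher density.
vertices = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (1.0, 1.0, 0.0),  # vertex 2
    (0.0, 1.0, 0.0),  # vertex 3
]
polygons = [(0, 1, 2, 3)]  # one quad, as a tuple of vertex indices

# Fusing two scanned segments into a single mesh amounts to
# concatenating the vertex lists and re-indexing the second segment's
# polygons by the offset of the first.
def merge(verts_a, polys_a, verts_b, polys_b):
    offset = len(verts_a)
    shifted = [tuple(i + offset for i in p) for p in polys_b]
    return verts_a + verts_b, polys_a + shifted
```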
To create our animation, we chose to use a program called LightWave 3D. While there are other 3D animation programs available, we chose LightWave because of its user-friendly interface and its ability to read compatible output files. In addition, LightWave comprises two separate programs: Modeler and Layout.
The LightWave Modeler program allows for the manipulation of the object: highlighting specific polygons for changes, creating layers in the object, adding color and texture, and creating skelegons. LightWave Layout creates the scenes used to complete the animation sequence. Modeler is where the object's characteristics are built.
It is here where we add texture, UV mapping, and the initial skelegons, which will later turn into bones, and also take care of weight shading. Modeler is the predecessor to LightWave Layout, where the scenes are actually built, so it is in Modeler where all the object characteristics are initially installed into the object. LightWave Layout is the program in which you create the actual scene. Apart from the grid where the object is placed inside the X, Y, and Z planes, there are two other particular features.
You have the camera, which actually films the scene and can be placed at any angle from which you choose to view it. Then there are the lights. You can use one or many lights, and the lights help to illuminate the scene and the object, allowing you to create different aspects of illumination.
LightWave Layout provides a number of different views of the scene, up to four different perspectives at once. This is the best way to look at as many different angles of your object within the scene before the final output.
In LightWave Layout, there are three rotational axes. The first is around the X coordinate, which is the pitch; the second, around the Y coordinate, which is the heading; and the third, around the Z coordinate, which is the bank.
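As a rough mathematical sketch of what these three rotations are, here is a minimal Python version of heading, pitch, and bank as rotation matrices. The composition order shown is one common convention, not necessarily LightWave's internal one.

```python
import math

def heading(a):  # rotation about the Y axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def pitch(a):    # rotation about the X axis
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def bank(a):     # rotation about the Z axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(h, p, b):
    """Full rotation for an item (object, camera, or light)."""
    return matmul(matmul(heading(h), pitch(p)), bank(b))
```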
These three coordinates pertain to the movements with which we can manipulate not only the object but also the cameras and lights within our scene. We first selected a Jacky dragon similar in both mass and length to our taxidermic model. From here, we acquired the texture of the object by photographing the texture and patterns of this live Jacky dragon.
This lizard was photographed from various angles and positions, such as frontal, orthogonal, ventral, and dorsal, and of the various body parts, such as the whole animal, the head, body, tail, and limbs, over a white sheet of paper. We then balanced these for pure white RGB values. To acquire the right texture, we took a live lizard and photographed it from three different angles and three different positions.
The three angles were orthogonal, dorsal, and ventral, and the three positions were anterior, central, and posterior. We used a Canon EOS digital camera to photograph these lizards. The photographs were then imported into Adobe Photoshop, where the larger pieces were separated from the background.
These pieces were then matched for RGB values and white balanced so that there was no difference in color. We created an Atlas UV map in order to superimpose the texture onto the object. This Atlas UV map was created in LightWave Modeler.
An Atlas UV map separates the object into fragments comprised of connected polygons. Since the object was not a simple shape such as a cube or cylinder, the Atlas UV map divides the object into several simpler planar surfaces without 90-degree angles. However, an Atlas UV map breaks up the object into several discontinuous segments of connected polygons.
The Atlas UV map was captured using a program called Grab to create a separate JPEG image, and we then embedded this JPEG image as a background layer in Adobe Photoshop Elements. By capturing the JPEG without resizing the image, we kept the same proportions, which could then be used to map areas of the Jacky dragon onto the object.
The various photographs of the Jacky dragons were then fused together in Adobe Photoshop Elements to create whole Jacky dragons in several positions, such as frontal, orthogonal, ventral, and dorsal. Polygons were then matched to the local area on the Jacky dragon, back in LightWave Modeler. We highlighted these polygons on the Atlas UV map, which allowed us to identify the corresponding area on the Jacky dragon.
The corresponding areas of the Jacky dragon photographs were then cropped and superimposed onto these specific polygons on the background Atlas UV map JPEG. When all the photographic fragments were layered onto the Atlas UV map JPEG, the background was removed and a single TIF file was created. The TIF file was then imported back into LightWave Modeler and assigned to the UV coordinates.
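As a minimal sketch of what assigning a texture to UV coordinates means: each vertex carries a (u, v) position in [0, 1] that indexes into the texture image. The coordinates and the tiny texture below are hypothetical.

```python
uv_coords = {
    0: (0.12, 0.80),  # vertex index -> position in the texture
    1: (0.15, 0.78),
    2: (0.14, 0.74),
}

def sample(texture, u, v):
    """Nearest-neighbour pixel lookup; `texture` is a list of rows of
    RGB tuples, and v is measured from the bottom of the image."""
    height, width = len(texture), len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int((1.0 - v) * height), height - 1)
    return texture[y][x]

# Example: a tiny 2x2 "texture" of RGB values.
texture = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
print(sample(texture, *uv_coords[0]))
```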
UV mapping is where we take the segments that were photographed from the live lizard and place them onto our animated lizard, and this is done in the LightWave Modeler program. In Modeler, we use the Atlas UV map tool, which allows us to break the object up into several different segments.
By breaking it up into several different segments, we are able to take the texture acquired from the photographs and place it on top of these particular pieces. Unlike a planar or cylindrical object, objects that do not have 90-degree angles break up into several different segments. Here is a close-up of some small polygon segments on our Atlas UV map.
We can highlight these particular segments in order to see which polygons correspond to which body parts on the object. Segments from the photographs taken of a live lizard were then partitioned and placed on top of the pieces split apart using the Atlas UV map; these segments were matched and therefore overlay the texture on top of our object.
Skelegons and bones are embedded into the object and allow for its general movement and manipulation. First, in LightWave Modeler, skelegons were embedded into the object; skelegons act as placeholders for the virtual bones to be created in LightWave Layout. In our object, 61 bones were created in all.
First, a layer was opened in LightWave Modeler so the object could be viewed as a wireframe. Modeler allows us to view multiple wireframe layers, which prevents us from accidentally highlighting or moving certain polygons while creating the skelegons. In our model, we created an artificial spinal column to act as the vertebrae, from the cervical vertebrae at the neck down to the sacral vertebrae at the tip of the tail.
The skelegons here recreated the skeleton of the actual Jacky dragon, though we used only one large skelegon for the head. We then created four limbs consisting of four skelegons each; the forelimbs were fused to the thoracic vertebrae, and the hind limbs were fused to the pelvic girdle.
The skelegons were then fused together to create a hierarchical system in which the spinal column acted as the central foundation for all limb movements. After all the skelegons were created, the object was synchronized to LightWave Layout, and the skelegons were converted to bones. Each bone, like the object itself in Layout, also has three planes of rotation.
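Here is a minimal sketch of such a bone hierarchy, with illustrative names and far fewer than the 61 bones of the real model: each bone knows its parent, so rotating a spine bone carries the head and limbs with it, which is the point of the hierarchy.

```python
class Bone:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

# A short spinal chain standing in for the full column.
spine = [Bone("spine_01")]
for i in range(2, 6):
    spine.append(Bone(f"spine_{i:02d}", parent=spine[-1]))

head = Bone("head", parent=spine[-1])  # one large bone for the head

# A four-bone forelimb fused to a thoracic vertebra.
parent = spine[1]
for name in ("upper_arm", "forearm", "wrist", "digits"):
    parent = Bone(name, parent=parent)
```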
Skelegons are the predecessors to bones. Skelegons are initially created in LightWave Modeler; it is here where we install the skelegons that will later be converted into bones in LightWave Layout.
Skelegons are the initial step that gives us the flexibility to manipulate the object into different shapes and positions. First, in LightWave Modeler, we add the skelegons, which help us manipulate our object. These skelegons are set in the object as place markers to be converted into bones.
In LightWave Layout, we convert these skelegons into bones. In this diagram there is also a polygon mesh, which shows us the dimensions and the number of polygons within our object in LightWave Layout. In the next scene, you'll see how these bones operate together to help manipulate the object. Weight shading provides objects with flexible yet restricted movement.
Weight maps have a general value that ranges from -100% to +100% in the distribution of motion. For instance, independent weight maps designated to specific areas of the object need to act antagonistically to allow smooth and realistic movement of the object. A weight value with a greater deviation from 0%, which is no effect, will produce a greater effect on the movement of that particular body part.
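To make the numbers concrete, here is a minimal Python sketch, with made-up weights, of how a weight map scales a bone's influence on the vertices around it; the antagonistic negative value on the head is one way to read the negative range.

```python
tail_weights = {
    "tail_tip": 1.0,   # follows the tail bone completely
    "tail_base": 0.6,
    "torso": 0.1,      # nearly unaffected
    "head": -0.2,      # counter-moves slightly, balancing the motion
}

def displace(vertex, bone_delta, weight):
    """Move a vertex by the bone's displacement, scaled by its weight."""
    return tuple(v + weight * d for v, d in zip(vertex, bone_delta))

# A 2-unit tail-bone movement along X, applied to one vertex per region:
for region, w in tail_weights.items():
    print(region, displace((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), w))
```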
Weight shading of a particular area also affects the movement of the bones. Failure to weight properly may cause the object's movement to lag behind the bone movement, such that the bones protrude from the object when both move in the same general direction, or it may produce hyper-movement, where the object's movement supersedes the position of the bones. Here in LightWave Modeler, we split our perspective into a quad view.
This allows us to see antagonistic pairs of weight shading. To show you a close-up example of how weight shading works, what we've done first is put a weight shade on the tail. When adding weight shades to a particular part of the object, we need to add a counterweight shade to balance out the movement of the object.
Here we've added a counterweight shade on the head to balance out exaggerated movements that could be produced by the tail. To begin rotoscoping, we first needed to collect sequences from which we could model our motor patterns. We first simulated male-male interactions between captive individuals.
Males were placed in glass terrariums and filmed independently for social displays. These sequences were archived for other experiments and for use in rotoscoping. From the captured digital video footage, we selected motor pattern sequences, such as a tail flick, push-up, body rock, and slow arm wave, and exported these segments in Apple QuickTime as image sequences, which are series of consecutive JPEG files.
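We did this export in Apple QuickTime; purely as an illustrative modern equivalent, the sketch below performs the same video-to-JPEG-sequence split with OpenCV. The file and directory names are hypothetical.

```python
import os
import cv2  # OpenCV, standing in for the QuickTime export step

os.makedirs("frames", exist_ok=True)

# Split a captured display video into consecutive JPEG frames, the
# kind of image sequence the rotoscoping step consumes.
capture = cv2.VideoCapture("tail_flick.mov")
frame_number = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break  # no more frames
    cv2.imwrite(f"frames/frame_{frame_number:05d}.jpg", frame)
    frame_number += 1
capture.release()
```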
We initially filmed live animal interactions, which were acquired and saved as archival video footage for stimulus capture. We then showed this archival lizard footage to a live lizard held in an enclosure. The responses of this live lizard were recorded using a digital camcorder, and these recordings became the sequences we used for rotoscoping.
Rotoscoping is a technique in which the model is superimposed onto a background image, or series of images, that the object is intended to mimic in a frame-by-frame sequence. The LightWave Layout program is the medium where the scene is created for the animation sequence. In Layout, we can control the environment in which our animation is represented by establishing parameters for the light, camera, object, and background characteristics.
In Layout, the stimulus is also used in the final scene, and only material within the final camera view will be captured. First, the initial JPEG image is imported into the background of the camera view.
The object is then manipulated using the motion parameters of the bones, which are superimposed in front of the background image. The frame is then keyframed, which saves the position of the object and all the bones for that particular frame. The background image is then removed and replaced by the next consecutive picture in the image sequence.
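This import, pose, and keyframe cycle is the heart of rotoscoping. Below is a pseudocode sketch of the loop; the three helper functions are hypothetical stand-ins for steps performed interactively in LightWave Layout, not a real scripting API, and are stubbed out here so the control flow runs.

```python
import glob

def load_background(path):
    print("background image:", path)  # JPEG placed behind the camera view

def pose_object_and_bones(path):
    print("matching pose to:", path)  # hand-manipulation of the bones

def keyframe(n):
    print("keyframed frame", n)       # saves object + bone positions

for n, image in enumerate(sorted(glob.glob("frames/*.jpg"))):
    load_background(image)
    pose_object_and_bones(image)
    keyframe(n)
# Once every frame is keyframed, the scene is rendered to an image
# sequence or a single complete movie.
```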
The object is once again manipulated into the position and posture of the background image, and each frame is keyframed after its manipulation is complete. When the scene is completed, the sequence can be exported as an image sequence or rendered into one complete sequence. To demonstrate rotoscoping, which is the recreation of realistic movement based on video-recorded sequences, we're going to start by showing you what we normally use as the original background.
So here in this first sequence, you'll see the empty perch on which the lizard normally sits. Secondly, I'll show you the live lizard sequence that we'll use for the rotoscope. And thirdly, you'll see the animated lizard sequence that is placed on top of the live lizard.
Here I'm showing you where the object is imported into LightWave Layout. As you can see, you can separate the Layout view into a few different screens, which gives you a better view from which to manipulate the object. The most important view, however, is the one on top, which is the camera view, and you can see the safe areas designated by the rectangular boxes around the lizard.
Whatever is seen or placed within this safe area will be recorded by the camera and eventually used to render the scene. Rotoscoping is the frame-by-frame manipulation of the object on top of background images.
So what we have done here, as a step-by-step process, is export the image sequence into individual frames. We then take those individual frames and place them in the background of our animated sequence. We then move our animated object to match the positions seen in the background.
By matching it frame by frame, we are able to recreate the movement from the actual image sequence. As I mentioned previously, we need to import each sequence frame by frame in order to rotoscope our image. In this frame, we've imported the first image of the sequence into the background, which allows us to see where our object stands in front of the background image.
We can also use a bone x-ray view in LightWave Layout, which allows us to see the bones through the texture of the object. By being able to see the bones through the texture, we can manipulate the object to match the background sequence of each particular image. We then import the next consecutive image onto which we want to rotoscope our object.
And this is done again, frame by frame, through the entire consecutive sequence. Small sequences can be rendered directly out of Layout into different picture formats or directly into movie sequences. Large sequences can be rendered out using Render Farm Commander from Bruce Rayne. Render Farm Commander, or RFC, allows all computers on a local area network to reduce rendering time by distributing jobs amongst linked computers. In our laboratory, we used four Apple Mac G5 dual processors, which provide eight threads across which to distribute the rendering.
For instance, processing a sequence of 9,000 frames, the equivalent of six minutes at the PAL DV standard, can be completed in 12 hours using a single G5 processor, and reduced to four hours when distributed across eight threads, or four G5 dual processors. Using RFC for batch processing is efficient when there are no more than two large sequences; however, RFC will produce any number of individual graphic files.
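Those figures are easy to verify; PAL DV runs at 25 frames per second:

```python
# Checking the rendering figures quoted above.
frames = 9000
fps = 25
print(frames / fps / 60)                 # 6.0 -> six minutes of footage

single_hours = 12
print(single_hours * 3600 / frames)      # 4.8 seconds per frame on one G5

distributed_hours = 4
print(single_hours / distributed_hours)  # 3.0x speedup across 8 threads
```

Note that the quoted gain is threefold rather than the ideal eightfold, presumably reflecting job-distribution overhead.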
We chose to render all our sequences, both long and short, into individual JPEGs. Just to demonstrate again, we have our original sequence here, in which a lizard is doing a standard push-up body-rock display, used for social communication and aggressive interactions. And now we have our final sequence, our animated lizard, which duplicates the push-up body rock seen in the initial lizard footage.
Computer-animated stimuli using the Jacky dragon as a model.
Chapters:
0:05 Introduction
2:00 Overview of Building the Animation
3:01 3D Scanning of the Model
4:56 Animation Program: Lightwave 3D
7:40 Texture
9:14 UV Mapping
10:15 Skelegons and Bones
16:05 Weight Shading
18:08 Original Stimulus Capture
19:28 Image Sequence
19:41 Rotoscoping
24:24 Rendering the Sequence
25:39 Comparing the Original Video to the Animation
26:29 Applications for Using Video Animation
26:39 Acknowledgements