By following the methods outlined in this protocol, researchers will be able to record and analyze videos of rodents performing complex behavioral tests in operant conditioning chambers. The protocol describes how to build an inexpensive video camera and use it together with open-source tracking software. This is an attractive approach for labs on a budget.
The method is valuable for research projects that involve operant conditioning in rodents, as video analysis can greatly improve the understanding of the behaviors seen in this type of test. Begin by attaching the metal ring around the opening of the camera stand.
Then attach the camera module to the stand using the nuts and bolts that accompany the kit. Open the ribbon cable ports on the camera module and microcomputer by gently pulling on the edges of their plastic clips. Place the ribbon cable in the open port on the camera module so that the cable's silver connectors face the circuit board.
Then lock it in place by pushing in the plastic clip. Repeat the process with the port on the microcomputer. Then attach the fisheye lens to the metal ring on the camera stand.
Place the microcomputer in the plastic case and insert the listed micro SD card. Then connect a monitor, keyboard, and mouse to the microcomputer and start it by connecting its power supply. Open a terminal window, type sudo apt-get update, and press the enter key. Next, type sudo apt full-upgrade and press enter. Under the start menu, select Preferences and Raspberry Pi Configuration.
When the window opens, go to the Interfaces tab and enable the camera and I2C. Then click OK. Copy supplementary file one onto a USB memory stick. Then transfer it to the microcomputer's /home/pi folder and rename it.
Open a terminal window, type sudo nano /etc/rc.local, and press enter. Use the keyboard's arrow keys to move the cursor down to the space between fi and exit 0. Then add text to make the computer start the copied script, and the infrared LEDs, whenever it boots. Save the changes by pressing control and X, followed by Y and enter. Next, solder resistors and female jumper cables onto the legs of two colored LEDs.
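For the rc.local edit described above, the exact line depends on what you renamed the copied supplementary script to; as a sketch, assuming it was renamed camera_script.py and placed in /home/pi, the addition between fi and exit 0 might look like:

```shell
# Launch the camera script in the background at boot.
# "camera_script.py" is a placeholder name — substitute whatever
# name you gave the copied supplementary file. The script itself
# is assumed to switch on the infrared LED module via GPIO.
python3 /home/pi/camera_script.py &
```

The trailing ampersand runs the script in the background so that boot can finish; without it, rc.local would wait for the script to exit.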
Solder female jumper cables onto two button switches. Then connect the switches, colored LEDs, and the listed infrared LED module to the computer's GPIO pins. When connected properly, one LED will indicate that the camera is switched on and ready to be used, while the other indicates that the camera is recording a video. The button with the long cables is used to start and stop video recordings, while the button with the short cables is used to switch off the camera.
Set the protocol to use the operant chamber's house light as an indicator of a specific step in the protocol. Then set the protocol to record all events of interest with timestamps in relation to when this protocol step indicator becomes active. Place the camera on top of the operant chamber and start it by connecting it to an electrical outlet via the power supply cable.
Use the previously connected button to start and stop video recordings. When the video recordings are finished, connect the camera to a monitor, keyboard, mouse, and USB storage device and retrieve the video files from its desktop. Use DeepLabCut's frame-grabbing function to extract 700 to 900 video frames from one or more of the recorded videos.
Make sure that the video frames you select display the animal in different postures: both stationary, with the head inside and outside of openings, and moving in different directions. Use the labeling toolbox to manually mark the position of the rat's head in each video frame by placing a head label in a central position between the rat's ears. Also label other body parts that may be of interest.
In addition, mark the position of the protocol step indicator in each video frame where it is actively shining. Next, use the create training dataset and train network functions to create a training data set from the labeled video frames and start the training of a neural network. When the neural network has been trained, use it to analyze the gathered videos.
This will create a CSV file listing the tracked positions of the rat's head, other body parts of interest, and the protocol step indicator in each video frame. In addition, it will create marked-up video files where the tracked positions are displayed visually. To obtain coordinates of specific points of interest inside the operant chambers, manually mark these as previously described and retrieve the coordinates from the CSV file that is automatically stored under labeled-data in the project folder.
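Working with the tracking CSV can be sketched as below. This is a minimal stdlib-only example assuming DeepLabCut's documented output layout of three header rows (scorer, bodyparts, coords) followed by x, y, likelihood triplets per body part; the function and variable names are illustrative, not part of DeepLabCut itself.

```python
import csv
import io

def load_tracked_points(csv_text, bodypart):
    """Return per-frame (x, y) tuples for one body part from a
    DeepLabCut-style CSV (three header rows: scorer, bodyparts,
    coords). The column layout is an assumption based on
    DeepLabCut's documented output format."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    bodyparts, coords = rows[1], rows[2]
    x_col = next(i for i in range(1, len(bodyparts))
                 if bodyparts[i] == bodypart and coords[i] == "x")
    y_col = x_col + 1  # DeepLabCut writes x, y, likelihood triplets
    return [(float(r[x_col]), float(r[y_col])) for r in rows[3:]]

def frame_speeds(points):
    """Per-frame displacement (pixels/frame) of a tracked point."""
    return [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

# Small synthetic example in the assumed layout:
example = (
    "scorer,net,net,net\n"
    "bodyparts,head,head,head\n"
    "coords,x,y,likelihood\n"
    "0,10.0,20.0,0.99\n"
    "1,13.0,24.0,0.98\n"
)
head = load_tracked_points(example, "head")
print(head)                # [(10.0, 20.0), (13.0, 24.0)]
print(frame_speeds(head))  # [5.0]
```

The per-frame speed series is also a quick sanity check on tracking quality, since tracking errors show up as implausible jumps.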
Note in which video segments the protocol step indicator is tracked within 60 pixels of the position obtained manually in the previous section, and extract the exact starting point of each period where the indicator is active. Use the points where the protocol step indicator becomes active, together with the timestamps recorded by the operant chambers, to determine which video segments cover specific events of the test protocol, such as inter-trial intervals, responses, or reward retrievals.
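The indicator-detection step above can be sketched as follows — a minimal example assuming the indicator's tracked (x, y) positions and the manually obtained reference position are already available; the function name and 60-pixel default simply mirror the threshold given in the text.

```python
def active_segments(indicator_points, reference, max_dist=60.0):
    """Return the starting frame index of each period where the
    tracked indicator stays within max_dist pixels of the manually
    obtained reference position. Names are illustrative."""
    starts, prev_active = [], False
    for frame, (x, y) in enumerate(indicator_points):
        active = ((x - reference[0]) ** 2 +
                  (y - reference[1]) ** 2) ** 0.5 <= max_dist
        if active and not prev_active:
            starts.append(frame)  # indicator just switched on
        prev_active = active
    return starts

# Synthetic track: indicator near (100, 100) in frames 2-3 and 6.
track = [(500, 500), (480, 470), (110, 95), (105, 102),
         (490, 490), (510, 505), (98, 104)]
print(active_segments(track, (100, 100)))  # [2, 6]
```

Each returned frame index can then be aligned with the operant chamber's event timestamps to locate specific protocol events within the video.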
Note the video frames that cover any events of specific interest. Finally, perform relevant in-depth analysis of the animal's position and movements during these events. The camera's fisheye lens should allow it to capture a full view of the inside of most rodent operant conditioning chambers.
By using a suitable source of infrared illumination, the camera will also allow video capture in complete darkness. A well-trained network should allow for over 90% accuracy when tracking an animal's head. Accurate tracking is clearly identifiable by markers following the animal throughout its movements and by plotted paths appearing smooth.
In contrast, inaccurate tracking is characterized by markers that do not reliably stay on target and by jagged plotted paths. As a result, inaccurate tracking typically causes sudden shifts in calculated movement speeds. By tracking where an animal is located throughout a test session, one can assess how distinct movement patterns relate to performance.
As an example, in the five-choice serial reaction time test, head movements during the inter-trial interval can be used to separate omission trials where the animal shows limited interest in performing a response from omission trials where the animal simply fails to notice the brief light cue. In addition, investigating head movements can enable the detection and characterization of different attentional strategies.
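One simple way to operationalize the omission-trial distinction above is to compare how much the head moves during the inter-trial interval. This is an illustrative sketch only: the classification rule, function names, and threshold are assumptions to be tuned per chamber and camera geometry, not part of the published protocol.

```python
def path_length(points):
    """Total distance (pixels) traveled by a tracked point."""
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

def classify_omission(iti_head_points, movement_threshold=50.0):
    """Illustrative rule only: an omission trial with very little
    head movement during the inter-trial interval is flagged as
    'disengaged'; otherwise as 'missed cue'. The threshold is an
    assumption, not a value from the protocol."""
    if path_length(iti_head_points) < movement_threshold:
        return "disengaged"
    return "missed cue"

still = [(50, 50), (51, 50), (50, 51)]       # barely moving
scanning = [(50, 50), (120, 60), (40, 150)]  # moving around
print(classify_omission(still))     # disengaged
print(classify_omission(scanning))  # missed cue
```

In practice, one would validate any such threshold against manually scored trials before using it in an analysis.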
When attempting this procedure, it is important that the protocol step indicator is reliable and that the neural network is trained with a diverse set of video frames, to ensure accurate tracking.