This protocol describes how to build a small and versatile video camera, and how to use videos obtained from it to train a neural network to track the position of an animal inside operant conditioning chambers. This is a valuable complement to standard analyses of data logs obtained from operant conditioning tests.
Operant conditioning chambers are used to perform a wide range of behavioral tests in the field of neuroscience. The recorded data is typically based on the triggering of lever and nose-poke sensors present inside the chambers. While this provides a detailed view of when and how animals perform certain responses, it cannot be used to evaluate behaviors that do not trigger any sensors. As such, assessing how animals position themselves and move inside the chamber is rarely possible. To obtain this information, researchers generally have to record and analyze videos. Manufacturers of operant conditioning chambers can typically supply their customers with high-quality camera setups. However, these can be very costly and do not necessarily fit chambers from other manufacturers or other behavioral test setups. The current protocol describes how to build an inexpensive and versatile video camera using hobby electronics components. It further describes how to use the image analysis software package DeepLabCut to track the status of a strong light signal, as well as the position of a rat, in videos gathered from an operant conditioning chamber. The former is a great aid when selecting short segments of interest in videos that cover entire test sessions, and the latter enables analysis of parameters that cannot be obtained from the data logs produced by the operant chambers.
In the field of behavioral neuroscience, researchers commonly use operant conditioning chambers to assess a wide range of different cognitive and psychiatric features in rodents. While there are several different manufacturers of such systems, they typically share certain attributes and have an almost standardized design1,2,3. The chambers are generally square- or rectangle-shaped, with one wall that can be opened for placing animals inside, and one or two of the remaining walls containing components such as levers, nose-poke openings, reward trays, response wheels and lights of various kinds1,2,3. The lights and sensors present in the chambers are used to both control the test protocol and track the animals’ behaviors1,2,3,4,5. Typical operant conditioning systems allow for a very detailed analysis of how the animals interact with the different operanda and openings present in the chambers. In general, any occasions where sensors are triggered can be recorded by the system, and from this data users can obtain detailed log files describing what the animal did during specific steps of the test4,5. While this provides an extensive representation of an animal’s performance, it can only be used to describe behaviors that directly trigger one or more sensors4,5. As such, aspects related to how the animal positions itself and moves inside the chamber during different phases of the test are not well described6,7,8,9,10. This is unfortunate, as such information can be valuable for fully understanding the animal’s behavior. For example, it can be used to clarify why certain animals perform poorly on a given test6, to describe the strategies that animals might develop to handle difficult tasks6,7,8,9,10, or to appreciate the true complexity of supposedly simple behaviors11,12. To obtain such detailed information, researchers commonly turn to manual analysis of videos6,7,8,9,10,11.
When recording videos from operant conditioning chambers, the choice of camera is critical. The chambers are commonly located in isolation cubicles, with protocols frequently making use of steps where no visible light is shining3,6,7,8,9. Therefore, the use of infra-red (IR) illumination in combination with an IR-sensitive camera is necessary, as it allows visibility even in complete darkness. Further, the space available for placing a camera inside the isolation cubicle is often very limited, meaning that one benefits strongly from having small cameras that use lenses with a wide field of view (e.g., fish-eye lenses)9. While manufacturers of operant conditioning systems can often supply high-quality camera setups to their customers, these systems can be expensive and do not necessarily fit chambers from other manufacturers or setups for other behavioral tests. However, a notable benefit over using stand-alone video cameras is that these setups can often interface directly with the operant conditioning systems13,14. Through this, they can be set up to only record specific events rather than full test sessions, which can greatly aid in the analysis that follows.
The current protocol describes how to build an inexpensive and versatile video camera using hobby electronics components. The camera uses a fisheye lens, is sensitive to IR illumination and has a set of IR light emitting diodes (IR LEDs) attached to it. Moreover, it is built to have a flat and slim profile. Together, these aspects make it ideal for recording videos from most commercially available operant conditioning chambers as well as other behavioral test setups. The protocol further describes how to process videos obtained with the camera and how to use the software package DeepLabCut15,16 to aid in extracting video sequences of interest as well as tracking an animal’s movements therein. This partially circumvents the drawback of using a stand-alone camera over the integrated solutions provided by manufacturers of operant conditioning systems, and offers a complement to manual scoring of behaviors.
Efforts have been made to write the protocol in a general format to highlight that the overall process can be adapted to videos from different operant conditioning tests. To illustrate certain key concepts, videos of rats performing the 5-choice serial reaction time test (5CSRTT)17 are used as examples.
All procedures that include animal handling have been approved by the Malmö-Lund Ethical committee for animal research.
1. Building the video camera
NOTE: A list of the components needed for building the camera is provided in the Table of Materials. Also refer to Figure 1, Figure 2, Figure 3, Figure 4, Figure 5.
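The protocol's own recording script is not reproduced here, but the sketch below illustrates one way such a script could be structured, assuming the picamera and RPi.GPIO libraries, a push button for starting and stopping recordings, and a colored indicator LED. The GPIO pin numbers and file naming are placeholders rather than the actual wiring used in the protocol, and control of the Bright Pi IR LED module is omitted, as it is typically configured separately.

```python
# Minimal recording sketch for a Raspberry Pi Zero camera (assumed setup).
# Pin numbers and file naming are placeholders; adapt them to the actual wiring.
import time
from datetime import datetime

import RPi.GPIO as GPIO
from picamera import PiCamera

BUTTON_PIN = 17      # push button switch (assumed BCM pin)
LED_PIN = 27         # colored indicator LED (assumed BCM pin)

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(LED_PIN, GPIO.OUT, initial=GPIO.LOW)

camera = PiCamera(resolution=(1640, 1232), framerate=30)

def wait_for_press():
    """Block until the push button is pressed (active low with pull-up)."""
    while GPIO.input(BUTTON_PIN):
        time.sleep(0.05)
    time.sleep(0.2)  # crude debounce

try:
    wait_for_press()                          # first press: start recording
    filename = datetime.now().strftime("session_%Y%m%d_%H%M%S.h264")
    GPIO.output(LED_PIN, GPIO.HIGH)           # LED on = recording in progress
    camera.start_recording(filename)
    wait_for_press()                          # second press: stop recording
    camera.stop_recording()
finally:
    GPIO.output(LED_PIN, GPIO.LOW)
    camera.close()
    GPIO.cleanup()
```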
2. Designing the operant conditioning protocol of interest
NOTE: To use DeepLabCut for tracking the protocol progression in videos recorded from operant chambers, the behavioral protocols need to be structured in specific ways, as explained below.
3. Recording videos of animals performing the behavioral test of interest
4. Analyzing videos with DeepLabCut
NOTE: DeepLabCut is a software package that allows users to define any object of interest in a set of video frames, and subsequently use these frames to train a neural network to track the objects’ positions in full-length videos15,16. This section gives a rough outline of how to use DeepLabCut to track the status of the protocol step indicator and the position of a rat’s head. Installation and use of DeepLabCut are well described in other published protocols15,16. Each step can be performed through specific Python commands or DeepLabCut’s graphical user interface, as described elsewhere15,16.
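As a point of reference, the sketch below condenses the standard DeepLabCut Python workflow from project creation to video analysis. The project name, experimenter, video paths and body-part labels are placeholders, not the configuration used in this protocol.

```python
# Condensed DeepLabCut workflow using the package's standard commands.
# Project name, experimenter, video paths and labels are placeholders.
import deeplabcut

videos = ["/data/operant/session01.mp4"]   # example path, not from the protocol

# 1. Create a project and keep the path to its config.yaml
config_path = deeplabcut.create_new_project(
    "operant_tracking", "experimenter", videos, copy_videos=True
)

# 2. Extract frames for labeling, then label them in the GUI
#    (e.g., the protocol step indicator and points on the rat's head)
deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans")
deeplabcut.label_frames(config_path)

# 3. Build the training dataset and train the network
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)

# 4. Evaluate, then analyze full-length videos;
#    save_as_csv=True writes per-frame coordinates and likelihoods
deeplabcut.evaluate_network(config_path)
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)
```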
5. Obtaining coordinates for points of interest in the operant chambers
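One possible way to obtain such coordinates, not necessarily the approach used in the original protocol, is to display a single video frame and click on the relevant landmarks, for example with OpenCV and matplotlib as sketched below. The video path is a placeholder.

```python
# Read off pixel coordinates of chamber landmarks (levers, nose-poke openings,
# reward tray) from a single video frame by clicking on them.
import cv2
import matplotlib.pyplot as plt

cap = cv2.VideoCapture("/data/operant/session01.mp4")
ok, frame = cap.read()                     # grab the first frame
cap.release()
if not ok:
    raise RuntimeError("Could not read a frame from the video")

plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
plt.title("Click the points of interest, then press Enter")
points = plt.ginput(n=-1, timeout=0)       # collect clicked points
plt.close()

for i, (x, y) in enumerate(points, start=1):
    print(f"Point {i}: x = {x:.1f}, y = {y:.1f}")
```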
6. Identifying video segments where the protocol step indicator is active
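DeepLabCut reports a likelihood value for each labeled point in every frame, so one simple approach is to threshold that value for the indicator label, as in the sketch below. The file name, the label name "indicator" and the 0.9 cutoff are placeholders that should be adapted to the trained network.

```python
# Flag frames where the protocol step indicator is detected by thresholding
# the DeepLabCut likelihood column of the analysis output.
import pandas as pd

df = pd.read_csv("session01DLC_output.csv", header=[0, 1, 2], index_col=0)
scorer = df.columns.get_level_values(0)[0]           # network/scorer name

likelihood = df[(scorer, "indicator", "likelihood")]
indicator_on = likelihood > 0.9                       # boolean Series per frame

print(f"Indicator detected in {indicator_on.sum()} of {len(df)} frames")
```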
7. Identifying video segments of interest
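Building on the same output file, consecutive frames where the indicator is detected can be grouped into segments and converted to time stamps, for example as follows. The file name, label, likelihood cutoff and 30 fps frame rate are assumptions that should match the recorded videos and the trained network.

```python
# Group frames where the indicator is detected into continuous segments and
# convert the frame indices to seconds.
import numpy as np
import pandas as pd

FPS = 30.0
df = pd.read_csv("session01DLC_output.csv", header=[0, 1, 2], index_col=0)
scorer = df.columns.get_level_values(0)[0]
on = (df[(scorer, "indicator", "likelihood")] > 0.9).to_numpy()

# Rising and falling edges of the "indicator on" signal
edges = np.diff(on.astype(int))
starts = np.where(edges == 1)[0] + 1
stops = np.where(edges == -1)[0] + 1
if on[0]:
    starts = np.insert(starts, 0, 0)
if on[-1]:
    stops = np.append(stops, len(on))

for start, stop in zip(starts, stops):
    print(f"Segment: frames {start}-{stop - 1}, "
          f"{start / FPS:.2f}-{stop / FPS:.2f} s")
```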
8. Analyzing the position and movements of an animal during specific video segments
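Once segments of interest are defined, the tracked head coordinates can be used to compute simple movement measures, such as path length or distance to a point of interest. The sketch below is one illustrative example; the file name, body-part label ("head"), segment bounds and reference coordinates are placeholders, and pixel distances can be converted to centimeters using a known chamber dimension.

```python
# Example analysis of head movement within one identified segment: total path
# length and mean distance to a point of interest (in pixels).
import numpy as np
import pandas as pd

df = pd.read_csv("session01DLC_output.csv", header=[0, 1, 2], index_col=0)
scorer = df.columns.get_level_values(0)[0]

start, stop = 1200, 1500                      # example frame range of a segment
head_x = df[(scorer, "head", "x")].to_numpy()[start:stop]
head_y = df[(scorer, "head", "y")].to_numpy()[start:stop]

# Total path length of the head within the segment
steps = np.sqrt(np.diff(head_x) ** 2 + np.diff(head_y) ** 2)
path_length_px = steps.sum()

# Mean distance to a point of interest (e.g., a nose-poke opening)
poi_x, poi_y = 320.0, 140.0                   # placeholder pixel coordinates
dist_to_poi = np.sqrt((head_x - poi_x) ** 2 + (head_y - poi_y) ** 2)

print(f"Path length: {path_length_px:.1f} px, "
      f"mean distance to POI: {dist_to_poi.mean():.1f} px")
```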
Video camera performance
The representative results were gathered in operant conditioning chambers for rats with floor areas of 28.5 cm x 25.5 cm, and heights of 28.5 cm. With the fisheye lens attached, the camera captures the full floor area and large parts of the surrounding walls, when placed above the chamber (Figure 7A). As such, a good view can be obtained, even if the camera is placed off-center on the chamber’s top. This should hold ...
This protocol describes how to build an inexpensive and flexible video camera that can be used to record videos from operant conditioning chambers and other behavioral test setups. It further demonstrates how to use DeepLabCut to track a strong light signal within these videos, and how that can be used to aid in identifying brief video segments of interest in video files that cover full test sessions. Finally, it describes how to use the tracking of a rat’s head to complement the analysis of behaviors during operan...
While materials and resources from the Raspberry Pi Foundation have been used and cited in this manuscript, the foundation was not actively involved in the preparation or use of equipment and data in this manuscript. The same is true for Pi Supply. The authors have nothing to disclose.
This work was supported by grants from the Swedish Brain Foundation, the Swedish Parkinson Foundation, and the Swedish Government Funds for Clinical Research (M.A.C.), as well as the Wenner-Gren foundations (M.A.C, E.K.H.C), Åhlén foundation (M.A.C) and the foundation Blanceflor Boncompagni Ludovisi, née Bildt (S.F).
| Name | Company | Catalog Number | Comments |
| --- | --- | --- | --- |
| 32 GB micro SD card with New Out Of Box Software (NOOBS) preinstalled | The Pi Hut (https://thepihut.com) | 32GB | |
| 330-ohm resistor | The Pi Hut (https://thepihut.com) | 100287 | This article is for a package of mixed resistors that includes 330-ohm resistors. |
| Camera module (Raspberry Pi NoIR camera v.2) | The Pi Hut (https://thepihut.com) | 100004 | |
| Camera ribbon cable (Raspberry Pi Zero camera cable stub) | The Pi Hut (https://thepihut.com) | MMP-1294 | This is only needed if a Raspberry Pi Zero is used. If another Raspberry Pi board is used, a suitable camera ribbon cable accompanies the camera component. |
| Colored LEDs | The Pi Hut (https://thepihut.com) | ADA4203 | This article is for a package of mixed-color LEDs. Any color can be used. |
| Female-female jumper cables | The Pi Hut (https://thepihut.com) | ADA266 | |
| IR LED module (Bright Pi) | Pi Supply (https://uk.pi-supply.com) | PIS-0027 | |
| Microcomputer motherboard (Raspberry Pi Zero board with presoldered headers) | The Pi Hut (https://thepihut.com) | 102373 | Other Raspberry Pi boards can also be used, although the method for automatically starting the Python script only works with the Raspberry Pi Zero. If using other models, the Python script needs to be started manually. |
| Push button switch | The Pi Hut (https://thepihut.com) | ADA367 | |
| Raspberry Pi power supply cable | The Pi Hut (https://thepihut.com) | 102032 | |
| Raspberry Pi Zero case | The Pi Hut (https://thepihut.com) | 102118 | |
| Raspberry Pi camera stand with magnetic fisheye lens and magnetic metal ring attachment (ModMyPi) | The Pi Hut (https://thepihut.com) | MMP-0310-KIT | |