Method Article
* These authors contributed equally to this work.
Here we present a protocol to individually track animals over a long period of time. It uses computer vision methods to identify a set of manually constructed tags, using a group of lobsters as a case study, and simultaneously provides information on how to house, manipulate, and mark the lobsters.
We present a protocol for a video-tracking technique based on background subtraction and image thresholding that makes it possible to individually track cohoused animals. We tested the tracking routine with four cohoused Norway lobsters (Nephrops norvegicus) under light-darkness conditions for 5 days. The lobsters had been individually tagged. The experimental setup and the tracking techniques used are entirely based on open source software. The comparison of the tracking output with a manual detection indicates that the lobsters were correctly detected 69% of the time. Among the correctly detected lobsters, their individual tags were correctly identified 89.5% of the time. Considering the frame rate used in the protocol and the movement rate of the lobsters, the performance of the video tracking is of good quality, and the representative results support the validity of the protocol in producing valuable data for research needs (individual space occupancy or locomotor activity patterns). The protocol presented here can be easily customized and is, hence, transferable to other species where the individual tracking of specimens in a group can be valuable for answering research questions.
In the last few years, automated image-based tracking has provided highly accurate datasets that can be used to explore basic questions in the disciplines of ecology and behavior1. These datasets can be used for the quantitative analysis of animal behavior2,3. However, each image methodology used for tracking animals and evaluating behavior has its strengths and limitations. In image-based tracking protocols that use spatial information from previous frames in a video to track animals4,5,6, errors can be introduced when the paths of two animals cross. These errors are generally irreversible and propagate through time. Despite computational advances that reduce or almost eliminate this problem5,7, these techniques still need homogeneous experimental environments for accurate animal identification and tracking.
The use of marks that can be uniquely identified in animals avoids these errors and allows the long-term tracking of identified individuals. Widely used markers (e.g., barcodes and QR codes) exist in industry and commerce and can be identified using well-known computer vision techniques, such as augmented reality (e.g., ARTag8) and camera calibration (e.g., CALTag9). Tagged animals have previously been used for high-throughput behavioral studies in different animal species, for example, ants3 or bees10, but some of these previous systems are not optimized for recognizing isolated tags3.
The tracking protocol presented in this paper is especially suitable for tracking animals in one-channel imagery, such as infrared (IR) light or monochromatic light (in our case, blue light). The method therefore does not rely on color cues and is also applicable to other settings where the illumination is constrained. In addition, we use customized tags designed so as not to disturb the lobsters while, at the same time, allowing recording with low-cost cameras. Moreover, the method used here is based on frame-independent tag detection (i.e., the algorithm recognizes the presence of each tag in the image regardless of the previous trajectories). This feature is relevant in applications where animals can be temporarily occluded or their trajectories may intersect.
The tag design allows its use in different groups of animals. Once the parameters of the method are set, it could be transferred to other animal-tracking problems (e.g., other crustaceans or gastropods) without the need to train a specific classifier. The main limitations of exporting the protocol are the size of the tag, the need for attachment to the animal (which makes it unsuitable for small insects, such as flies or bees), and the assumption of 2D animal movement. This last constraint is significant, given that the proposed method assumes the tag size remains constant. An animal moving freely in a 3D environment (e.g., fish) would show different tag sizes depending on its distance to the camera.
The purpose of this protocol is to provide a user-friendly methodology for tracking multiple tagged animals over a long period of time (i.e., days or weeks) in a 2D context. The methodological approach is based on the use of open source software and hardware. Free and open source software permits adaptations, modifications, and free redistribution; therefore, the generated software improves at each step11,12.
The protocol presented here focuses on a laboratory setup to track and evaluate the locomotor activity of four aquatic animals in a tank for 5 days. Video files are recorded as 1 s time-lapse images and compiled into a video at 20 frames per second (1 recorded day occupies approximately 1 h of video). All video recordings are automatically postprocessed to obtain animal positions by applying computer vision methods and algorithms. The protocol makes it possible to obtain large amounts of tracking data while avoiding manual annotation, which has been shown to be time-intensive and laborious in previous experimental papers13.
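As an illustration, the following minimal Python/OpenCV sketch shows how 1 s time-lapse frames could be compiled into a 20 fps video. The file names, path, and codec are assumptions rather than the authors' exact recording pipeline (acquisition in the protocol relies on surveillance software such as iSPY or ZoneMinder), and the snippet uses the modern cv2 API (OpenCV 3+); OpenCV 2.4 exposes the FOURCC helper as cv2.cv.CV_FOURCC instead.

```python
# Minimal sketch: compile 1 s time-lapse frames into a 20 fps video.
# "timelapse/*.png" and "day01.avi" are hypothetical placeholders.
import glob
import cv2

frame_paths = sorted(glob.glob("timelapse/*.png"))  # one image per second
first = cv2.imread(frame_paths[0])
height, width = first.shape[:2]

writer = cv2.VideoWriter("day01.avi", cv2.VideoWriter_fourcc(*"XVID"),
                         20.0, (width, height))
for path in frame_paths:
    writer.write(cv2.imread(path))
writer.release()
# At 20 fps, the 86,400 one-second frames of a full day play back in
# ~72 min, matching the "1 recorded day ~ 1 h of video" figure above.
```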
We use the Norway lobster (Nephrops norvegicus) for the case study; thus, we provide species-specific laboratory conditions to maintain them. Lobsters perform well-studied burrow emergence rhythms that are under the control of the circadian clock14,15, and, when cohoused, they form a dominance hierarchy16,17. Hence, the model presented here is a good example for researchers interested in the social modulation of behavior with a specific focus on circadian rhythms.
The methodology presented here is easily reproduced and can be applied to other species whenever it is possible to distinguish between animals with individual tags. The minimum requirements for reproducing such an approach in the laboratory are (i) isothermal rooms for the experimental setup; (ii) a continuous water supply; (iii) water temperature control mechanisms; (iv) a light control system; and (v) a USB camera and a standard computer.
In this protocol, we use Python18 and OpenCV19 (Open Source Computer Vision Library). We rely on fast and commonly applied operations (both in terms of implementation and execution), such as background subtraction20 and image thresholding21,22.
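To make these two operations concrete, here is a minimal sketch of background subtraction against a running-average background followed by binary thresholding, in the spirit of the approach the protocol relies on; the learning rate and threshold value are illustrative assumptions, not the protocol's tuned parameters, and "day01.avi" is a hypothetical input file.

```python
# Minimal sketch: running-average background subtraction + thresholding.
import cv2
import numpy as np

cap = cv2.VideoCapture("day01.avi")  # hypothetical compiled time-lapse video
ret, frame = cap.read()
# Float32 running-average background model, seeded with the first frame
background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

while ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Update the background with a small learning rate, so it behaves
    # roughly like the mean of the most recent frames
    cv2.accumulateWeighted(gray, background, 0.01)
    # Foreground = absolute difference between frame and background
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    # Keep only pixels that changed markedly (threshold value is assumed)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    ret, frame = cap.read()
cap.release()
```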
The species used in this study is not an endangered or protected species. Sampling and laboratory experiments followed the Spanish legislation and internal institutional (ICM-CSIC) regulations regarding animal welfare. Animal sampling was conducted with the permission of the local authority (Regional Government of Catalonia).
1. Animal Maintenance and Sampling
NOTE: The following protocol is based on the assumption that researchers can sample N. norvegicus in the field during the night to avoid damage to the photoreceptors23. Exposure of N. norvegicus to sunlight must be avoided. After sampling, the lobsters should be housed in an acclimation facility similar to the one reported on previously17,24, with a continuous flow of refrigerated seawater (13 °C). The animals used in this study are males in the intermoult state with a cephalothorax length (CL; mean ± SD) of 43.92 ± 2.08 mm (N = 4).
Figure 1: Acclimation facility views. (a) Tank shelves. (a1) Seawater input. (a2) Fluorescent ceiling lights. (b) Detail of the blue light illumination. (c) Detail of an animal cell. (d) Detail of the isolated facility control panel. (e) Temperature setting for one of the entrances.
2. Tag Construction
NOTE: The tag used here can be changed according to the characteristics of the target animal or other specific considerations.
Figure 2: The four tags used for the individual tagging of the lobsters. Circle, circle-hole, triangle, triangle-hole.
3. Experimental Setup
NOTE: The experimental arena should be located in an experimental chamber independent from, but in close proximity to, the acclimation facility.
Figure 3: Experimental setup. (a) Diagram of the assembly of the experimental tank and video acquisition. (b) General view of the experimental tank. (c) Bottom view of the experimental tank, indicating the artificial burrows. (d) Top view, showing the bottom of the experimental tank. (e) Detail of one of the burrow entrances.
4. Experimental Trial and Animal Preparation
NOTE: All steps with animals must be done in the acclimation facility and under red light conditions according to the spectral sensitivity of the Norway lobster25. When moving the animals between the acclimation and the experimental facility, avoid any exposure of the lobsters to light, using an opaque black bag to cover the icebox.
Figure 4: Raw video frame. An example of a representative frame from one of the time-lapse videos collected during the experiments. In the upper right corner, we show the time stamp with the date, time, and frame. Notice the differences in the tank illumination in the lower corner of the image.
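A stamp like the one in Figure 4 could be burned into each frame with OpenCV's putText; the following sketch is a hypothetical illustration in which the position, font, and text format are assumed, and the wall-clock time stands in for the capture time.

```python
# Hedged sketch: draw a date/time/frame stamp on a frame (cf. Figure 4).
import cv2
from datetime import datetime

def stamp(frame, frame_number):
    text = "%s  frame %06d" % (
        datetime.now().strftime("%Y-%m-%d %H:%M:%S"), frame_number)
    # Draw in the upper right corner, as in the representative frame;
    # offsets and font scale are illustrative assumptions
    cv2.putText(frame, text, (frame.shape[1] - 420, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2)
    return frame
```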
5. Video Analysis Script
6. Computer Vision Script for Video Analysis
NOTE: The script avoids fisheye image correction because it does not introduce a relevant error in the experimental setup. Nonetheless, it is possible to correct this with OpenCV camera calibration functions29 based on vector and matrix rotation methods30,31.
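Should distortion need correcting in another setup, a standard OpenCV undistortion could look like the following minimal sketch; the camera matrix and distortion coefficients below are placeholders that would come from a prior calibration (e.g., cv2.calibrateCamera with a chessboard pattern), not measured values for the lens used here.

```python
# Minimal sketch: undistort a frame with calibration parameters.
import cv2
import numpy as np

# Placeholder intrinsics (fx, fy, cx, cy) and distortion coefficients
# (k1, k2, p1, p2, k3) -- these must be obtained by calibration.
camera_matrix = np.array([[800.0,   0.0, 640.0],
                          [  0.0, 800.0, 360.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])

frame = cv2.imread("raw_frame.png")  # hypothetical input frame
undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
cv2.imwrite("undistorted_frame.png", undistorted)
```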
Figure 5: Relevant steps of the video-processing script. (1) Compute the background subtraction against the mean of the last 100 frames. (2) Result of the background subtraction algorithm. (3) Apply a dilate morphological operation to the white-detected areas. (4) Apply the fixed, static main ROI; the yellow polygon corresponds to the bottom tank area. (5) Calculate contours for each white-detected region in the main ROI and perform a structural analysis for each detected contour. (6) Check the structural property values and, then, select second-level ROI candidates. (7) Binarize the frame using an Otsu thresholding algorithm; the script works only with second-level ROIs. (8) For each binarized second-level ROI, calculate the contours of the white regions and perform a structural analysis for each detected contour. (9) Check the structural property values and, then, select internal ROI candidates. (10) For each contour in the internal ROI candidates, calculate the descriptors/moments. (11) Check whether the detected shape matches the model shape and approximate a polygon to the best-match candidates. (12) Check the number of vertices of the approximated polygon and determine the geometric figure: circle or triangle. (13) Calculate the figure center and check whether black pixels occur; if yes, it is a holed figure. (14) Visual result after the frame analysis.
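A condensed sketch of steps 3-13 of this pipeline is given below, using the modern cv2 API (OpenCV 2.4 names some constants differently, e.g., the matchShapes method flag). The ROI polygon, area bounds, polygon-approximation tolerance, and shape-match threshold are illustrative assumptions; `gray` and `mask` stand for the current grayscale frame and the foreground mask produced by the background-subtraction step.

```python
# Condensed sketch of Figure 5, steps 3-13 (parameter values assumed).
import cv2
import numpy as np

def classify_tags(gray, mask, roi_polygon, model_contour):
    """Return a list of (cx, cy, label) for tags found inside the main ROI.

    roi_polygon: Nx2 int32 array of vertices of the main ROI.
    model_contour: a reference contour of the model tag shape.
    """
    # (3) Dilate the white-detected areas so fragmented blobs merge
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)
    # (4) Restrict the analysis to the fixed, static main ROI (tank bottom)
    roi_mask = np.zeros_like(mask)
    cv2.fillPoly(roi_mask, [roi_polygon], 255)
    mask = cv2.bitwise_and(mask, roi_mask)
    # (5-6) Contours of candidate regions, filtered by area (bounds assumed)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    tags = []
    for c in contours:
        if not 500 < cv2.contourArea(c) < 5000:
            continue
        x, y, w, h = cv2.boundingRect(c)
        # (7) Otsu binarization, applied only inside the second-level ROI
        _, roi_bin = cv2.threshold(gray[y:y + h, x:x + w], 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # (8-11) Inner contours matched against the model tag shape
        inner, _ = cv2.findContours(roi_bin.copy(), cv2.RETR_LIST,
                                    cv2.CHAIN_APPROX_SIMPLE)
        for ic in inner:
            m = cv2.moments(ic)
            if m["m00"] == 0:
                continue
            if cv2.matchShapes(ic, model_contour,
                               cv2.CONTOURS_MATCH_I1, 0) > 0.1:
                continue
            # (12) Vertex count of the approximated polygon: triangle/circle
            approx = cv2.approxPolyDP(ic, 0.03 * cv2.arcLength(ic, True), True)
            shape = "triangle" if len(approx) == 3 else "circle"
            # (13) A black pixel at the figure center marks a holed tag
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            if roi_bin[cy, cx] == 0:
                shape += "-hole"
            tags.append((x + cx, y + cy, shape))
    return tags
```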
We manually annotated a subset of the experimental data to validate the automated video analysis. A sample of 1,308 frames was randomly selected, corresponding to a confidence level of 99% (a measure of how reliably the sample reflects the population, within its margin of error) and a margin of error of 4% (a percentage that describes how close the sample estimate is to the real value in the population), and a manual annotation of the correct...
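Once such a manually annotated sample exists, the two validation metrics reported in this work reduce to simple proportions; the annotation format in this sketch is a hypothetical example, not the authors' actual file layout.

```python
# Hedged sketch: detection rate and tag-identification accuracy from a
# manually annotated sample. Each entry records whether the lobster was
# detected and, if so, whether its tag was correctly identified.
annotations = [
    (True, True), (True, False), (False, False),  # ... 1,308 frames in total
]

detected = [a for a in annotations if a[0]]
detection_rate = 100.0 * len(detected) / len(annotations)
tag_accuracy = 100.0 * sum(1 for a in detected if a[1]) / len(detected)
print("Detection: %.1f%%  Tag identification: %.1f%%"
      % (detection_rate, tag_accuracy))
```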
The performance and representative results obtained with the video-tracking protocol confirmed its validity for applied research in the field of animal behavior, with a specific focus on the social modulation and circadian rhythms of cohoused animals. The efficiency of animal detection (69%) and the accuracy of tag discrimination (89.5%), coupled with the behavioral characteristics (i.e., movement rate) of the target species used here, suggest that this protocol is a perfect solution for long-term experimental trials (e.g., da...
The authors have nothing to disclose.
The authors are grateful to Dr. Joan B. Company, who funded the publication of this work. The authors are also grateful to the technicians of the experimental aquarium zone at the Institute of Marine Sciences in Barcelona (ICM-CSIC) for their help during the experimental work.
This work was supported by the RITFIM project (CTM2010-16274; principal investigator: J. Aguzzi), funded by the Spanish Ministry of Science and Innovation (MICINN), and by the TIN2015-66951-C2-2-R grant from the Spanish Ministry of Economy and Competitiveness.
Name | Company | Catalog Number | Comments |
Tripod 475 | Manfrotto | A0673528 | Discontinued |
Articulated Arm 143 | Manfrotto | D0057824 | Discontinued |
Camera USB 2.0 uEye LE | iDS | UI-1545LE-M | https://en.ids-imaging.com/store/products/cameras/usb-2-0-cameras/ueye-le.html |
Fisheye Lens C-mount f = 6 mm/F1.4 | Infaimon | Standard Optical | https://www.infaimon.com/es/estandar-6mm |
Glass Fiber Tank, 1,500 x 700 x 300 mm | | | |
Black Felt Fabric | | | |
Wood Structure Tank | | | 5 wood strips, 50 x 50 x 250 mm |
Wood Structure Felt Fabric | | | 10 wood strips, 25 x 25 x 250 mm |
Stainless Steel Screws | | | As many as necessary to fix the wood strip structures |
PC | | | 2-core CPU, 4 GB RAM, 1 GB graphics, 500 GB HDD |
External Storage HDD | | | 2 TB capacity desirable |
iSPY Software for Windows PC | iSPY | | https://www.ispyconnect.com/download.aspx |
ZoneMinder Software for Linux PC | ZoneMinder | | https://zoneminder.com/ |
OpenCV 2.4.13.6 Library | OpenCV | | https://opencv.org/ |
Python 2.4 | Python | | https://www.python.org/ |
Camping Icebox | | | |
Plastic Tray | | | |
Cyanoacrylate Gel | | | To glue the tags |
1 Black PVC Plastic Sheet (1 mm thickness) | | | Tag construction |
1 White PVC Plastic Sheet (1 mm thickness) | | | Tag construction |
4 Tags, Ø 40 mm | | | Made from the black and white PVC plastic sheets |
3 m Blue LED Strip Lights (480 nm) | | | Waterproof desirable |
3 m IR LED Strip Lights (850 nm) | | | Waterproof desirable |
6 m Methacrylate Pipes, Ø 15 mm | | | To enclose the LED strips |
4 PVC Elbows 45°, Ø 63 mm | | | Burrow construction |
3 m Flexible PVC Pipe, Ø 63 mm | | | Burrow construction |
4 PVC Screwcaps, Ø 63 mm | | | Burrow construction |
4 O-rings, Ø 63 mm | | | Burrow construction |
4 Female PVC Sockets, glue/thread, Ø 63 mm | | | Burrow construction |
10 m DC 12 V Electric Cable | | | Light control mechanism |
Light Power Supply, DC 12 V, 300 W | | | Light control mechanism |
MOSFET RFD14N05L, N-Channel, 14 A, 50 V, 3-Pin, IPAK (TO-251) | RS Components | 325-7580 | Light control mechanism |
Diode 1N4004-E3/54, 1 A, 400 V, DO-204AL, 2-Pin | RS Components | 628-9029 | Light control mechanism |
Fuse Holder | RS Components | 336-7851 | Light control mechanism |
2-Way Power Terminal, 3.81 mm | RS Components | 220-4658 | Light control mechanism |
Capacitor 220 µF, 200 V | RS Components | 440-6761 | Light control mechanism |
Resistor 2.2 kΩ, 7 W | RS Components | 485-3038 | Light control mechanism |
Fuse 6.3 x 32 mm, 3 A | RS Components | 413-210 | Light control mechanism |
Arduino Uno (Atmel ATmega328 MCU board) | RS Components | 715-4081 | Light control mechanism |
Prototype Board CEM3, RE310S2 | RS Components | 728-8737 | Light control mechanism |
DC/DC Converter, 12 V in, ±5 V out, 100 mA, 1 W | RS Components | 689-5179 | Light control mechanism |
2 SERA T8 Blue Moonlight Fluorescent Bulbs, 36 W | SERA | | Discontinued; lighting for the light-isolated facility |