* These authors contributed equally
This protocol provides a method for automated tracking of eye squint in rodents over time in a manner compatible with time-locking to neurophysiological measures. This protocol is expected to be useful to researchers studying mechanisms of pain disorders such as migraine.
Spontaneous pain has been challenging to track in real time and quantify in a way that prevents human bias. This is especially true for metrics of head pain, as in disorders such as migraine. Eye squint has emerged as a continuous variable metric that can be measured over time and is effective for predicting pain states in such assays. This paper provides a protocol for the use of DeepLabCut (DLC) to automate and quantify eye squint (Euclidean distance between eyelids) in restrained mice with freely rotating head motions. This protocol enables unbiased quantification of eye squint to be paired with and compared directly against mechanistic measures such as neurophysiology. We provide an assessment of the AI training parameters necessary for achieving success, defined as discriminating squint from non-squint periods. We demonstrate the ability to reliably track and differentiate squint in a CGRP-induced migraine-like phenotype at sub-second resolution.
Migraine is one of the most prevalent brain disorders worldwide, affecting more than one billion people1. Preclinical mouse models of migraine have emerged as an informative way to study the mechanisms of migraine as these studies can be more easily controlled than human studies, thus enabling causal study of migraine-related behavior2. Such models have demonstrated a strong and repeatable phenotypic response to migraine-inducing compounds, such as calcitonin-gene-related peptide (CGRP). The need for robust measurements of migraine-relevant behaviors in rodent models persists, especially those that may be coupled with mechanistic metrics such as imaging and electrophysiological approaches.
Migraine-like brain states have been phenotypically characterized by the presence of light aversion, paw allodynia, facial hyperalgesia to noxious stimuli, and facial grimace3. Such behaviors are measured by total time spent in light (light aversion) and paw or facial touch sensitivity thresholds (paw allodynia and facial hyperalgesia) and are restricted to a single readout over large periods of time (minutes or longer). Migraine-like behaviors can be elicited in animals by dosing with migraine-inducing compounds such as CGRP, mimicking symptoms experienced by human patients with migraine3 (i.e., demonstrating face validity). Such compounds also produce migraine symptoms when administered in humans, demonstrating the construct validity of these models4. Studies in which behavioral phenotypes were attenuated pharmacologically have led to discoveries related to the treatment of migraine and provide further substantiation of these models (i.e., demonstrating predictive validity)5,6.
For example, a monoclonal anti-CGRP antibody (ALD405) was shown to reduce light-aversive behavior5 and facial grimace in mice6 treated with CGRP, and other studies have demonstrated that CGRP antagonist drugs reduce nitroglycerin-induced migraine-like behaviors in animals7,8. Recent clinical trials have shown success in treating migraine by blocking CGRP9,10, leading to multiple FDA-approved drugs targeting CGRP or its receptor. Preclinical assessment of migraine-related phenotypes has led to breakthroughs in clinical findings and is, therefore, essential to understanding some of the more complex aspects of migraine that are difficult to directly test in humans.
Despite numerous advantages, experiments using these rodent behavioral readouts of migraine are often restricted in their temporal sampling and can be subjective and prone to human experimental error. Many behavioral assays are limited in their ability to capture activity at finer temporal resolutions, often making it difficult to capture more dynamic elements that occur at a sub-second timescale, such as at the level of brain activity. It has proven difficult to quantify the more spontaneous, naturally occurring elements of behavior over time at a temporal resolution meaningful for studying neurophysiological mechanisms. Creating a way to identify migraine-like activity at faster timescales would allow for external validation of migraine-like brain states. This, in turn, could be synchronized with brain activity to create more robust brain activity profiles of migraine.
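Time-locking a behavioral trace to neurophysiology reduces, at minimum, to matching each neural event time to the nearest video frame. The sketch below illustrates one common approach (nearest-neighbor lookup on frame timestamps); the frame rate and event times are hypothetical and not taken from this protocol.

```python
from bisect import bisect_left

def nearest_frame(frame_times, t):
    """Return the index of the video frame timestamp closest to time t (s)."""
    i = bisect_left(frame_times, t)
    if i == 0:
        return 0
    if i == len(frame_times):
        return len(frame_times) - 1
    # choose the nearer of the two neighboring frames
    return i if frame_times[i] - t < t - frame_times[i - 1] else i - 1

# Example: 10 s of 30 fps video aligned to hypothetical neural event times
frame_times = [n / 30.0 for n in range(300)]
event_times = [0.51, 2.499, 9.97]
matched = [nearest_frame(frame_times, t) for t in event_times]
print(matched)
```

With synchronized clocks (e.g., a shared TTL pulse at recording onset), each matched frame index can then be used to read the squint value co-occurring with a given neural event.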
One such migraine-related phenotype, facial grimace, is utilized across various contexts as a measurement of pain in animals that can be measured instantaneously and tracked over time11. Facial grimace is often used as an indicator of spontaneous pain based on the idea that humans (especially non-verbal humans) and other mammalian species display natural changes in facial expression when experiencing pain11. Studies measuring facial grimace as an indication of pain in mice in the last decade have utilized scales such as the Mouse Grimace Scale (MGS) to standardize the characterization of pain in rodents12. The facial expression variables of the MGS include orbital tightening (squint), nose bulge, cheek bulge, ear position, and whisker change. Even though the MGS has been shown to reliably characterize pain in animals13, it is notoriously subjective and relies on accurate scoring, which can vary across experimenters. Additionally, the MGS is limited in that it utilizes a non-continuous scale and lacks the temporal resolution needed to track naturally occurring behavior across time.
One way to combat this is by objectively quantifying a consistent facial feature. Of the MGS variables (squint, nose bulge, cheek bulge, ear position, and whisker change), squint is the most consistently trackable and accounts for the majority of the total variability in the data6. Because squint contributes most to the overall score obtained using the MGS and reliably tracks the response to CGRP6,14, it is the most reliable way to track spontaneous pain in migraine mouse models. This makes squint a quantifiable, non-homeostatic behavior induced by CGRP. Several labs have used facial expression features, including squint, to represent potential spontaneous pain associated with migraine6,15.
Several challenges remain in carrying out automated squint measurement in a way that can be coupled with mechanistic studies of migraine. For example, it has been difficult to reliably track squint without relying on a fixed position that must be calibrated in the same manner across sessions. Another challenge is the ability to carry out this type of analysis on a continuous scale instead of discrete scales like the MGS. To mitigate these challenges, we aimed to integrate machine learning, in the form of DeepLabCut (DLC), into our data analysis pipeline. DLC is a pose estimation machine learning model developed by Mathis and colleagues and has been applied to a wide range of behaviors16. Using their pose estimation software, we were able to train models that could accurately predict points on a mouse eye at near-human accuracy. This solves the issue of repetitive manual scoring while also drastically increasing temporal resolution. Further, by creating these models, we have established a repeatable means to score squint and estimate migraine-like brain activity across larger experimental groups. Here, we present the development and validation of this method for tracking squint behaviors in a way that can be time-locked to other mechanistic measurements such as neurophysiology. The overarching goal is to catalyze mechanistic studies requiring time-locked squint behaviors in rodent models.
NOTE: All animals utilized in these experiments were handled according to protocols approved by the Institutional Animal Care and Use Committee (IACUC) of the University of Iowa.
1. Prepare equipment for data collection
2. Setting up DLC
3. Create the model
4. Configure the settings
NOTE: This is where details like what points to track, how many frames to extract from each training video, default labeling dot size, and variables relating to how the model will train can be defined.
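As an illustration of this step, the excerpt below sketches the relevant fields of a DLC project `config.yaml`. The bodypart names and numeric values are hypothetical placeholders, not the authors' exact settings; the keys themselves (`bodyparts`, `numframes2pick`, `dotsize`, `TrainingFraction`, `pcutoff`) are standard DLC configuration fields.

```yaml
# Illustrative excerpt of a DLC project config.yaml (values are examples only)
bodyparts:
- eye_top
- eye_bottom
numframes2pick: 20    # frames extracted from each training video
dotsize: 4            # default labeling dot size (pixels)
TrainingFraction:
- 0.95                # fraction of labeled frames used for training
pcutoff: 0.6          # likelihood threshold for plotting/filtering predictions
```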
5. Extract training frames
6. Label training frames
7. Create a training dataset
8. Evaluate the network
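Network evaluation in DLC reports the mean pixel error between human-labeled and model-predicted coordinates on train and test frames. A minimal sketch of that error metric is shown below; the point names and coordinate values are hypothetical.

```python
import math

def mean_pixel_error(labeled, predicted):
    """Mean Euclidean distance (pixels) between human-labeled and
    model-predicted (x, y) coordinates, as in a DLC-style evaluation."""
    dists = [math.hypot(lx - px, ly - py)
             for (lx, ly), (px, py) in zip(labeled, predicted)]
    return sum(dists) / len(dists)

# Hypothetical eye_top / eye_bottom labels vs. model predictions (pixels)
human = [(100.0, 50.0), (100.0, 62.0)]
model = [(101.0, 50.0), (100.0, 65.0)]
print(mean_pixel_error(human, model))  # mean of 1.0 px and 3.0 px errors
```

An error on held-out test frames comparable to the spread between human labelers is one practical criterion for judging the network adequate.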
9. Analyze data/generate labeled videos
10. Process final data
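The core computation in the final processing step is the per-frame Euclidean distance between the top and bottom eyelid points, with low-confidence frames excluded. The sketch below assumes DLC-style output of x, y, and likelihood per tracked point; the point names, cutoff, and example values are illustrative, not the authors' exact pipeline.

```python
import math

def squint_trace(top_pts, bottom_pts, likelihoods, pcutoff=0.6):
    """Per-frame eyelid aperture: Euclidean distance between the top and
    bottom eyelid points; frames below the likelihood cutoff become None."""
    trace = []
    for (tx, ty), (bx, by), p in zip(top_pts, bottom_pts, likelihoods):
        if p < pcutoff:
            trace.append(None)  # low-confidence frame, excluded from analysis
        else:
            trace.append(math.hypot(tx - bx, ty - by))
    return trace

# Three hypothetical frames: open eye, squint, and a low-confidence detection
top    = [(100.0, 40.0), (100.0, 46.0), (100.0, 44.0)]
bottom = [(100.0, 60.0), (100.0, 54.0), (100.0, 58.0)]
conf   = [0.99, 0.98, 0.30]
print(squint_trace(top, bottom, conf))  # [20.0, 8.0, None]
```

A smaller distance indicates greater orbital tightening; the resulting trace can be smoothed and aligned to neural recordings at the video frame rate.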
Here, we provide a method for the reliable detection of squint at high temporal resolution using DeepLabCut. We optimized training parameters, and we provide an evaluation of this method's strengths and weaknesses (Figure 1).
After training our models, we verified that they were able to correctly estimate the top and bottom points of the eyelid (Figure 2), which serve as the coordinate points for the Euclidean distance measure. Eu...
This protocol provides an easily accessible in-depth method for using machine-learning-based tools that can differentiate squint at near-human accuracy while maintaining the same (or better) temporal resolution of prior approaches. Primarily, it makes evaluation of automated squint more readily available to a wider audience. Our new method for evaluating automated squint has several improvements compared to previous models. First, it provides a more robust metric than ASM by utilizing fewer points that actually contribut...
We have no conflicts of interest to disclose. The views in this paper are not representative of the VA or The United States Government.
Thanks to Rajyashree Sen for insightful conversations. Thanks to the McKnight Foundation Neurobiology of Disease Award (RH), NIH 1DP2MH126377-01 (RH), the Roy J. Carver Charitable Trust (RH), NINDS T32NS007124 (MJ), Ramon D. Buckley Graduate Student Award (MJ), and VA-ORD (RR&D) MERIT 1 I01 RX003523-0 (LS).
| Name | Company | Catalog Number | Comments |
| --- | --- | --- | --- |
| CUDA toolkit 11.8 | | | |
| cuDNN SDK 8.6.0 | | | |
| Intel computers with Windows 11, 13th gen | | | |
| LabFaceX 2D Eyelid Tracker Add-on Module for a Free Roaming Mouse | FaceX LLC | NA | Any camera that can record an animal's eye is sufficient, but this is our eye tracking hardware. |
| NVIDIA GPU driver, version 450.80.02 or higher | | | |
| NVIDIA RTX A5500, 24 GB DDR6 | NVIDIA | 490-BHXV | Any GPU that meets the minimum requirements specified for your version of DLC (currently 8 GB) is sufficient. We used an NVIDIA GeForce RTX 3080 Ti GPU. |
| Python 3.9-3.11 | | | |
| TensorFlow version 2.10 | | | |