To begin, arrange the equipment and install the software for the DeepLabCut, or DLC, procedure. To create the environment, navigate to the folder where the DLC software was downloaded using the change directory command, cd, followed by the folder name.
Run the first command, conda env create -f DEEPLABCUT.yaml. Then type conda activate DEEPLABCUT to enable the environment. Finally, open the graphical user interface with python -m deeplabcut.
After the interface opens, click Create New Project at the bottom of the interface. Name the project for easy identification later. Enter a name for the experimenter and check the location section to verify where the project will be saved.
Select Browse Folders to locate the videos for training the model, and choose Copy videos to project folder if the original videos should remain in their original directory. Click Create to generate a new project. After creating the project, select Edit config.yaml, followed by Edit, to open the configuration settings file. Modify the body parts to include all parts of the eye to be tracked. Adjust the number of frames to pick so that 400 total frames are obtained from the training videos.
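While the protocol uses the graphical interface, the same project-creation and configuration edits can also be scripted. The following is a minimal sketch using the DeepLabCut Python API together with PyYAML, assuming hypothetical paths, project and body-part names; the config.yaml can equally be edited in a text editor, as described above.

    import deeplabcut
    import yaml  # PyYAML, used here only to edit config.yaml programmatically

    # Hypothetical video paths; replace with the actual training videos.
    videos = ["/data/videos/mouse01.mp4", "/data/videos/mouse02.mp4"]

    # Equivalent of Create New Project in the GUI; returns the path to config.yaml.
    config_path = deeplabcut.create_new_project(
        "EyeSquint",       # hypothetical project name
        "experimenter",    # hypothetical experimenter name
        videos,
        copy_videos=True,  # keep the original files untouched in their directory
    )

    # Equivalent of Edit config.yaml in the GUI: set the eye points and frame count.
    with open(config_path) as f:
        cfg = yaml.safe_load(f)
    cfg["bodyparts"] = ["eye_top", "eye_bottom", "eye_left", "eye_right"]  # hypothetical labels
    cfg["numframes2pick"] = 400 // len(videos)  # roughly 400 frames in total across the videos
    with open(config_path, "w") as f:
        yaml.safe_dump(cfg, f, sort_keys=False)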
Change dot size to six to ensure that the default label size is small enough for accurate placement around the edges of the eye. Following configuration, navigate to the Extract Frames tab of the graphical user interface and select Extract Frames at the bottom. Navigate to the Label Frames tab and select Label Frames.
In the new window, find folders for each of the selected training videos and choose the first folder to open a new labeling interface. Label the points defined during configuration for each frame of the selected video. After labeling all frames, save the labels and repeat the process for the next video.
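For reference, the frame-extraction and labeling steps can also be launched from a Python session rather than the GUI tabs. A brief sketch, assuming a hypothetical config.yaml path:

    import deeplabcut

    config_path = "/data/EyeSquint-experimenter-2024-01-01/config.yaml"  # hypothetical

    # Equivalent of the Extract Frames tab: pull frames from the training videos.
    deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans", userfeedback=False)

    # Equivalent of the Label Frames tab: opens the same labeling interface.
    deeplabcut.label_frames(config_path)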
For accurate labeling of squint, use two points near the largest peak of the eye. To create the training dataset and begin training, navigate to the Train Network tab and select Train Network. Once network training is complete, navigate to and select Evaluate Network.
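The dataset-creation, training, and evaluation steps likewise map onto three API calls. A minimal sketch with largely default parameters, again assuming a hypothetical config.yaml path:

    import deeplabcut

    config_path = "/data/EyeSquint-experimenter-2024-01-01/config.yaml"  # hypothetical

    # Build the training dataset from the labeled frames.
    deeplabcut.create_training_dataset(config_path)

    # Train the network; the display and snapshot intervals shown are illustrative.
    deeplabcut.train_network(config_path, displayiters=1000, saveiters=50000)

    # Equivalent of Evaluate Network: compares model predictions with the manual labels.
    deeplabcut.evaluate_network(config_path, plotting=True)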
To analyze videos, navigate to the Analyze Videos tab and select Add more videos to choose the videos. Select Save results as CSV if a CSV output of the data is sufficient. Once all videos are selected, click Analyze Videos to start the analysis process.
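The video-analysis step can be scripted in the same way; a short sketch with hypothetical paths:

    import deeplabcut

    config_path = "/data/EyeSquint-experimenter-2024-01-01/config.yaml"  # hypothetical
    new_videos = ["/data/videos/test_mouse01.mp4"]                       # hypothetical

    # Equivalent of the Analyze Videos tab, with Save results as CSV enabled.
    deeplabcut.analyze_videos(config_path, new_videos, save_as_csv=True)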
Finally, apply the macros to convert the raw data into the format required for the analysis of Euclidean distance. The model accurately detected both non-squint and squint instances, marking the top and bottom eyelid points to compute Euclidean distances. Root mean square error values between manually-labeled and model-labeled points showed minimal variability after 300 frames, and the average likelihood values for correct point detection exceeded 0.95 when using 400 frames.
The confusion matrix showed a positive predictive value of 96.96% and a negative predictive value of 99.66% for squint detection.
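The macros referred to above reformat the raw tracking output for the Euclidean-distance analysis. Purely as an illustration of that computation, not the protocol's own macros, the sketch below reads a DeepLabCut CSV (written with a three-row header: scorer, bodyparts, coords) and computes the frame-by-frame distance between two hypothetical eyelid points, eye_top and eye_bottom.

    import numpy as np
    import pandas as pd

    # Hypothetical CSV produced by analyze_videos with Save results as CSV.
    df = pd.read_csv("test_mouse01DLC_model.csv", header=[0, 1, 2], index_col=0)
    scorer = df.columns[0][0]  # model/scorer name from the first header row

    # Frame-by-frame Euclidean distance between top and bottom eyelid points.
    dx = df[(scorer, "eye_top", "x")] - df[(scorer, "eye_bottom", "x")]
    dy = df[(scorer, "eye_top", "y")] - df[(scorer, "eye_bottom", "y")]
    eye_opening = np.sqrt(dx**2 + dy**2)

    print(eye_opening.describe())  # summary of eyelid opening across frames

Smaller distances correspond to squint frames, and low-confidence detections can be screened out beforehand using the likelihood column of the same CSV.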