39 result(s) for "DeepLabCut"
Establishment and preliminary application of object recognition system based on DeepLabCut
This study aimed to develop a DeepLabCut (DLC)-based object recognition analysis system for assessing rodent cognitive function and validate its application in natural aging and elderly periodontitis mouse models. The system’s hardware was constructed with a custom arena and high-definition industrial camera, and the DLC deep learning algorithm was trained to track five mouse body landmarks, enabling automatic quantification of 36 indicators across three categories: sniffing frequency, exploration duration, and novelty preference. The system subdivided exploratory behaviors by calibrating nose tip and body center, and set dynamic distance thresholds (1 cm, 1.5 cm, 2 cm) for the nose tip to capture fine-grained exploration. In the novel object recognition (NOR) and object location recognition (OLR) paradigms, traditional visual inspection failed to detect significant cognitive differences between young and aged mice, while the DLC system identified marked reductions in aged mice in the frequency and duration of body center and combined nose tip-body center exploration of the new object (2 cm away from the object), as well as corresponding novelty preference indices. In the elderly periodontitis models, traditional metrics showed increased nose tip exploration of the old object (2 cm away from the object) and reduced novelty preference in model mice; the DLC system further detected significantly elevated nose tip exploration frequency toward the old object (1.5 cm away from the object), accompanied by decreased frequency preference for exploration (1 cm away from the object). Collectively, this DLC-based system achieves sensitive, precise, and multidimensional quantification of mouse exploratory behavior, effectively distinguishing cognitive characteristics of aged and disease model mice. 
By overcoming the limitations of traditional methods, it captures subtle cognitive changes in aging and periodontitis models, screens key indicators for cognitive decline, and provides comprehensive behavioral evidence for elucidating the neural mechanisms underlying aging- and inflammation-associated cognitive impairment.
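The distance-threshold scoring described in the abstract above can be sketched roughly as follows; the pixel-to-cm scale, landmark layout, and toy trajectory are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: classify nose-tip "exploration" frames by distance to an
# object center at the study's 1 cm / 1.5 cm / 2 cm thresholds.
PX_PER_CM = 10.0  # assumed camera calibration (pixels per centimeter)

def exploration_frames(nose_xy, object_xy, threshold_cm):
    """Boolean mask of frames where the nose tip lies within threshold_cm
    of the object center (inputs in pixels)."""
    dist_px = np.linalg.norm(np.asarray(nose_xy) - np.asarray(object_xy), axis=1)
    return dist_px / PX_PER_CM <= threshold_cm

# Toy trajectory: the nose approaches an object centered at (50, 50) px.
nose = np.array([[80.0, 50.0], [65.0, 50.0], [58.0, 50.0], [54.0, 50.0]])
obj = np.array([50.0, 50.0])
for t_cm in (1.0, 1.5, 2.0):
    n = exploration_frames(nose, obj, t_cm).sum()
    print(f"{t_cm} cm threshold: {n} exploration frames")
```

Counting such masked frames per threshold gives the frequency and duration indicators the abstract quantifies.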
Applications and limitations of current markerless motion capture methods for clinical gait biomechanics
Markerless motion capture has the potential to perform movement analysis with reduced data collection and processing time compared to marker-based methods. This technology is now starting to be applied in clinical and rehabilitation settings, and it is therefore crucial that users of these systems understand both their potential and limitations. This literature review aims to provide a comprehensive overview of the current state of markerless motion capture for both single-camera and multi-camera systems. Additionally, this review explores how practical applications of markerless technology are being used in clinical and rehabilitation settings, and examines the future challenges and directions markerless research must explore to facilitate full integration of this technology within clinical biomechanics. A scoping review is needed to examine this emerging broad body of literature and determine where gaps in knowledge exist; this is key to developing motion capture methods that are cost-effective and practically relevant to clinicians, coaches and researchers around the world. Literature searches were performed to examine studies that report the accuracy of markerless motion capture methods, explore current practical applications of markerless motion capture in clinical biomechanics, and identify gaps in our knowledge that are relevant to future developments in this area. Markerless methods increase the versatility of motion capture data, enabling datasets to be re-analyzed using updated pose estimation algorithms, and may even provide clinicians with the capability to collect data while patients are wearing normal clothing. While markerless temporospatial measures generally appear to be equivalent to marker-based motion capture, joint center locations and joint angles are not yet sufficiently accurate for clinical applications. 
Pose estimation algorithms are approaching error rates similar to those of marker-based motion capture; however, without comparison to a gold standard, such as bi-planar videoradiography, the true accuracy of markerless systems remains unknown. Current open-source pose estimation algorithms were never designed for biomechanical applications; therefore, the datasets on which they have been trained are inconsistently and inaccurately labelled. Improvements to the labelling of open-source training data, as well as assessment of markerless accuracy against gold-standard methods, will be vital next steps in the development of this technology.
Real-time, low-latency closed-loop feedback using markerless posture tracking
The ability to control a behavioral task or stimulate neural activity based on animal behavior in real time is an important tool for experimental neuroscientists. Ideally, such tools are noninvasive, low-latency, and provide interfaces to trigger external hardware based on posture. Recent advances in pose estimation with deep learning allow researchers to train deep neural networks to accurately quantify a wide variety of animal behaviors. Here, we provide a new DeepLabCut-Live! package that achieves low-latency real-time pose estimation (within 15 ms, >100 FPS), with an additional forward-prediction module that achieves zero-latency feedback, and a dynamic-cropping mode that allows for higher inference speeds. We also provide three options for using this tool with ease: (1) a stand-alone GUI (called DLC-Live! GUI), and integration into (2) Bonsai and (3) AutoPilot. Lastly, we benchmarked performance on a wide range of systems so that experimentalists can easily decide what hardware is required for their needs.
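The forward-prediction idea can be illustrated with a constant-velocity extrapolation over the known processing latency; this is a simplified stand-in (the actual DeepLabCut-Live! predictor may differ), and the frame rate and latency values are taken from the abstract's benchmarks only for illustration.

```python
import numpy as np

def forward_predict(p_prev, p_curr, dt_frame, latency):
    """Linearly extrapolate a keypoint `latency` seconds past the current
    frame, assuming constant velocity between the last two frames."""
    velocity = (np.asarray(p_curr) - np.asarray(p_prev)) / dt_frame
    return np.asarray(p_curr) + velocity * latency

# A keypoint moving +2 px/frame at 100 FPS, compensated for 15 ms latency.
prev, curr = np.array([100.0, 50.0]), np.array([102.0, 50.0])
print(forward_predict(prev, curr, dt_frame=0.01, latency=0.015))  # → [105.  50.]
```

Triggering hardware on the extrapolated rather than the measured pose is what lets feedback appear "zero-latency" to the animal.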
Moving outside the lab: Markerless motion capture accurately quantifies sagittal plane kinematics during the vertical jump
Markerless motion capture using deep learning approaches has the potential to revolutionize the field of biomechanics by allowing researchers to collect data outside of the laboratory environment, yet questions remain regarding the accuracy and ease of use of these approaches. The purpose of this study was to apply a markerless motion capture approach to extract lower-limb angles in the sagittal plane during the vertical jump and to evaluate agreement between the custom-trained model and gold-standard motion capture. We performed this study using a large open-source dataset (N = 84) that included synchronized commercial video and gold-standard motion capture. We split these data into a training set for model development (n = 69) and a test set to evaluate performance relative to gold-standard motion capture using the coefficient of multiple correlations (CMC) (n = 15). We found very strong agreement between the custom-trained markerless approach and marker-based motion capture within the test set across the entire movement (CMC > 0.991, RMSE < 3.22°), with at least strong CMC values across all trials for the hip (0.853 ± 0.23), knee (0.963 ± 0.471), and ankle (0.970 ± 0.055). The strong agreement between markerless and marker-based motion capture provides evidence that markerless motion capture is a viable tool to extend data collection outside of the laboratory. As biomechanical research struggles with representative sampling practices, markerless motion capture has the potential to move biomechanical research away from traditional laboratory settings into venues convenient to populations that are undersampled, without sacrificing measurement fidelity.
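Of the two agreement metrics reported above, the RMSE is simple to sketch; the synthetic knee-angle waveforms below are illustrative stand-ins for the study's data (CMC, the fuller waveform-similarity measure, is omitted here).

```python
import numpy as np

def rmse_deg(theta_markerless, theta_markerbased):
    """Root-mean-square error (degrees) between two time-normalized
    joint-angle waveforms of equal length."""
    diff = np.asarray(theta_markerless) - np.asarray(theta_markerbased)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy sagittal-plane knee-angle curves (degrees) over a normalized jump cycle.
t = np.linspace(0, 1, 101)
gold = 60 * np.sin(np.pi * t)       # stand-in for marker-based angles
markerless = gold + 2.0             # constant 2-degree offset
print(rmse_deg(markerless, gold))   # → 2.0
```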
A Deep Learning-Based Approach to Video-Based Eye Tracking for Human Psychophysics
Real-time gaze tracking provides crucial input to psychophysics studies and neuromarketing applications. Many modern eye-tracking solutions are expensive, mainly because of high-end hardware specialized for processing infrared-camera images. Here, we introduce a deep learning-based approach that uses the video frames of low-cost web cameras. Using DeepLabCut (DLC), an open-source toolbox for extracting points of interest from videos, we obtained facial landmarks critical to gaze location and estimated the point of gaze on a computer screen via a shallow neural network. Tested for three extreme poses, this architecture reached a median error of about one degree of visual angle. Our results contribute to the growing field of deep-learning approaches to eye tracking, laying the foundation for further investigation by researchers in psychophysics or neuromarketing.
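The landmark-to-gaze regression step can be illustrated with an ordinary least-squares fit on synthetic data; the paper uses a shallow neural network, so the linear map below, along with the landmark count and coordinate ranges, is an assumed simplification.

```python
import numpy as np

# Illustrative stand-in for the shallow network: fit a linear map from facial
# landmark coordinates to on-screen gaze position (both synthetic here).
rng = np.random.default_rng(0)
n_frames, n_landmarks = 200, 5
X = rng.uniform(0, 640, size=(n_frames, 2 * n_landmarks))  # (x, y) per landmark
true_W = rng.normal(size=(2 * n_landmarks, 2))
y = X @ true_W                      # synthetic gaze targets (screen px)

# Fit the mapping by least squares and evaluate the reconstruction error.
W, *_ = np.linalg.lstsq(X, y, rcond=None)
err_px = np.linalg.norm(X @ W - y, axis=1).mean()
print(f"mean gaze error: {err_px:.2e} px")
```

On exactly linear synthetic data the fit recovers the mapping almost perfectly; real landmark-to-gaze relationships are nonlinear, which is why the study trains a neural network instead.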
Comprehensive ethological analysis of fear expression in rats using DeepLabCut and SimBA machine learning model
Defensive responses to threat-associated cues are commonly evaluated using conditioned freezing or suppression of operant responding. However, rats display a broad range of behaviors and shift their defensive behaviors based on the immediacy of threats and context. This study aimed to systematically quantify the defensive behaviors that are triggered in response to threat-associated cues and assess whether they can be accurately identified using DeepLabCut in conjunction with SimBA. We evaluated behavioral responses to fear using the auditory fear conditioning paradigm. Observable behaviors triggered by threat-associated cues were manually scored using EthoVision XT. Subsequently, we investigated the effects of diazepam (0, 0.3, or 1 mg/kg), administered intraperitoneally before fear memory testing, to assess its anxiolytic impact on these behaviors. We then developed a DeepLabCut + SimBA workflow for ethological analysis employing a series of machine learning models. The accuracy of the behavior classifications generated by this pipeline was evaluated by comparing its output scores to the manually annotated scores. Our findings show that, besides conditioned suppression and freezing, rats exhibit heightened risk-assessment behaviors, including sniffing, rearing, free-air whisking, and head scanning. We observed that diazepam dose-dependently mitigates these risk-assessment behaviors in both sexes, suggesting good predictive validity of our readouts. With an adequate amount of training data (approximately >30,000 frames containing a given behavior), the DeepLabCut + SimBA workflow yields high accuracy with reasonable transferability for classifying well-represented behaviors under a different experimental condition. We also found that maintaining the same conditions between training and evaluation data sets is recommended when developing a DeepLabCut + SimBA workflow to achieve the highest accuracy. Our findings suggest that an ethological analysis can be used to assess fear learning. 
With the application of DeepLabCut and SimBA, this approach provides an alternative method to decode ongoing defensive behaviors in both male and female rats for further investigation of fear-related neurobiological underpinnings.
Deep learning-based behavioral profiling of rodent stroke recovery
Background: Stroke research relies heavily on rodent behavior when assessing underlying disease mechanisms and treatment efficacy. Although functional motor recovery is considered the primary targeted outcome, tests in rodents are still poorly reproducible and often unsuitable for unraveling the complex behavior after injury. Results: Here, we provide a comprehensive 3D gait analysis of mice after focal cerebral ischemia based on new deep learning-based software (DeepLabCut, DLC) that requires only basic behavioral equipment. We demonstrate high-precision 3D tracking of 10 body parts (including all relevant joints and reference landmarks) in several mouse strains. Building on this rigorous motion tracking, a comprehensive post-analysis (with >100 parameters) unveils biologically relevant differences in locomotor profiles after a stroke over a time course of 3 weeks. We further refine the widely used ladder rung test using deep learning and compare its performance to human annotators. The generated DLC-assisted tests were then benchmarked against five widely used conventional behavioral set-ups (neurological scoring, rotarod, ladder rung walk, cylinder test, and single-pellet grasping) regarding sensitivity, accuracy, time use, and costs. Conclusions: We conclude that deep learning-based motion tracking with comprehensive post-analysis provides accurate and sensitive data to describe the complex recovery of rodents following a stroke. The experimental set-up and analysis can also benefit a range of other neurological injuries that affect locomotion.
Quantifying social distance using deep learning-based video analysis: results from the BTBR mouse model of autism
Autism spectrum disorder (ASD) is characterized by challenges in social communication, difficulties in understanding social cues, a tendency to perform repetitive behaviors, and restricted interests. BTBR T+ Itpr3tf/J (BTBR) mice exhibit ASD-like behavior and are often used to study the biological basis of ASD. Social behavior in BTBR mice is typically scored manually by experimenters, which limits the precision and accuracy of behavioral quantification. Recent advancements in deep learning-based tools for machine vision, such as DeepLabCut (DLC), enable automated tracking of individual mice housed in social groups. Here, we used DLC to measure locomotion and social distance in pairs of familiar mice. We quantified social distance as the Euclidean distance between pairs of tracked mice. BTBR mice showed hyperlocomotion and greater social distance than CBA control mice. BTBR social distance was consistently greater than that of CBA control mice across the duration of a 60-min experiment. Despite exhibiting greater social distance, BTBR mice showed socio-spatial arrangements of heads, bodies, and tails comparable to those of CBA control mice. We also found that age, sex, and body size may affect social distance. Our findings demonstrate that DeepLabCut facilitates the quantification of social distance in BTBR mice, providing a complementary tool for existing behavioral assays.
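The frame-wise Euclidean distance measure described above is straightforward to sketch; the coordinates below are toy values, not the study's tracking data.

```python
import numpy as np

def social_distance(body_a, body_b):
    """Frame-wise Euclidean distance between the tracked body centers of two
    mice, given (n_frames, 2) arrays of (x, y) coordinates."""
    return np.linalg.norm(np.asarray(body_a) - np.asarray(body_b), axis=1)

# Toy tracks: two mice 30 px apart in frame 1, then 50 px apart in frame 2.
a = np.array([[0.0, 0.0], [0.0, 0.0]])
b = np.array([[30.0, 0.0], [30.0, 40.0]])
print(social_distance(a, b))  # → [30. 50.]
```

Averaging this per-frame series over a session yields the group-level social-distance comparison reported in the abstract.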
BehaviorDEPOT is a simple, flexible tool for automated behavioral detection based on markerless pose tracking
Quantitative descriptions of animal behavior are essential to study the neural substrates of cognitive and emotional processes. Analyses of naturalistic behaviors are often performed by hand or with expensive, inflexible commercial software. Recently, machine learning methods for markerless pose estimation enabled automated tracking of freely moving animals, including in labs with limited coding expertise. However, classifying specific behaviors based on pose data requires additional computational analyses and remains a significant challenge for many groups. We developed BehaviorDEPOT (DEcoding behavior based on POsitional Tracking), a simple, flexible software program that can detect behavior from video timeseries and can analyze the results of experimental assays. BehaviorDEPOT calculates kinematic and postural statistics from keypoint tracking data and creates heuristics that reliably detect behaviors. It requires no programming experience and is applicable to a wide range of behaviors and experimental designs. We provide several hard-coded heuristics. Our freezing detection heuristic achieves above 90% accuracy in videos of mice and rats, including those wearing tethered head-mounts. BehaviorDEPOT also helps researchers develop their own heuristics and incorporate them into the software’s graphical interface. Behavioral data is stored framewise for easy alignment with neural data. We demonstrate the immediate utility and flexibility of BehaviorDEPOT using popular assays including fear conditioning, decision-making in a T-maze, open field, elevated plus maze, and novel object exploration.
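A velocity-threshold heuristic of the kind described above (e.g. for freezing detection) can be sketched as follows; the single-keypoint simplification and the threshold values are illustrative assumptions, not BehaviorDEPOT's actual parameters.

```python
import numpy as np

def detect_freezing(xy, speed_thresh, min_frames):
    """Boolean mask marking 'freezing': frames where the keypoint's
    frame-to-frame displacement stays below speed_thresh for at least
    min_frames consecutive frames."""
    speed = np.linalg.norm(np.diff(np.asarray(xy), axis=0), axis=1)
    still = np.concatenate([[False], speed < speed_thresh])  # align to frames
    mask = np.zeros(len(xy), dtype=bool)
    run_start = None
    for i, s in enumerate(still):
        if s and run_start is None:
            run_start = i                      # a still run begins
        elif not s and run_start is not None:
            if i - run_start >= min_frames:    # run long enough: mark it
                mask[run_start:i] = True
            run_start = None
    if run_start is not None and len(still) - run_start >= min_frames:
        mask[run_start:] = True                # run extends to the last frame
    return mask

# Toy track: motion, then four near-still frames, then motion again.
xy = np.array([[0, 0], [5, 0], [10, 0], [10.1, 0], [10.1, 0],
               [10.2, 0], [10.2, 0], [15, 0]], dtype=float)
print(detect_freezing(xy, speed_thresh=0.5, min_frames=3))
```

The minimum-duration requirement is what distinguishes a freezing bout from a momentary pause, mirroring the kinematic-statistics-plus-heuristic design the abstract describes.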
Markerless motion analysis to assess reaching-sideways in individuals with dyskinetic cerebral palsy: A validity study
This study aimed to evaluate clinical utility of 2D-markerless motion analysis (2DMMA) from a single camera during a reaching-sideways-task in individuals with dyskinetic cerebral palsy (DCP) by determining (1) concurrent validity by correlating 2DMMA against marker-based 3D-motion analysis (3DMA) and (2) construct validity by assessing differences in 2DMMA features between DCP and typically developing (TD) peers. 2DMMA key points were tracked from frontal videos of a single camera by DeepLabCut and accuracy was assessed against human labelling. Shoulder, elbow and wrist angles were calculated from 2DMMA and 3DMA (as gold standard) and correlated to assess concurrent validity. Additionally, execution time and variability features such as mean point-wise standard deviation of the angular trajectories (i.e. shoulder elevation, elbow and wrist flexion/extension) and wrist trajectory deviation by mean overshoot and convex hull were calculated from key points. 2DMMA features were compared between the DCP group and TD peers to assess construct validity. Fifty-one individuals (30 DCP;21 TD; age:5–24 years) participated. An accuracy of approximately 1.5 cm was reached for key point tracking. While significant correlations were found for wrist (ρ = 0.810;p < 0.001) and elbow angles (ρ = 0.483;p < 0.001), 2DMMA shoulder angles were not correlated (ρ = 0.247;p = 0.102) to 3DMA. Wrist and elbow angles, execution time and variability features all differed between groups (Effect sizes 0.35–0.81;p < 0.05). Videos of a reaching-sideways-task processed by 2DMMA to assess upper extremity movements in DCP showed promising validity. The method is especially valuable to assess movement variability.