Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
402 result(s) for "Ray, Shantanu"
Large-scale capture of hidden fluorescent labels for training generalizable markerless motion capture models
2023
Deep learning-based markerless tracking has revolutionized studies of animal behavior. Yet the generalizability of trained models tends to be limited, as new training data typically needs to be generated manually for each setup or visual environment. With each model trained from scratch, researchers track distinct landmarks and analyze the resulting kinematic data in idiosyncratic ways. Moreover, due to inherent limitations in manual annotation, only a sparse set of landmarks are typically labeled. To address these issues, we developed an approach, which we term GlowTrack, for generating orders of magnitude more training data, enabling models that generalize across experimental contexts. We describe: a) a high-throughput approach for producing hidden labels using fluorescent markers; b) a multi-camera, multi-light setup for simulating diverse visual conditions; and c) a technique for labeling many landmarks in parallel, enabling dense tracking. These advances lay a foundation for standardized behavioral pipelines and more complete scrutiny of movement.
Deep learning-based models for tracking behavior are often constrained by manual annotation. Here, authors present GlowTrack, an approach using fluorescence to generate large and diverse training sets that improve model robustness and tracking coverage.
Journal Article
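The core idea in the abstract above is that fluorescent markers are invisible under normal light but segment easily under the right illumination, yielding "hidden" labels without manual clicking. The sketch below is a minimal illustration of that segmentation step, assuming a single bright dye spot and simple intensity thresholding; the function name and threshold scheme are assumptions for illustration, not GlowTrack's actual pipeline.

```python
import numpy as np

def extract_hidden_label(uv_frame, rel_threshold=0.8):
    """Locate a fluorescent marker centroid in a UV-illuminated frame.

    Illustrative sketch: a real pipeline would segment each dye blob
    separately and handle noise; here we threshold relative to the
    frame maximum and return the centroid of the bright pixels.
    """
    mask = uv_frame >= rel_threshold * uv_frame.max()
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])  # (row, col) centroid

# Toy frame: one 5x5 fluorescent spot centered at (30, 70)
frame = np.zeros((100, 100))
frame[28:33, 68:73] = 1.0
label = extract_hidden_label(frame)
```

Because the label comes from the imaging physics rather than a human annotator, the same spot can be located in every frame of a high-throughput capture session, which is what makes training sets of this scale feasible.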
Impact of correcting visual impairment and low vision in deaf-mute students in Pune, India
2016
Aim: The aim of this study was to evaluate visual acuity and vision function before and after providing spectacles and low vision devices (LVDs) to deaf-mute students. Settings: Schools for deaf-mute students in West Maharashtra. Methods: Hearing-impaired children in all special schools in Pune district underwent detailed visual acuity testing (with teachers' help), refraction, external ocular examination, and fundoscopy. Students with refractive errors and low vision were provided with spectacles and LVDs. The LV Prasad-Functional Vision Questionnaire, consisting of twenty items, was administered to each subject before and after providing spectacles and LVDs. Statistical Analysis: Wilcoxon matched-pairs signed-ranks test. Results: 252/929 (27.1%) students had a refractive error, and 794 (85.5%) were profoundly deaf. Two hundred and fifty students were dispensed spectacles and LVDs. Mean LogMAR visual acuity before the introduction of spectacles and LVDs was 0.33 ± 0.36, which improved to 0.058 (P < 0.0001) after intervention. The difference in functional vision pre- and post-intervention was statistically significant (P < 0.0001) for questions 1-19. The most commonly reported difficulties were with distance tasks such as reading the bus destination (58.7%), making out the bus number (51.1%), copying from the blackboard (47.7%), and seeing whether somebody is waving a hand from across the road (45.5%). In response to question 20, 57.4% of students felt that their vision was much worse than their friends' vision; this proportion fell to 17.6% after dispensing spectacles and LVDs. Conclusion: Spectacles and LVDs reduced visual impairment and improved vision function in deaf-mute students, augmenting their ability to navigate in and out of school.
Journal Article
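The abstract above names its statistical analysis: a Wilcoxon matched-pairs signed-ranks test on paired pre- and post-intervention measurements. A minimal sketch of that test on paired LogMAR acuities follows; the numbers are fabricated for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical pre/post LogMAR acuities for 10 students (illustrative
# values only; lower LogMAR means better acuity).
pre  = np.array([0.5, 0.3, 0.8, 0.2, 0.6, 0.4, 0.7, 0.3, 0.5, 0.9])
post = np.array([0.1, 0.0, 0.2, 0.1, 0.1, 0.0, 0.2, 0.1, 0.0, 0.3])

# Paired, non-parametric test: appropriate when the same subjects are
# measured before and after an intervention and normality is doubtful.
stat, p = wilcoxon(pre, post)
```

Here every student improves, so the signed-rank statistic is 0 and the p-value is small; in the study itself the analogous comparison was reported as P < 0.0001.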
Large-scale capture of hidden fluorescent labels for training generalizable markerless motion capture models
2022
Recent advances in deep learning-based markerless pose estimation have dramatically improved the scale and ease with which body landmarks can be tracked in studies of animal behavior. However, pose estimation for animals in a laboratory setting still faces some specific challenges. Researchers typically need to manually generate new training data for each experimental setup and visual environment, limiting the generalizability of this approach. With each network being trained from scratch, different investigators track distinct anatomical landmarks and analyze the resulting kinematic data in idiosyncratic ways. Moreover, much of the movement data is discarded: only a few sparse landmarks are typically labeled, due to the inherent scale and accuracy limits of manual annotation. To address these issues, we developed an approach, which we term GlowTrack, for generating large training datasets that overcome the relatively modest limits of manual labeling, enabling deep learning models that generalize across experimental contexts. The key innovations are: a) an automated, high-throughput approach for generating hidden labels free of human error using fluorescent markers; b) a multi-camera, multi-light setup for generating large amounts of training data under diverse visual conditions; and c) a technique for massively parallel tracking of hundreds of landmarks simultaneously using computer vision feature matching algorithms, providing dense coverage for kinematic analysis at a resolution not currently available. These advances yield versatile deep learning models that are trained at scale, laying the foundation for standardized behavioral pipelines and more complete scrutiny of animal movements.
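Point (c) in the abstract above relies on computer vision feature matching to track hundreds of landmarks in parallel. The sketch below illustrates the matching step in its simplest form, assuming each landmark carries a fixed-length descriptor and assigning each reference landmark to its nearest neighbour in a new frame; the function and descriptors are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def match_landmarks(ref_desc, new_desc):
    """Propagate landmark identities between frames by descriptor matching.

    Simplified stand-in for feature matching: each reference landmark
    descriptor is matched to its nearest neighbour (Euclidean distance)
    among the descriptors detected in the new frame.
    """
    # Pairwise distances, shape (n_ref, n_new)
    d = np.linalg.norm(ref_desc[:, None, :] - new_desc[None, :, :], axis=-1)
    return d.argmin(axis=1)  # best-matching new index per reference landmark

rng = np.random.default_rng(0)
ref = rng.normal(size=(5, 8))                # 5 landmarks, 8-D descriptors
perm = [2, 0, 4, 1, 3]                       # landmarks detected in new order
new = ref[perm] + 0.01 * rng.normal(size=(5, 8))  # small appearance change
matches = match_landmarks(ref, new)          # recovers the correspondence
```

Because matching is independent per landmark, the same machinery scales from a few manually chosen points to dense coverage, which is the resolution gain the abstract emphasizes.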