Catalogue Search | MBRL
Explore the vast range of titles available.
12,497 result(s) for "Video Recording - methods"
DeepLabCut: markerless pose estimation of user-defined body parts with deep learning
by Mackenzie Weygandt Mathis; Bethge, Matthias; Abe, Taiga
in Algorithms; Animal behavior; Artificial neural networks
2018
Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy.
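The pipeline this abstract describes ends with a per-body-part score map whose maximum gives the predicted keypoint location. A minimal, self-contained sketch of that localization step, with the network mocked by a hand-built Gaussian score map (all names here are illustrative, not DeepLabCut's actual API):

```python
# Toy sketch of heatmap-based keypoint localization, the core readout in
# markerless pose estimators: a network outputs a score map per body part,
# and the predicted location is the score-map argmax.
import math

def gaussian_scoremap(h, w, cy, cx, sigma=2.0):
    """Build an h x w score map peaked at (cy, cx); stands in for a network output."""
    return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def locate_keypoint(scoremap):
    """Return (row, col, confidence) of the score-map maximum."""
    best = (0, 0, scoremap[0][0])
    for y, row in enumerate(scoremap):
        for x, v in enumerate(row):
            if v > best[2]:
                best = (y, x, v)
    return best

heat = gaussian_scoremap(32, 32, cy=10, cx=21)
y, x, conf = locate_keypoint(heat)
print(y, x, round(conf, 3))  # peak sits where the mock map was centred
```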
Journal Article
Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence Revolutionizing Wildlife Monitoring and Conservation
2016
Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAVs), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground-based and test-flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification.
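The detection step this abstract describes can be illustrated with a toy version: warm-bodied animals appear as bright blobs in a thermal frame, so thresholding plus connected-component labelling yields candidate detections. The frame values and threshold below are invented for illustration and are not from the paper:

```python
# Threshold a thermal frame and count 4-connected warm blobs.
def detect_blobs(frame, threshold):
    """Return the number of 4-connected components with values >= threshold."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for sy in range(h):
        for sx in range(w):
            if frame[sy][sx] >= threshold and not seen[sy][sx]:
                blobs += 1                      # new component found
                stack = [(sy, sx)]
                seen[sy][sx] = True
                while stack:                    # flood-fill the component
                    y, x = stack.pop()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                                and frame[ny][nx] >= threshold:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return blobs

frame = [
    [20, 20, 20, 20, 20],
    [20, 35, 36, 20, 20],
    [20, 34, 20, 20, 33],
    [20, 20, 20, 20, 34],
]
print(detect_blobs(frame, threshold=30))  # -> 2 (two warm blobs)
```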
Journal Article
idTracker: tracking individuals in a group by automatic identification of unmarked animals
by Arganda, Sara; de Polavieja, Gonzalo G; Vicente-Page, Julián
in 631/114/1564; 631/114/2400; 631/158
2014
The transformation of individual animal images acquired from videos into unique reference fingerprints allows for robust tracking of individuals in groups and reidentification of individuals between sightings and across different videos.
Animals in groups touch each other, move in paths that cross, and interact in complex ways. Current video tracking methods sometimes switch identities of unmarked individuals during these interactions. These errors propagate and result in random assignments after a few minutes unless manually corrected. We present idTracker, a multitracking algorithm that extracts a characteristic fingerprint from each animal in a video recording of a group. It then uses these fingerprints to identify every individual throughout the video. Tracking by identification prevents propagation of errors, and the correct identities can be maintained indefinitely. idTracker distinguishes animals even when humans cannot, such as for size-matched siblings, and reidentifies animals after they temporarily disappear from view or across different videos. It is robust, easy to use and general. We tested it on fish (Danio rerio and Oryzias latipes), flies (Drosophila melanogaster), ants (Messor structor) and mice (Mus musculus).
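The tracking-by-identification idea can be sketched with a toy fingerprint matcher. idTracker's real fingerprints are far richer than the intensity histogram assumed here; all names and values are illustrative only:

```python
# Assign a new detection to the individual whose reference "fingerprint"
# (here, a coarse intensity histogram of an image patch) it matches best.
def histogram(patch, bins=4, top=256):
    """Coarse intensity histogram standing in for a real fingerprint."""
    hist = [0] * bins
    for v in patch:
        hist[min(v * bins // top, bins - 1)] += 1
    return hist

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def identify(patch, references):
    """Return the id whose reference fingerprint is nearest to this patch."""
    fp = histogram(patch)
    return min(references, key=lambda k: l1(fp, references[k]))

refs = {
    "fish_A": histogram([10, 20, 30, 200, 210, 220]),    # dark/bright mix
    "fish_B": histogram([100, 110, 120, 130, 140, 150]), # mid-tones
}
print(identify([15, 25, 35, 205, 215, 225], refs))  # -> fish_A
```

Because identity is re-derived from the fingerprint on every sighting rather than propagated frame to frame, an occlusion or crossing cannot permanently swap two individuals, which is the key property the abstract highlights.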
Journal Article
Efficient and accurate extraction of in vivo calcium signals from microendoscopic video data
by Giovannucci, Andrea; Sabatini, Bernardo L; Resendez, Shanna L
in Algorithms; Animals; Brain - physiology
2018
In vivo calcium imaging through microendoscopic lenses enables imaging of previously inaccessible neuronal populations deep within the brains of freely moving animals. However, it is computationally challenging to extract single-neuronal activity from microendoscopic data, because of the very large background fluctuations and high spatial overlaps intrinsic to this recording modality. Here, we describe a new constrained matrix factorization approach to accurately separate the background and then demix and denoise the neuronal signals of interest. We compared the proposed method against previous independent components analysis and constrained nonnegative matrix factorization approaches. On both simulated and experimental data recorded from mice, our method substantially improved the quality of extracted cellular signals and detected more well-isolated neural signals, especially in noisy data regimes. These advances can in turn significantly enhance the statistical power of downstream analyses, and ultimately improve scientific conclusions derived from microendoscopic data.
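The core problem in this abstract is separating a large, shared background fluctuation from small neuronal transients. As a crude stand-in for the paper's constrained matrix factorization, the sketch below estimates the per-frame background as the median across pixels (an assumption made only for this toy) and subtracts it before reading out a neuron's trace:

```python
# Demix a toy microendoscopic movie: shared background drift plus one
# neuronal transient, separated by per-frame median background subtraction.
def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def demix_trace(movie, neuron_pixel):
    """movie: list of frames, each a list of pixel values; returns the
    background-subtracted trace of one pixel."""
    trace = []
    for frame in movie:
        background = median(frame)          # shared-fluctuation estimate
        trace.append(frame[neuron_pixel] - background)
    return trace

# 5 pixels, 4 frames; background drifts 100 -> 130, and the neuron at
# pixel 2 fires a +50 transient on frame 2 only.
movie = [
    [100, 100, 100, 100, 100],
    [110, 110, 110, 110, 110],
    [120, 120, 170, 120, 120],
    [130, 130, 130, 130, 130],
]
print(demix_trace(movie, neuron_pixel=2))  # -> [0, 0, 50, 0]
```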
Journal Article
Real-World Implementation of Video Outpatient Consultations at Macro, Meso, and Micro Levels: Mixed-Method Study
by Bhattacharya, Satya; Ramoutar, Seendy; Wherton, Joseph
in Cancer; Case studies; Chronic illnesses
2018
There is much interest in virtual consultations using video technology. Randomized controlled trials have shown video consultations to be acceptable, safe, and effective in selected conditions and circumstances. However, this model has rarely been mainstreamed and sustained in real-world settings.
The study sought to (1) define good practice and inform implementation of video outpatient consultations and (2) generate transferable knowledge about challenges to scaling up and routinizing this service model.
A multilevel, mixed-method study of Skype video consultations (micro level) was embedded in an organizational case study (meso level), taking account of national context and wider influences (macro level). The study followed the introduction of video outpatient consultations in three clinical services (diabetes, diabetes antenatal, and cancer surgery) in a National Health Service trust (covering three hospitals) in London, United Kingdom. Data sources included 36 national-level stakeholders (exploratory and semistructured interviews), longitudinal organizational ethnography (300 hours of observations; 24 staff interviews), 30 videotaped remote consultations, 17 audiotaped face-to-face consultations, and national and local documents. Qualitative data, analyzed using sociotechnical change theories, addressed staff and patient experience and organizational and system drivers. Quantitative data, analyzed via descriptive statistics, included uptake of video consultations by staff and patients and microcategorization of different kinds of talk (using the Roter interaction analysis system).
When clinical, technical, and practical preconditions were met, video consultations appeared safe and were popular with some patients and staff. Compared with face-to-face consultations for similar conditions, video consultations were very slightly shorter, patients did slightly more talking, and both parties sometimes needed to make explicit things that typically remained implicit in a traditional encounter. Video consultations appeared to work better when the clinician and patient already knew and trusted each other. Some clinicians used Skype adaptively to respond to patient requests for ad hoc encounters in a way that appeared to strengthen supported self-management. The reality of establishing video outpatient services in a busy and financially stretched acute hospital setting proved more complex and time-consuming than originally anticipated. By the end of this study, between 2% and 22% of consultations were being undertaken remotely by participating clinicians. In the remainder, clinicians chose not to participate, or video consultations were considered impractical, technically unachievable, or clinically inadvisable. Technical challenges were typically minor but potentially prohibitive.
Video outpatient consultations appear safe, effective, and convenient for patients in situations where participating clinicians judge them clinically appropriate, but such situations are a fraction of the overall clinic workload. As with other technological innovations, some clinicians will adopt readily, whereas others will need incentives and support. There are complex challenges to embedding video consultation services within routine practice in organizations that are hesitant to change, especially in times of austerity.
Journal Article
Single-shot stereo-polarimetric compressed ultrafast photography for light-speed observation of high-dimensional optical transients with picosecond resolution
by Liang, Jinyang; Wang, Peng; Zhu, Liren
in 639/624/1107/510; 639/624/400/584; 639/766/930/2735
2020
Simultaneous and efficient ultrafast recording of multiple photon tags contributes to high-dimensional optical imaging and characterization in numerous fields. Existing high-dimensional optical imaging techniques that record space and polarization cannot detect the photon’s time of arrival owing to the limited speeds of the state-of-the-art electronic sensors. Here, we overcome this long-standing limitation by implementing stereo-polarimetric compressed ultrafast photography (SP-CUP) to record light-speed high-dimensional events in a single exposure. Synergizing compressed sensing and streak imaging with stereoscopy and polarimetry, SP-CUP enables video-recording of five photon tags (x, y, z: space; t: time of arrival; and ψ: angle of linear polarization) at 100 billion frames per second with a picosecond temporal resolution. We applied SP-CUP to the spatiotemporal characterization of linear polarization dynamics in early-stage plasma emission from laser-induced breakdown. This system also allowed three-dimensional ultrafast imaging of the linear polarization properties of a single ultrashort laser pulse propagating in a scattering medium.
Existing high-dimensional optical imaging techniques that record space and polarization cannot detect the photon’s time of arrival due to the limited speeds of electronic sensors. Here, the authors develop a single-shot ultrafast imaging modality to record light-speed high-dimensional events with picosecond resolution.
Journal Article
idtracker.ai: tracking all individuals in small or large collectives of unmarked animals
by Romero-Ferrero, Francisco; Bergomi, Mattia G; Heras, Francisco J H
in Animals; Artificial neural networks; Computer programs
2019
Understanding of animal collectives is limited by the ability to track each individual. We describe an algorithm and software that extract all trajectories from video, with high identification accuracy for collectives of up to 100 individuals. idtracker.ai uses two convolutional networks: one that detects when animals touch or cross and another for animal identification. The tool is trained with a protocol that adapts to video conditions and tracking difficulty.
The idtracker.ai software tracks freely moving animals in large groups of up to 100 individuals. The tool is versatile and has been applied to groups of fruit flies, zebrafish, medaka, ants and mice.
Journal Article
Recording behaviour of indoor-housed farm animals automatically using machine vision technology: A systematic review
by Fernández, Alberto Peña; Siegford, Janice; Steibel, Juan
in Agricultural equipment; Algorithms; Analysis
2019
Large-scale phenotyping of animal behaviour traits is time consuming and has led to increased demand for technologies that can automate these procedures. Automated tracking of animals has been successful in controlled laboratory settings, but recording from animals in large groups in highly variable farm settings presents challenges. The aim of this review is to provide a systematic overview of the advances that have occurred in automated, high throughput image detection of farm animal behavioural traits with welfare and production implications. Peer-reviewed publications written in English were reviewed systematically following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. After identification, screening, and assessment for eligibility, 108 publications met these specifications and were included for qualitative synthesis. Data collected from the papers included camera specifications, housing conditions, group size, algorithm details, procedures, and results. Most studies utilized standard digital colour video cameras for data collection, with increasing use of 3D cameras in papers published after 2013. Papers including pigs (across production stages) were the most common (n = 63). The most common behaviours recorded included activity level, area occupancy, aggression, gait scores, resource use, and posture. Our review revealed many overlaps in methods applied to analysing behaviour, and most studies started from scratch instead of building upon previous work. Training and validation sample sizes were generally small (mean±s.d. groups = 3.8±5.8), and data collection and testing took place in relatively controlled environments. To advance our ability to automatically phenotype behaviour, future research should build upon existing knowledge and validate technology under commercial settings, and publications should explicitly describe recording conditions in detail to allow studies to be reproduced.
Journal Article
Language from police body camera footage shows racial disparities in officer respect
by Prabhakaran, Vinodkumar; Jurafsky, Dan; Griffiths, Camilla M.
in Adult; Black people; Cameras
2017
Using footage from body-worn cameras, we analyze the respectfulness of police officer language toward white and black community members during routine traffic stops. We develop computational linguistic methods that extract levels of respect automatically from transcripts, informed by a thin-slicing study of participant ratings of officer utterances. We find that officers speak with consistently less respect toward black versus white community members, even after controlling for the race of the officer, the severity of the infraction, the location of the stop, and the outcome of the stop. Such disparities in common, everyday interactions between police and the communities they serve have important implications for procedural justice and the building of police–community trust.
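A highly simplified sketch of scoring transcript utterances for respectfulness: the study learned respect weights from participant ratings of officer utterances, whereas the tiny hand-made lexicon below is invented purely to illustrate the scoring step:

```python
# Score an utterance by summing weights of respect-related cue words.
# Both lexicons are illustrative stand-ins, not the study's learned model.
RESPECT_CUES = {"sir": 1.0, "ma'am": 1.0, "please": 0.8, "thanks": 0.6,
                "sorry": 0.5}
DISRESPECT_CUES = {"dude": -0.5, "man": -0.3}

def respect_score(utterance):
    """Sum cue-word weights over a lightly normalized utterance."""
    words = utterance.lower().replace(",", "").replace(".", "").split()
    return sum(RESPECT_CUES.get(w, 0) + DISRESPECT_CUES.get(w, 0)
               for w in words)

print(respect_score("License and registration, please, sir."))  # positive
print(respect_score("Hands on the wheel, dude."))               # negative
```

Aggregating such per-utterance scores across many stops, while controlling for covariates like infraction severity and stop outcome, is what allows the disparity analysis the abstract describes.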
Journal Article
Calibration Techniques for Accurate Measurements by Underwater Camera Systems
2015
Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.
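The refractive effect that underwater calibration must model can be illustrated with Snell's law (n1 sin θ1 = n2 sin θ2): a ray crossing the housing port bends at the air-water interface, so an uncorrected in-air camera model misplaces underwater points. The refractive indices below are standard textbook values, not from the paper:

```python
# Compute the in-water direction of a ray that leaves the camera housing
# at a given in-air angle, per Snell's law.
import math

N_AIR, N_WATER = 1.000, 1.333

def refracted_angle(theta_air_deg):
    """Angle (degrees) of the ray in water for a given in-air angle."""
    s = N_AIR * math.sin(math.radians(theta_air_deg)) / N_WATER
    return math.degrees(math.asin(s))

# A ray at 30 degrees off-axis in air travels at only about 22 degrees in
# water, which is why underwater scenes appear magnified by roughly the
# ratio of refractive indices unless calibration compensates for it.
print(round(refracted_angle(30.0), 2))
```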
Journal Article