Catalogue Search | MBRL
18 result(s) for "Pfrommer, Bernd"
Transmission-Based Vertebrae Strength Probe Development: Far Field Probe Property Extraction and Integrated Machine Vision Distance Validation Experiments
2023
We are developing a transmission-based probe for point-of-care assessment of vertebrae strength, needed for selecting the instrumentation used to support the spinal column during spinal fusion surgery. The device is based on a transmission probe whereby thin coaxial probes are inserted into the small canals through the pedicles and into the vertebrae, and a broadband signal is transmitted from one probe to the other across the bone tissue. Simultaneously, a machine vision scheme has been developed to measure the separation distance between the probe tips while they are inserted into the vertebrae. The latter technique includes a small camera mounted to the handle of one probe and associated fiducials printed on the other. Machine vision techniques make it possible to track the location of the fiducial-based probe tip and compare it to the fixed coordinate location of the camera-based probe tip. The combination of the two methods allows for straightforward calculation of tissue characteristics by exploiting the antenna far field approximation. Validation tests of the two concepts are presented as a precursor to clinical prototype development.
Journal Article
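The abstract does not give the extraction formulas, but the idea of combining a measured transmission phase with a machine-vision distance under the far-field approximation can be sketched. Below is an illustrative toy only, assuming an idealized lossless plane-wave path between the probe tips; the specific frequency and distance values are hypothetical, not from the paper:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def relative_permittivity(phase_delay_rad, freq_hz, distance_m):
    """Back out the relative permittivity of the tissue between the two
    probe tips from the measured transmission phase delay, assuming an
    idealized plane-wave (far-field) path:
        phase = 2*pi*f*d*sqrt(eps_r)/c
    where d is the tip separation supplied by the machine-vision system."""
    refractive_index = phase_delay_rad * C / (2 * math.pi * freq_hz * distance_m)
    return refractive_index ** 2

# Hypothetical example: 1 GHz signal across a 30 mm tip separation.
phase = 2 * math.pi * 1e9 * 0.03 * 2.0 / C  # delay a medium with eps_r = 4 would produce
print(relative_permittivity(phase, 1e9, 0.03))  # recovers eps_r = 4.0
```

The point of the sketch is only the division of labor described in the abstract: the vision system fixes `distance_m`, so the electromagnetic measurement reduces to a single unknown material property.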
Multi-view Tracking, Re-ID, and Social Network Analysis of a Flock of Visually Similar Birds in an Outdoor Aviary
2023
The ability to capture detailed interactions among individuals in a social group is foundational to our study of animal behavior and neuroscience. Recent advances in deep learning and computer vision are driving rapid progress in methods that can record the actions and interactions of multiple individuals simultaneously. Many social species, such as birds, however, live deeply embedded in a three-dimensional world. This world introduces additional perceptual challenges (occlusions, orientation-dependent appearance, large variation in apparent size, and poor sensor coverage for 3D reconstruction) that are not encountered by applications studying animals that move and interact only on 2D planes. Here we introduce a system for studying the behavioral dynamics of a group of songbirds as they move throughout a 3D aviary. We study the complexities that arise when tracking a group of closely interacting animals in three dimensions and introduce a novel dataset for evaluating multi-view trackers. Finally, we analyze captured ethogram data and demonstrate that social context affects the distribution of sequential interactions between birds in the aviary.
Journal Article
Structure, Bonding, and Geochemistry of Xenon at High Pressures
by Louie, Steven G.; Caldwell, Wendel A.; Nguyen, Jeffrey H.
in Chemical bonds, Chemistry, Climate
1997
Although xenon becomes metallic at pressures above about 100 gigapascals, a combination of quantum mechanical calculations and high pressure-temperature experiments reveals no tendency on the part of xenon to form a metal alloy with iron or platinum to at least 100 to 150 gigapascals. The transformation of xenon from face-centered cubic (fcc) to hexagonal close-packed (hcp) structures is kinetically hindered, the differences in volume and bulk modulus between the two phases being smaller than we can resolve (less than 0.3 percent and 0.6 gigapascals, respectively). The equilibrium fcc-hcp phase boundary is at 21 (±3) gigapascals, which is a lower pressure than was previously thought, and it is unlikely that Earth's core serves as a reservoir for primordial xenon.
Journal Article
Variation in female songbird state determines signal strength needed to evoke copulation
It is the female response to male signals that determines courtship success. In most songbirds, females control reproduction via the copulation solicitation display (CSD), an innate, stereotyped posture produced in direct response to male displays. Because CSD can be elicited in the absence of males by the presentation of recorded song, CSD production enables investigations into the effects of underlying signal features and behavioral state on female mating preferences. Using computer vision to quantify CSD trajectory in female brown-headed cowbirds (Molothrus ater), we show that both song quality and a female’s internal state predict CSD production, as well as the onset latency and duration of the display. We also show that CSD can be produced in a graded fashion based on both signal strength and internal state. These results emphasize the importance of underlying receiver state in determining behavioral responses and suggest that female responsiveness acts in conjunction with male signal strength to determine the efficacy of male courtship.
Frequency Cam: Imaging Periodic Signals in Real-Time
2022
Due to their high temporal resolution and large dynamic range, event cameras are uniquely suited for the analysis of time-periodic signals in an image. In this work we present an efficient and fully asynchronous event camera algorithm for detecting the fundamental frequency at which image pixels flicker. The algorithm employs a second-order digital infinite impulse response (IIR) filter to perform an approximate per-pixel brightness reconstruction and is more robust to high-frequency noise than the baseline method we compare to. We further demonstrate that using the falling edge of the signal leads to more accurate period estimates than the rising edge, and that for certain signals interpolating the zero-level crossings can further increase accuracy. Our experiments find that the outstanding capabilities of the camera in detecting frequencies up to 64 kHz for a single pixel do not carry over to full-sensor imaging, as readout bandwidth limitations become a serious obstacle. This suggests that a hardware implementation closer to the sensor will allow for greatly improved frequency imaging. We discuss the important design parameters for full-sensor frequency imaging and present Frequency Cam, an open-source implementation as a ROS node that can run on a single core of a laptop CPU at more than 50 million events per second. It produces results that are qualitatively very similar to those obtained from the closed-source vibration analysis module in Prophesee's Metavision Toolkit. The code for Frequency Cam and a demonstration video can be found at https://github.com/berndpfrommer/frequency_cam
TagSLAM: Robust SLAM with Fiducial Markers
2019
TagSLAM provides a convenient, flexible, and robust way of performing Simultaneous Localization and Mapping (SLAM) with AprilTag fiducial markers. By leveraging a few simple abstractions (bodies, tags, cameras), TagSLAM provides a front end to the GTSAM factor graph optimizer that makes it possible to rapidly design a range of experiments that are based on tags: full SLAM, extrinsic camera calibration with non-overlapping views, visual localization for ground truth, loop closure for odometry, pose estimation, etc. We discuss in detail how TagSLAM initializes the factor graph in a robust way, and present loop closure as an application example. TagSLAM is a ROS-based open source package and can be found at https://berndpfrommer.github.io/tagslam_web.
Simultaneous Localization and Layout Model Selection in Manhattan Worlds
2018
In this paper, we will demonstrate how Manhattan structure can be exploited to transform the Simultaneous Localization and Mapping (SLAM) problem, which is typically solved by a nonlinear optimization over feature positions, into a model selection problem solved by a convex optimization over higher order layout structures, namely walls, floors, and ceilings. Furthermore, we show how our novel formulation leads to an optimization procedure that automatically performs data association and loop closure and which ultimately produces the simplest model of the environment that is consistent with the available measurements. We verify our method on real world data sets collected with various sensing modalities.
3D Bird Reconstruction: a Dataset, Model, and Shape Recovery from a Single View
2020
Automated capture of animal pose is transforming how we study neuroscience and social behavior. Movements carry important social cues, but current methods are not able to robustly estimate pose and shape of animals, particularly for social animals such as birds, which are often occluded by each other and objects in the environment. To address this problem, we first introduce a model and multi-view optimization approach, which we use to capture the unique shape and pose space displayed by live birds. We then introduce a pipeline and experiments for keypoint, mask, pose, and shape regression that recovers accurate avian postures from single views. Finally, we provide extensive multi-view keypoint and mask annotations collected from a group of 15 social birds housed together in an outdoor aviary. The project website with videos, results, code, mesh model, and the Penn Aviary Dataset can be found at https://marcbadger.github.io/avian-mesh.
Predictive and Semantic Layout Estimation for Robotic Applications in Manhattan Worlds
2018
This paper describes an approach to automatically extracting floor plans from the kinds of incomplete measurements that could be acquired by an autonomous mobile robot. The approach proceeds by reasoning about extended structural layout surfaces which are automatically extracted from the available data. The scheme can be run in an online manner to build watertight representations of the environment. The system effectively speculates about room boundaries and free space regions, which provides useful guidance to subsequent motion planning systems. Experimental results are presented on multiple data sets.
The Multi Vehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception
2018
Event-based cameras are a new passive sensing modality with a number of benefits over traditional cameras, including extremely low latency, asynchronous data acquisition, high dynamic range and very low power consumption. There has been a lot of recent interest and development in applying algorithms to use the events to perform a variety of 3D perception tasks, such as feature tracking, visual odometry, and stereo depth estimation. However, event cameras currently lack the wealth of labeled data that exists for traditional cameras for use in both testing and development. In this paper, we present a large dataset with a synchronized stereo pair event based camera system, carried on a handheld rig, flown by a hexacopter, driven on top of a car and mounted on a motorcycle, in a variety of different illumination levels and environments. From each camera, we provide the event stream, grayscale images and IMU readings. In addition, we utilize a combination of IMU, a rigidly mounted lidar system, indoor and outdoor motion capture and GPS to provide accurate pose and depth images for each camera at up to 100 Hz. For comparison, we also provide synchronized grayscale images and IMU readings from a frame-based stereo camera system.