7 result(s) for "Graule, Moritz A."
Shipboard design and fabrication of custom 3D-printed soft robotic manipulators for the investigation of delicate deep-sea organisms
Soft robotics is an emerging technology that has shown considerable promise in deep-sea marine biological applications. It is particularly useful in facilitating delicate interactions with fragile marine organisms. This study describes the shipboard design, 3D printing, and integration of custom soft robotic manipulators for investigating and interacting with deep-sea organisms. Soft robotic manipulators were tested down to 2,224 m via a Remotely Operated Vehicle (ROV) in the Phoenix Islands Protected Area (PIPA) and facilitated the study of a diverse suite of soft-bodied and fragile marine life. Instantaneous feedback from the ROV pilots and biologists allowed for rapid redesign, such as adding "fingernails", and refabrication of soft manipulators at sea. These were then used to successfully grasp fragile deep-sea animals, such as goniasterids and holothurians, which have historically been difficult to collect undamaged via rigid mechanical arms and suction samplers. As scientific expeditions to remote parts of the world are costly and lengthy to plan, on-the-fly printing of soft robotic actuators offers a real-time solution for studying and interacting with delicate deep-sea environments and their soft-bodied, brittle, and otherwise fragile organisms. This also offers a less invasive means of interacting with slow-growing deep marine organisms, some of which can be up to 18,000 years old.
Ultra-sensitive and resilient compliant strain gauges for soft machines
Soft machines are a promising design paradigm for human-centric devices [1, 2] and systems required to interact gently with their environment [3, 4]. To enable soft machines to respond intelligently to their surroundings, compliant sensory feedback mechanisms are needed. Specifically, soft alternatives to strain gauges, with high resolution at low strain (less than 5 per cent), could unlock promising new capabilities in soft systems. However, currently available sensing mechanisms typically possess either high strain sensitivity or high mechanical resilience, but not both. The scarcity of resilient and compliant ultra-sensitive sensing mechanisms has confined their operation to laboratory settings, inhibiting their widespread deployment. Here we present a versatile and compliant transduction mechanism for high-sensitivity strain detection with high mechanical resilience, based on strain-mediated contact in anisotropically resistive structures (SCARS). The mechanism relies upon changes in Ohmic contact between stiff, micro-structured, anisotropically conductive meanders encapsulated by stretchable films. The mechanism achieves high sensitivity, with gauge factors greater than 85,000, while being adaptable for use with high-strength conductors, thus producing sensors resilient to adverse loading conditions. The sensing mechanism also exhibits high linearity, as well as insensitivity to bending and twisting deformations, features that are important for soft device applications. To demonstrate the potential impact of our technology, we construct a sensor-integrated, lightweight, textile-based arm sleeve that can recognize gestures without encumbering the hand. We demonstrate predictive tracking and classification of discrete gestures and continuous hand motions via detection of small muscle movements in the arm.
The sleeve demonstration shows the potential of the SCARS technology for the development of unobtrusive, wearable biomechanical feedback systems and human–computer interfaces. Strain gauges with both high sensitivity and high mechanical resilience, based on strain-mediated contact in anisotropically resistive structures, are demonstrated within a sensor-integrated, textile-based sleeve that can recognize human hand motions via muscle deformations.
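The reported gauge factors above 85,000 can be put in context with the standard definition of a gauge factor: the relative resistance change per unit applied strain. A minimal sketch, using hypothetical resistance and strain values that are not from the paper:

```python
# Gauge factor: relative resistance change divided by applied strain,
#   GF = (dR / R0) / strain
def gauge_factor(r0, r, strain):
    return ((r - r0) / r0) / strain

# Hypothetical numbers: a sensor whose resistance doubles at 0.001% strain
# would have a gauge factor of 100,000, the order of the >85,000 reported.
gf = gauge_factor(1000.0, 2000.0, 1e-5)
print(gf)
```

By comparison, conventional metal-foil strain gauges typically have gauge factors around 2, which is what makes the figure above notable.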
GG-LLM: Geometrically Grounding Large Language Models for Zero-shot Human Activity Forecasting in Human-Aware Task Planning
A robot in a human-centric environment needs to account for the human's intent and future motion in its task and motion planning to ensure safe and effective operation. This requires symbolic reasoning about probable future actions and the ability to tie these actions to specific locations in the physical environment. While one can train behavioral models capable of predicting human motion from past activities, this approach requires large amounts of data to achieve acceptable long-horizon predictions. More importantly, the resulting models are constrained to specific data formats and modalities. Moreover, connecting predictions from such models to the environment at hand to ensure the applicability of these predictions is an unsolved problem. We present a system that utilizes a Large Language Model (LLM) to infer a human's next actions from a range of modalities without fine-tuning. A novel aspect of our system that is critical to robotics applications is that it links the predicted actions to specific locations in a semantic map of the environment. Our method leverages the fact that LLMs, trained on a vast corpus of text describing typical human behaviors, encode substantial world knowledge, including probable sequences of human actions and activities. We demonstrate how these localized activity predictions can be incorporated in a human-aware task planner for an assistive robot to reduce the occurrences of undesirable human-robot interactions by 29.2% on average.
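The grounding step, linking a predicted action phrase to a location in a semantic map, can be sketched as a simple label lookup. The map labels, coordinates, and substring-matching rule below are invented for illustration; the paper's actual system uses an LLM for the prediction itself and a richer semantic map:

```python
# Hypothetical semantic map: label -> (x, y) location in the environment.
SEMANTIC_MAP = {
    "sink": (2.1, 0.4),
    "stove": (1.0, 0.8),
    "dining table": (3.5, 2.2),
    "sofa": (5.0, 3.1),
}

def ground_action(predicted_action, semantic_map):
    """Return the first map label (and its location) mentioned in the
    predicted-action text, so a planner can steer clear of that region."""
    text = predicted_action.lower()
    for label, xy in semantic_map.items():
        if label in text:
            return label, xy
    return None

print(ground_action("The person will carry the plates to the dining table",
                    SEMANTIC_MAP))
# -> ('dining table', (3.5, 2.2))
```

A task planner could then treat the returned location as a region the robot should avoid occupying during the predicted activity.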
Incorporating Interpretable Output Constraints in Bayesian Neural Networks
Domains where supervised models are deployed often come with task-specific constraints, such as prior expert knowledge on the ground-truth function, or desiderata like safety and fairness. We introduce a novel probabilistic framework for reasoning with such constraints and formulate a prior that enables us to effectively incorporate them into Bayesian neural networks (BNNs), including a variant that can be amortized over tasks. The resulting Output-Constrained BNN (OC-BNN) is fully consistent with the Bayesian framework for uncertainty quantification and is amenable to black-box inference. Unlike typical BNN inference in uninterpretable parameter space, OC-BNNs widen the range of functional knowledge that can be incorporated, especially for model users without expertise in machine learning. We demonstrate the efficacy of OC-BNNs on real-world datasets, spanning multiple domains such as healthcare, criminal justice, and credit scoring.
Output-Constrained Bayesian Neural Networks
Bayesian neural network (BNN) priors are defined in parameter space, making it hard to encode prior knowledge expressed in function space. We formulate a prior that incorporates functional constraints about what the output can or cannot be in regions of the input space. Output-Constrained BNNs (OC-BNN) represent an interpretable approach of enforcing a range of constraints, fully consistent with the Bayesian framework and amenable to black-box inference. We demonstrate how OC-BNNs improve model robustness and prevent the prediction of infeasible outputs in two real-world applications of healthcare and robotics.
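One way to picture an output constraint entering the prior is as a Gaussian prior over weights combined with a soft penalty on network outputs that fall outside an allowed interval at chosen constraint inputs. The tiny network, weight packing, and quadratic penalty below are illustrative assumptions, not the papers' actual formulation, which defines a proper constraint prior amenable to black-box inference:

```python
import numpy as np

def mlp_forward(w, x):
    # Tiny one-hidden-layer network; w packs all 31 weights (illustration only).
    w1, b1, w2, b2 = w[:10], w[10:20], w[20:30], w[30]
    h = np.tanh(x * w1 + b1)
    return float(h @ w2 + b2)

def oc_log_prior(w, cx, lo, hi, tau=100.0):
    """Gaussian prior over weights plus a soft penalty pushing the
    network's outputs at constraint inputs cx into the interval [lo, hi]."""
    log_p_w = -0.5 * float(np.sum(w ** 2))           # isotropic Gaussian prior
    y = np.array([mlp_forward(w, x) for x in cx])    # outputs at constraint points
    violation = np.maximum(0.0, y - hi) + np.maximum(0.0, lo - y)
    return log_p_w - tau * float(np.sum(violation ** 2))
```

Weight settings whose predictions violate the output constraint receive lower prior mass, so posterior inference is steered toward functions that respect the constraint.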
Modeling, Planning, and Learning for Soft Robots in Human-Centric, Contact-Rich Environments
Robots are increasingly moving from constrained, structured settings (e.g., assembly lines and warehouses) to less controlled environments (e.g., construction sites, hospitals, or our homes), where they can enhance human capabilities and assist in patient care or activities of daily living. In these unstructured settings, robots are required to share their workspace with — and understand — human collaborators, operate reliably under uncertainty, and impart precisely controlled forces on fragile objects without damaging them. Soft robots, in contrast to their rigid counterparts, can gently interact with the world despite failures or planning inaccuracies via passive compliance in their materials and/or structures. However, their inherent compliance, in combination with the fact that they commonly undergo non-linear and high-dimensional deformations, can make it hard to design and control them, so far hindering their widespread use in human-robot collaborative tasks. From using machine learning to infer human gestures from garment-integrated sensors, to the development of new computational tools for manufacturing, planning, and reinforcement learning for soft robots in contact-rich environments, this thesis explores how rigorous modeling and computational tools can advance the capabilities of soft robots to enable their effective and uninterrupted cooperation with humans. We first present the development of a sensorized sleeve and demonstrate the ability to detect hand gestures without encumbering the operator’s hand, highlighting the sleeve’s utility as a seamless human-robot interface. After this foray into sensing, the remainder of this thesis discusses various approaches to improve the capabilities of soft robot actuators.
We present two computational tools to facilitate the design of soft robots and their controllers at two different levels of abstraction: one suitable to accelerate and automate design iterations under consideration of detailed material deformation and manufacturing requirements; and one suitable for the exploration of system-level design choices in simulation. We demonstrate the utility of both of these tools through a number of design studies on soft robot hands and continuum arms. Driven by the need to generate contact-rich trajectories for these systems as they complete in-hand manipulation tasks or navigate clutter, we then introduce a novel framework for path planning that explicitly accounts for the effect of contact forces along the full length of tentacle-like soft manipulators. Finally, we present a benchmark and training paradigm that facilitate the development of high-level controllers for soft robots using reinforcement learning, and show how these tools enable soft robots to learn a diverse set of skills ranging from locomotion to in-hand manipulation. Altogether, this thesis presents wearable sensors that enable soft robots to understand an operator’s intent, and extends the capabilities of soft robots to reason about and reliably execute a complex series of actions in order to assist the operator in meeting their goals.
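A reinforcement learning benchmark of the kind described would expose a familiar reset/step training loop. The environment below is a stand-in with a toy reward; the class name, dynamics, and gym-style API are invented for illustration and do not reflect the thesis's actual benchmark:

```python
class SoftRobotEnv:
    """Stand-in for a soft-robot task such as locomotion or in-hand
    manipulation; a real benchmark would simulate the deformable body."""
    def reset(self):
        self.t = 0
        return [0.0]                        # initial observation

    def step(self, action):
        self.t += 1
        reward = -abs(action[0] - 0.5)      # toy objective: hold action at 0.5
        done = self.t >= 10                 # fixed episode length
        return [float(self.t)], reward, done, {}

env = SoftRobotEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = [0.5]                          # placeholder for a learned policy
    obs, reward, done, info = env.step(action)
    total += reward
print(total)  # 0.0
```

A learning algorithm would replace the placeholder policy with one updated from the observed rewards.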
FineControlNet: Fine-level Text Control for Image Generation with Spatially Aligned Text Control Injection
The recently introduced ControlNet can steer the text-driven image generation process with geometric input such as human 2D pose or edge features. While ControlNet provides control over the geometric form of the instances in the generated image, it lacks the capability to dictate the visual appearance of each instance. We present FineControlNet to provide fine control over each instance's appearance while maintaining the precise pose control capability. Specifically, we develop and demonstrate FineControlNet with geometric control via human pose images and appearance control via instance-level text prompts. The spatial alignment of instance-specific text prompts and 2D poses in latent space enables the fine control capabilities of FineControlNet. We evaluate the performance of FineControlNet with rigorous comparison against state-of-the-art pose-conditioned text-to-image diffusion models. FineControlNet achieves superior performance in generating images that follow the user-provided instance-specific text prompts and poses compared with existing methods. Project webpage: https://samsunglabs.github.io/FineControlNet-project-page
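The core idea, spatially aligning instance-level text prompts with pose regions, can be caricatured as composing a per-pixel conditioning map in which each instance's text embedding applies inside its pose mask and the global prompt applies elsewhere. The array shapes and hard-mask blending below are simplifying assumptions; FineControlNet performs this alignment in the diffusion model's latent space with soft attention-based mechanisms:

```python
import numpy as np

def spatially_align_embeddings(masks, instance_embeds, global_embed):
    """Build a per-pixel conditioning map of shape (H, W, D).

    masks: (N, H, W) binary pose masks, one per instance
    instance_embeds: (N, D) text embedding per instance
    global_embed: (D,) embedding of the global (scene-level) prompt
    """
    n, h, w = masks.shape
    cond = np.tile(global_embed.astype(float), (h, w, 1))  # default: global prompt
    for i in range(n):
        cond[masks[i] > 0] = instance_embeds[i]            # override inside mask i
    return cond
```

Each spatial location then conditions generation on the text describing the instance occupying it, which is what lets different people in one image receive different appearances.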