Search Results

4 results for "Multimodal user interfaces (Computer systems) Design and construction."
Haptic interface with multimodal tactile sensing and feedback for human–robot interaction
Novel sensing and actuation technologies have notably advanced haptic interfaces, paving the way for more immersive user experiences. We introduce a haptic system that transcends traditional pressure-based interfaces by delivering more comprehensive tactile sensations. The system pairs a robotic hand with a haptic glove, which together operate within wireless communication range. Each component is equipped with independent sensors and actuators, enabling real-time mirroring of the user's hand movements and effective transmission of tactile information. Notably, the proposed system provides multimodal feedback through both vibration motors and Peltier elements, ensuring a varied tactile experience that encompasses pressure and temperature sensations. The tactile feedback is carefully calibrated against experimental data, enhancing the system's reliability and the user experience. The Peltier element for temperature feedback allows users to safely experience temperatures similar to those detected by the robotic hand. Potential applications are wide-ranging, including operations in hazardous environments and medical interventions. By providing realistic tactile sensations, our haptic system aims to improve both the performance and safety of workers in such critical sectors, highlighting the great potential of advanced haptic technologies.
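To make the feedback loop the abstract describes more concrete, here is a minimal Python sketch of mapping sensed pressure and temperature from a robotic hand to glove-side actuator commands (vibration-motor duty plus a clamped Peltier setpoint). This is not the authors' code; every name, range, and calibration constant is a hypothetical placeholder.

```python
# Hypothetical sketch of hand-to-glove tactile feedback mapping.
# Constants and sensor ranges are assumptions, not values from the paper.
from dataclasses import dataclass


@dataclass
class TactileSample:
    pressure_kpa: float    # fingertip pressure sensed on the robotic hand
    temperature_c: float   # surface temperature sensed on the robotic hand


def clamp(x: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, x))


def vibration_duty(sample: TactileSample) -> float:
    """Map pressure to a vibration-motor PWM duty cycle in [0, 1]."""
    # Linear calibration over an assumed 0-50 kPa working range.
    return clamp(sample.pressure_kpa / 50.0, 0.0, 1.0)


def peltier_setpoint(sample: TactileSample) -> float:
    """Map sensed temperature to a safe Peltier target (deg C).

    The abstract notes users should *safely* experience temperatures
    similar to those at the robotic hand, so the setpoint is clamped
    to an assumed skin-safe band of 15-42 deg C.
    """
    return clamp(sample.temperature_c, 15.0, 42.0)


if __name__ == "__main__":
    sample = TactileSample(pressure_kpa=30.0, temperature_c=55.0)
    print(f"vibration duty:   {vibration_duty(sample):.2f}")    # 0.60
    print(f"Peltier setpoint: {peltier_setpoint(sample):.1f} C")  # clamped to 42.0
```

The clamp on the Peltier setpoint is the safety property the abstract emphasizes: the glove tracks the remote temperature only within a band the wearer can tolerate.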
HortiVQA-PP: Multitask Framework for Pest Segmentation and Visual Question Answering in Horticulture
A multimodal interactive system, HortiVQA-PP, is proposed for horticultural scenarios, with the aim of achieving precise identification of pests and their natural predators, modeling ecological co-occurrence relationships, and providing intelligent question-answering services tailored to agricultural users. The system integrates three core modules: semantic segmentation, pest–predator co-occurrence detection, and knowledge-enhanced visual question answering. A multimodal dataset comprising 30 pest categories and 10 predator categories has been constructed, encompassing annotated images and corresponding question–answer pairs. In the semantic segmentation task, HortiVQA-PP outperformed existing models across all five evaluation metrics, achieving a precision of 89.6%, recall of 85.2%, F1-score of 87.3%, mAP@50 of 82.4%, and IoU of 75.1%, representing an average improvement of approximately 4.1% over the Segment Anything model. For the pest–predator co-occurrence matching task, the model attained a multi-label accuracy of 83.5%, a reduced Hamming Loss of 0.063, and a macro-F1 score of 79.4%, significantly surpassing methods such as ASL and ML-GCN, thereby demonstrating robust structural modeling capability. In the visual question answering task, the incorporation of a horticulture-specific knowledge graph enhanced the model’s reasoning ability. The system achieved 48.7% in BLEU-4, 54.8% in ROUGE-L, 43.3% in METEOR, 36.9% in exact match (EM), and a GPT expert score of 4.5, outperforming mainstream models including BLIP-2, Flamingo, and MiniGPT-4 across all metrics. Experimental results indicate that HortiVQA-PP exhibits strong recognition and interaction capabilities in complex pest scenarios, offering a high-precision, interpretable, and widely applicable artificial intelligence solution for digital horticulture.
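The co-occurrence results above quote three multi-label metrics (multi-label accuracy, Hamming loss, macro-F1). The sketch below is a generic Python reimplementation of those metrics for illustration, not HortiVQA-PP's evaluation code; it assumes "multi-label accuracy" means exact-match (subset) accuracy, and the toy labels are invented.

```python
# Generic multi-label metrics over 0/1 label matrices (illustrative only).
from typing import List

Labels = List[List[int]]  # samples x labels, entries 0 or 1


def hamming_loss(y_true: Labels, y_pred: Labels) -> float:
    """Fraction of individual label slots predicted incorrectly (lower is better)."""
    errors = sum(t != p for yt, yp in zip(y_true, y_pred) for t, p in zip(yt, yp))
    return errors / (len(y_true) * len(y_true[0]))


def subset_accuracy(y_true: Labels, y_pred: Labels) -> float:
    """Exact-match accuracy: a sample counts only if every label matches."""
    return sum(yt == yp for yt, yp in zip(y_true, y_pred)) / len(y_true)


def macro_f1(y_true: Labels, y_pred: Labels) -> float:
    """Unweighted mean of per-label F1 scores."""
    n_labels = len(y_true[0])
    f1s = []
    for j in range(n_labels):
        tp = sum(yt[j] and yp[j] for yt, yp in zip(y_true, y_pred))
        fp = sum((not yt[j]) and yp[j] for yt, yp in zip(y_true, y_pred))
        fn = sum(yt[j] and (not yp[j]) for yt, yp in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / n_labels


if __name__ == "__main__":
    # Toy data: 3 samples, 4 labels (e.g., 2 pest + 2 predator classes).
    y_true = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]]
    y_pred = [[1, 0, 1, 0], [0, 1, 1, 1], [1, 0, 0, 0]]
    print(f"Hamming loss:    {hamming_loss(y_true, y_pred):.3f}")   # 0.167
    print(f"Subset accuracy: {subset_accuracy(y_true, y_pred):.3f}")  # 0.333
    print(f"Macro-F1:        {macro_f1(y_true, y_pred):.3f}")       # 0.833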
A deep learning based multimodal interaction system for bed ridden and immobile hospital admitted patients: design, development and evaluation
Background: Hospital cabins are part and parcel of the healthcare system, and most patients admitted to them are bedridden and immobile. Although various systems exist to aid such patients, most focus on specific tasks such as emergency calls or health monitoring while ignoring the patients' physical limitations. The patient interaction systems that do exist typically offer only a single modality, such as touch, hand gesture, or voice, which may not be usable by bedridden and immobile patients.
Methods: We first reviewed the existing literature on healthcare and interaction systems for bedridden and immobile patients. We then conducted a requirements elicitation study through semi-structured interviews and established design goals to address the identified requirements. Based on these goals, and using computer vision and deep learning technologies, we designed and developed a hospital cabin control system with multimodal interaction for hospital-admitted, bedridden, and immobile patients. Finally, we evaluated the system in an experiment with 12 hospital-admitted patients, measuring its effectiveness, usability, and efficiency.
Results: First, a set of user requirements was identified for hospital-admitted patients and healthcare practitioners. Second, a hospital cabin control system was designed and developed that supports three interaction modes for bedridden and immobile patients: (a) hand gesture based interaction, in which the hand moves a cursor and a gesture triggers a click; (b) nose-and-teeth based interaction, in which the nose moves the cursor and showing the teeth triggers a click; and (c) voice based interaction, in which specific voice commands execute tasks. Finally, the evaluation showed that the system is efficient, effective, and usable for the target users, with a 100% task success rate and reasonable numbers of attempts and task completion times.
Conclusion: Deep learning was incorporated into the resulting system to enable multimodal interaction and enhance accessibility. The developed system, together with its evaluation results and the identified requirements, offers a promising solution to a prevailing gap in the healthcare sector.
Trial Registration: Not applicable.
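As a rough illustration of the cursor-control idea in this abstract (a tracked facial or hand landmark drives the cursor, while a separate detector event such as visible teeth triggers the click), here is a minimal Python sketch under stated assumptions: the landmark tracker and click detector are stubbed out, since in the paper they come from computer vision and deep learning models, and all class names and constants are hypothetical.

```python
# Hypothetical landmark-to-cursor mapping with jitter smoothing.
# The upstream tracker (nose/hand landmark) and click detector (teeth/gesture)
# are assumed to exist and are simulated below.
from typing import Tuple


class LandmarkCursor:
    def __init__(self, screen_w: int, screen_h: int, smoothing: float = 0.8):
        self.w, self.h = screen_w, screen_h
        self.alpha = smoothing   # exponential-smoothing factor in [0, 1)
        self.x = self.y = None   # last smoothed cursor position

    def update(self, lx: float, ly: float) -> Tuple[int, int]:
        """Map a normalized landmark position (0-1) to smoothed pixel coords."""
        tx, ty = lx * self.w, ly * self.h
        if self.x is None:
            self.x, self.y = tx, ty
        else:
            # Smoothing suppresses tracking jitter, which matters for
            # patients capable of only small, tremulous movements.
            self.x = self.alpha * self.x + (1 - self.alpha) * tx
            self.y = self.alpha * self.y + (1 - self.alpha) * ty
        return int(self.x), int(self.y)


if __name__ == "__main__":
    cursor = LandmarkCursor(1920, 1080)
    # Simulated noisy nose-landmark track drifting to the right.
    for lx, ly in [(0.50, 0.40), (0.52, 0.41), (0.55, 0.39), (0.58, 0.40)]:
        print(cursor.update(lx, ly))
```

The same mapping serves both the hand gesture and nose-and-teeth modes; only the landmark source and the click event differ, which is presumably what lets one control system expose several interchangeable modalities.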