Catalogue Search | MBRL
280 results for "computer-assisted collaboration engineering"
Facilitator-in-a-Box: Process Support Applications to Help Practitioners Realize the Potential of Collaboration Technology
by Lukosch, Stephan; Albrecht, Conan C.; de Vreede, Gert-Jan
in Collaboration; collaboration engineering; collaboration support system
2013
The potential benefits of collaboration technologies are typically realized only in groups led by collaboration experts. This raises the facilitator-in-the-box challenge: Can collaboration expertise be packaged with collaboration technology in a form that nonexperts can reuse with no training on either tools or techniques? We address that challenge with process support applications (PSAs). We describe a collaboration support system (CSS) that combines a computer-assisted collaboration engineering platform for creating PSAs with a process support system runtime platform for executing PSAs. We show that the CSS meets its design goals: (1) to reduce development cycles for collaboration systems, (2) to allow nonprogrammers to design and develop PSAs, and (3) to package enough expertise in the tools that nonexperts could execute a well-designed collaborative work process without training.
Journal Article
Toward a new generation of smart skins
2019
Rapid advances in soft electronics, microfabrication technologies, miniaturization and electronic skins are facilitating the development of wearable sensor devices that are highly conformable and intimately associated with human skin. These devices—referred to as ‘smart skins’—offer new opportunities in the research study of human biology, in physiological tracking for fitness and wellness applications, and in the examination and treatment of medical conditions. Over the past 12 months, electronic skins have been developed that are self-healing, intrinsically stretchable, designed into an artificial afferent nerve, and even self-powered. Greater collaboration between engineers, biologists, informaticians and clinicians will be required for smart skins to realize their full potential and attain wide adoption in a diverse range of real-world settings.
Journal Article
Identifying Triple-Negative Breast Cancer Using Background Parenchymal Enhancement Heterogeneity on Dynamic Contrast-Enhanced MRI: A Pilot Radiomics Study
2015
To determine the added discriminative value of detailed quantitative characterization of background parenchymal enhancement, in addition to the tumor itself, on dynamic contrast-enhanced (DCE) MRI at 3.0 Tesla in identifying "triple-negative" breast cancers.
In this Institutional Review Board-approved retrospective study, DCE-MRI scans of 84 women presenting with 88 invasive carcinomas were evaluated by a radiologist and analyzed using quantitative computer-aided techniques. Each tumor and its surrounding parenchyma were segmented semi-automatically in 3-D. A total of 85 imaging features were extracted from the two regions, including morphologic, densitometric, and statistical texture measures of enhancement. A small subset of optimal features was selected using an efficient sequential forward floating search algorithm. To distinguish triple-negative cancers from other subtypes, we built predictive models based on support vector machines. Their classification performance was assessed with the area under the receiver operating characteristic curve (AUC) using cross-validation.
Imaging features based on the tumor region achieved an AUC of 0.782 in differentiating triple-negative cancers from others, in line with the current state of the art. When background parenchymal enhancement features were included, the AUC increased significantly to 0.878 (p<0.01). Similar improvements were seen in nearly all subtype classification tasks undertaken. Notably, amongst the most discriminating features for predicting triple-negative cancers were textures of background parenchymal enhancement.
Considering the tumor as well as its surrounding parenchyma on DCE-MRI for radiomic image phenotyping provides useful information for identifying triple-negative breast cancers. Heterogeneity of background parenchymal enhancement, characterized by quantitative texture features on DCE-MRI, adds value to such differentiation models as they are strongly associated with the triple-negative subtype. Prospective validation studies are warranted to confirm these findings and determine potential implications.
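As a rough illustration of the classification pipeline this abstract describes (feature selection, a support vector machine, and cross-validated AUC), here is a minimal scikit-learn sketch on synthetic data. The 85-feature, 88-lesion shapes mirror the study, but the data is random, plain forward selection stands in for the paper's floating variant, and every parameter choice is an assumption, not the authors' configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 85 imaging features over 88 lesions.
X, y = make_classification(n_samples=88, n_features=85, n_informative=8,
                           random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Plain forward selection; the paper uses a floating (SFFS) variant.
selector = SequentialFeatureSelector(svm, n_features_to_select=4,
                                     direction="forward", cv=3,
                                     scoring="roc_auc")
selector.fit(X, y)
X_sel = selector.transform(X)

# Cross-validated AUC of the SVM on the selected feature subset.
auc = cross_val_score(svm, X_sel, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.3f}")
```

On the real radiomic features, the background-parenchyma columns would simply be appended to the tumor columns before selection, which is the comparison the study performs.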
Journal Article
Augmented Reality (AR) for Surgical Robotic and Autonomous Systems: State of the Art, Challenges, and Solutions
by Shafiee, Mahmood; Seetohul, Jenna; Sirlantzis, Konstantinos
in Algorithms; Augmented Reality; augmented reality (AR)
2023
Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision, as well as on access to minimally invasive surgeries. This paper provides a systematic review of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human–robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
Journal Article
A virtual reality tool for training in global engineering collaboration
by Ching-Mei Tseng; Wu, Feng; Wu, Tzong-Hann
in Building design; Building management systems; Classrooms
2019
Global collaboration is the major trend in the architecture, engineering, and construction industry; training in global engineering collaboration is therefore in high demand in engineering education. One approach to this training is to expose students to sufficient experience, such as having them participate in a global project-based course. To do this, the authors participated in and co-designed a global project-based course, called Sky Classroom, from 2014 to 2016. That course, which aimed to teach global engineering collaboration skills, required international students to collaborate on the design of buildings. During the course, we identified three issues in the existing communication platform: low communicability, passive problem finding, and poor spatial cognition. Since the communication platform is the key factor in successful collaboration, we designed and implemented an appropriate platform, the virtual building information modeling (BIM) reviewer (VBR), to address these issues. VBR is an avatar-based communication platform that allows users to enter the BIM model and find problems from their individual perspectives. It was developed and continuously improved based on observations of students' global collaboration behaviors and feedback in Sky Classroom. VBR has undergone two development phases with two virtual reality types, desktop-based and immersive. The desktop-based VBR addresses low communicability and passive problem finding, while the immersive VBR addresses poor spatial cognition; together, applying the VBR in Sky Classroom resolves the issues in the existing communication platform and assists students in collaboration.
Journal Article
Designing a course model for distance-based online bioinformatics training in Africa: The H3ABioNet experience
2017
Africa is not unique in its need for basic bioinformatics training for individuals from a diverse range of academic backgrounds. However, particular logistical challenges in Africa, most notably access to bioinformatics expertise and internet stability, must be addressed in order to meet this need on the continent. H3ABioNet (www.h3abionet.org), the Pan African Bioinformatics Network for H3Africa, has therefore developed an innovative, free-of-charge "Introduction to Bioinformatics" course, taking these challenges into account as part of its educational efforts to provide on-site training and develop local expertise inside its network. A multiple-delivery-mode learning model was selected for this 3-month course in order to increase access to (mostly) African, expert bioinformatics trainers. The content of the course was developed to include a range of fundamental bioinformatics topics at the introductory level. For the first iteration of the course (2016), classrooms with a total of 364 enrolled participants were hosted at 20 institutions across 10 African countries. To ensure that classroom success did not depend on stable internet, trainers pre-recorded their lectures, and classrooms downloaded and watched these locally during biweekly contact sessions. The trainers were available via video conferencing to take questions during contact sessions, as well as via online "question and discussion" forums outside of contact session time. This learning model, developed for a resource-limited setting, could easily be adapted to other settings.
Journal Article
Individual dairy cow identification based on lightweight convolutional neural network
2021
In actual farms, individual livestock identification technology relies on large models with slow recognition speeds, which seriously restricts its practical application. In this study, we use deep learning to recognize the features of individual cows. AlexNet is used as a skeleton network for a lightweight convolutional neural network that can recognize individual cows in images with complex backgrounds. The model improves on AlexNet's multiple multiscale convolutions using the short-circuit-connected BasicBlock to fit the desired values and avoid gradient disappearance or explosion. An improved inception module and attention mechanism are added to extract features at multiple scales and enhance the detection of feature points. In experiments, side-view images of 13 cows were collected. The proposed method achieved 97.95% accuracy in cow identification with a single training time of only 6 s, one-sixth that of the original AlexNet. To verify the validity of the model, the dataset and experimental parameters were kept constant and the results were compared with those of VGG16, ResNet50, MobileNet V2, and GoogLeNet. The proposed model ensured high accuracy while having the smallest parameter size, 6.51 MB, roughly 1.3 times smaller than that of MobileNet V2, a network famous for its light weight. This method overcomes the defects of traditional methods, which require artificial extraction of features, are often not robust enough, have slow recognition speeds, and require large numbers of parameters in the recognition model. The proposed method works with images with complex backgrounds, making it suitable for actual farming environments, and provides a reference for the identification of individual cows in such images.
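The short-circuit connection mentioned in this abstract is the residual idea: the block's input is added back to its output, so an identity path bypasses the weights and keeps gradients from vanishing or exploding. A toy numpy sketch of that idea, with dense layers standing in for the paper's convolutions and all shapes hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def basic_block(x, w1, w2):
    """Residual block: two transforms plus an identity shortcut.

    The shortcut (+ x) gives gradients a direct path around the
    weights; dense layers stand in for the convolutions here.
    """
    h = np.maximum(0.0, x @ w1)          # transform + ReLU
    return np.maximum(0.0, h @ w2 + x)   # add shortcut, then ReLU

d = 16                                   # hypothetical feature width
x = rng.standard_normal((4, d))          # batch of 4 feature vectors
w1 = rng.standard_normal((d, d)) * 0.1
w2 = rng.standard_normal((d, d)) * 0.1

y = basic_block(x, w1, w2)

# With zero weights the block reduces to ReLU(x): the identity path alone.
y_id = basic_block(x, np.zeros((d, d)), np.zeros((d, d)))
assert np.allclose(y_id, np.maximum(0.0, x))
```

Even with the learned transforms zeroed out, the block passes its input through, which is precisely why stacking such blocks stays trainable where plain stacked convolutions would not.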
Journal Article
Prototype Learning for Medical Time Series Classification via Human–Machine Collaboration
2024
Deep neural networks must address the dual challenge of delivering high-accuracy predictions and providing user-friendly explanations. While deep models are widely used in the field of time series modeling, deciphering the core principles that govern the models’ outputs remains a significant challenge. This is crucial for fostering the development of trusted models and facilitating domain expert validation, thereby empowering users and domain experts to utilize them confidently in high-risk decision-making contexts (e.g., decision-support systems in healthcare). In this work, we put forward a deep prototype learning model that supports interpretable and manipulable modeling and classification of medical time series (i.e., ECG signal). Specifically, we first optimize the representation of single heartbeat data by employing a bidirectional long short-term memory and attention mechanism, and then construct prototypes during the training phase. The final classification outcomes (i.e., normal sinus rhythm, atrial fibrillation, and other rhythm) are determined by comparing the input with the obtained prototypes. Moreover, the proposed model presents a human–machine collaboration mechanism, allowing domain experts to refine the prototypes by integrating their expertise to further enhance the model’s performance (contrary to the human-in-the-loop paradigm, where humans primarily act as supervisors or correctors, intervening when required, our approach focuses on a human–machine collaboration, wherein both parties engage as partners, enabling more fluid and integrated interactions). 
In binary classification, distinguishing normal sinus rhythm from atrial fibrillation, the proposed model performs marginally below certain established baselines such as convolutional neural networks (CNNs) and bidirectional long short-term memory with attention (Bi-LSTMAttns), but clearly surpasses other contemporary state-of-the-art prototype baseline models. In the three-class task covering normal sinus rhythm, atrial fibrillation, and other rhythm, it performs significantly better than these prototype baselines, reaching a prediction accuracy of 0.8414 with macro precision, recall, and F1-score of 0.8449, 0.8224, and 0.8235, respectively, achieving both high classification accuracy and good interpretability.
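The core decision rule of a prototype model, labeling an input by its nearest learned prototype, can be sketched in a few lines of numpy. This is an illustration only: the paper's prototypes live in a Bi-LSTM-with-attention embedding space, while here raw 2-D vectors and Euclidean distance stand in, and the human-machine collaboration step is reduced to an expert moving a prototype by hand:

```python
import numpy as np

def classify_by_prototype(x, prototypes, labels):
    """Assign x the label of its nearest prototype (Euclidean distance)."""
    d = np.linalg.norm(prototypes - x, axis=1)   # distance to each prototype
    return labels[int(np.argmin(d))]

# Hypothetical 2-D "embeddings" of three rhythm prototypes.
prototypes = np.array([[0.0, 0.0],    # normal sinus rhythm
                       [5.0, 5.0],    # atrial fibrillation
                       [0.0, 5.0]])   # other rhythm
labels = np.array(["NSR", "AF", "Other"])

print(classify_by_prototype(np.array([0.4, 0.2]), prototypes, labels))  # NSR

# Expert refinement: a domain expert nudges a prototype, and the
# decision boundary moves with it; no retraining of the rule needed.
prototypes[0] = np.array([1.0, 0.5])
```

Because the classification is literally "which prototype is closest", an expert can inspect and edit the prototypes directly, which is what makes the model both interpretable and manipulable.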
Journal Article
Innovative practice of sustainable digital signal processing education
by Bai, Yongqiang; Xie, Zhibo; Wang, Yuer
in Academic achievement; Alternative energy sources; Biology and Life Sciences
2026
The Digital Signal Processing (DSP) course serves as a core curriculum for electronic information engineering majors. Traditional DSP instruction predominantly employs a teacher-centered lecture format supplemented by simulation experiments, which fails to engage student interest or foster proactive learning, ultimately resulting in suboptimal educational outcomes. This study integrates the BOPPPS instructional framework with Collaborative Inquiry-Based Learning (CIBL) models while designing simulation-hardware experiments that seamlessly bridge virtual and hardware-based environments. By incorporating Sustainable Development Goals (SDGs)-oriented engineering case studies into Project-Based Learning (PBL) content, this approach effectively stimulates student motivation, cultivates practical skills, and enhances problem-solving capabilities. The effectiveness of these pedagogical innovations is evaluated through academic performance analysis, student surveys, and focused interviews, enabling continuous optimization of teaching plans and content. Research findings demonstrate that integrating SDG-related engineering projects significantly boosts student motivation and professional confidence. Furthermore, the BOPPPS model combined with CIBL effectively improves learning efficiency and specialized competencies among students.
Journal Article
Dual-Manifold Contrastive Learning for Robust and Real-Time EEG Motor Decoding
2026
Brain–computer interfaces (BCIs) have great potential for consumer electronics, as they enable the decoding of brain activity to control external devices and assist human–computer interaction. However, current decoding methods for BCIs face several challenges, such as low accuracy, poor stability under electrode shift, and slow processing for real-time use. In this paper, we propose a hybrid decoding framework designed to address the challenges of current EEG decoding methods. Our method combines manifold learning with contrastive learning. The core of our method lies in a dual-manifold model that uses non-negative matrix factorization (NMF) and a contrastive manifold learning framework to extract clear and useful features from brain signals. To improve decoding stability, we introduce a joint training strategy that enhances feature learning. Furthermore, the system is optimized for real-time interaction, reducing the system latency to 100 ms. We collect EEG signals from 15 subjects performing motor execution tasks and 10 subjects performing motor imagery tasks to construct a motor EEG dataset. On this dataset, the proposed method achieves superior decoding performance, reaching F1-scores of 0.7382 for the motor imagery tasks and 0.8361 for the motor execution tasks. Furthermore, the method maintains robustness even with reduced electrode counts and altered spatial distributions, highlighting its potential as a decoding solution for reliable and portable BCI systems.
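Non-negative matrix factorization, one ingredient of the dual-manifold model in this abstract, can be sketched with the classical Lee–Seung multiplicative updates. A toy numpy version on a synthetic non-negative matrix; the update rules are standard, but the data, dimensions, and iteration count are illustrative, not the paper's:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Factor V ≈ W @ H with non-negative factors via Lee–Seung
    multiplicative updates (Frobenius-norm objective)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1          # positive init keeps updates valid
    H = rng.random((k, m)) + 0.1
    eps = 1e-9                            # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical non-negative "channels x time" matrix with rank-3 structure,
# standing in for the EEG feature matrices the paper factorizes.
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 50))
W, H = nmf(V, k=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")
```

The multiplicative form of the updates preserves non-negativity automatically, so the factors stay interpretable as additive parts, which is why NMF is a common choice for extracting components from physiological signals.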
Journal Article