139 result(s) for "Symeonidis, L"
Defining a Domain-Specific Language for Behavior Verification of Cyber–Physical Applications
A common problem in the development of Internet-of-Things (IoT) and cyber–physical system (CPS) applications is the complexity of these domains, due to their hybrid and distributed nature at multiple layers (hardware, network, communication, frameworks, etc.). This complexity often leads to implementation errors, some of which result in undesired states of the application and/or the system. The current work focuses on low-code development of behavior verification processes for IoT and CPS applications, in order to raise productivity, minimize risks due to errors, and enable a wider range of end-users to create and verify applications for state-of-the-art domains, such as the smart home and smart industry. Model-Driven Development (MDD) approaches are employed for the implementation of a Domain-Specific Language (DSL) that enables the evaluation of IoT and CPS applications, among others. Through comparative scenario-based analysis and 43 detailed use cases, we illustrate how the proposed methodology automates the development of behavior verification processes, allowing domain experts and end-users to focus on the real problem, the verification definition, instead of technical and technological intricacies.
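The abstract does not reproduce the paper's DSL syntax, but the underlying idea of behavior verification can be sketched in plain Python. The snippet below is a purely hypothetical stand-in, not the authors' language: the `Condition` class and the telemetry values are invented for illustration, and a verified condition is simply a named predicate replayed against a message stream.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Hypothetical stand-in for a DSL-defined verification condition, e.g.
# "the boiler temperature must never exceed 90 degrees while the valve is closed".
@dataclass
class Condition:
    name: str
    predicate: Callable[[dict], bool]  # True -> the observed state is acceptable

def verify(messages: Iterable[dict], conditions: list[Condition]) -> list[str]:
    """Replay telemetry and report every condition violation."""
    violations = []
    for i, msg in enumerate(messages):
        for cond in conditions:
            if not cond.predicate(msg):
                violations.append(f"step {i}: violated '{cond.name}': {msg}")
    return violations

# Invented telemetry; a real system would consume an MQTT/AMQP stream.
telemetry = [
    {"temp": 72.0, "valve": "open"},
    {"temp": 93.5, "valve": "closed"},  # should trigger a violation
]
safe_temp = Condition(
    "temp<=90 while valve closed",
    lambda m: not (m["valve"] == "closed" and m["temp"] > 90),
)
print(verify(telemetry, [safe_temp]))
```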
LocSys: A Low-Code Paradigm for the Development of Cyber-Physical Applications
Application development for the cyber-physical systems (CPS) domain is a complex procedure, since it requires not only a high level of expertise but also deep knowledge of heterogeneous domains. On the other hand, modern low-code solutions and domain-specific languages (DSLs) aim to offload domain complexity by expressing models at a higher level of abstraction. In this work we propose an approach based on multiple high-level DSLs as the vehicle to relieve developers of the intricacies of the CPS domain, enabling them to easily design and develop different layers (e.g., device, system, or application layers) and aspects (e.g., automation processes, observation, or monitoring dashboards) of a CPS. The materialized outcome of our approach is the LocSys platform, which allows the integration of DSLs, the development and management of models, and the development of pipelines of transformations between DSL models in a uniform platform, covering different aspects of complex domains. The efficacy of this approach was evaluated in a workshop with more than 80 participants of varying levels of expertise and experience in the field, which documented usability and acceptance using System Usability Scale (SUS) measurements. Preliminary findings suggest that the multi-DSL approach is highly usable (average SUS score 80.65, A− grade) and has been well received by non-domain experts. These results are promising, as they indicate that the LocSys platform can be successfully employed to build smart environments with embedded automation processes and monitoring dashboards.
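For context, the SUS score cited above (80.65) comes from the standard, well-documented formula over ten 1–5 Likert items. A minimal sketch of that formula, with a made-up response sheet:

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS: odd-numbered items contribute (x - 1), even-numbered
    items (5 - x), and the summed contributions are scaled by 2.5 to 0-100."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,... sit at even indices
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Made-up response sheet from one participant.
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # -> 87.5
```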
A Multilayer Architecture towards the Development and Distribution of Multimodal Interface Applications on the Edge
Today, Smart Assistants (SAs) are supported by significantly improved Natural Language Processing (NLP) and Natural Language Understanding (NLU) engines as well as AI-enabled decision support, enabling efficient information communication, easy appliance/device control, and seamless access to entertainment services, among others. In fact, an increasing number of modern households are being equipped with SAs, which promise to enhance user experience in the context of smart environments through verbal interaction. Currently, the SA market is dominated by products manufactured by technology giants that provide well-designed off-the-shelf solutions. However, their simple setup and ease of use come with trade-offs, as these SAs abide by proprietary and/or closed-source architectures and offer limited functionality. Their enforced vendor lock-in does not give (power) users the ability to build custom conversational applications through their SAs. On the other hand, employing an open-source approach for building and deploying an SA (which comes with significant overhead) necessitates expertise in multiple domains and fluency in the multimodal technologies used to build the envisioned applications. In this context, this paper proposes a methodology for developing and deploying conversational applications on the edge, on top of an open-source software and hardware infrastructure, via a multilayer architecture that hides low-level complexity and reduces learning overhead. The proposed approach facilitates the rapid development of applications by third-party developers, thereby enabling the establishment of a marketplace of customized applications aimed at the smart assisted living domain, among others. The accompanying framework supports application developers, device owners, and ecosystem administrators in building, testing, uploading, and deploying applications, remotely controlling devices, and monitoring device performance. A demonstration of this methodology is presented and discussed, focusing on health and assisted living applications for the elderly.
A Directory of Datasets for Mining Software Repositories
The amount of software engineering data is constantly growing, as more and more developers employ online services to store their code, keep track of bugs, or even discuss issues. The data residing in these services can be mined to address different research challenges; therefore, certain initiatives have been established to encourage researchers to share the datasets they collect. In this work, we investigate the effect of such an initiative; we create a directory that includes the papers and the corresponding datasets of the data track of the Mining Software Repositories (MSR) conference. Specifically, our directory includes metadata and citation information for the papers of all data tracks throughout the last twelve years. We also annotate the datasets according to their data source and further assess their compliance with the FAIR principles. Using our directory, researchers can find useful datasets for their research, or even design methodologies for assessing their quality, especially in the software engineering domain. Moreover, the directory can be used for analyzing the citations of data papers, especially with regard to different data categories, as well as for examining their FAIRness scores throughout the years, along with the effect on the usage/citation of the datasets.
Autonomous Full 3D Coverage Using an Aerial Vehicle, Performing Localization, Path Planning, and Navigation towards Indoors Inventorying for the Logistics Domain
In recent years, the use of unmanned aerial vehicles (UAVs) in a variety of applications has evolved rapidly. Their use in indoor environments requires precise perception of the surrounding area, immediate response to its changes, and, consequently, robust position estimation. This paper provides an implementation of navigation algorithms for solving the problem of fast, reliable, and low-cost inventorying in the logistics industry. Drone localization is achieved with a particle filter algorithm that uses an array of distance sensors and an inertial measurement unit (IMU). Navigation is based on a proportional–integral–derivative (PID) position controller that ensures an obstacle-free path within the known 3D map. For full 3D coverage, target points are first extracted and then ordered into the sequence that yields optimal coverage. Finally, a series of experiments is carried out to examine the robustness of the positioning system under different motion patterns and velocities. In addition, various ways of traversing the environment are examined using different configurations of the sensor that performs the area coverage.
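As an illustration of one navigation building block named above, here is a minimal, textbook PID position controller in Python. The gains, the time step, and the toy double-integrator plant are invented for the example; they are not the paper's tuned configuration.

```python
class PID:
    """Textbook proportional-integral-derivative controller (one axis)."""
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy 1D plant: drive altitude toward z = 1.0 m (gains are illustrative).
pid, z, v, dt = PID(kp=2.0, ki=0.1, kd=0.8, dt=0.05), 0.0, 0.0, 0.05
for _ in range(200):
    v += pid.step(1.0, z) * dt  # controller output treated as acceleration
    z += v * dt
print(round(z, 3))  # settles close to 1.0
```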
A Dynamic Hypergraph-Based Encoder–Decoder Risk Model for Longitudinal Predictions of Knee Osteoarthritis Progression
Knee osteoarthritis (KOA) is one of the most prevalent chronic musculoskeletal disorders, causing pain and functional impairment. Accurate predictions of KOA evolution are important for early interventions and preventive treatment planning. In this paper, we propose a novel dynamic hypergraph-based risk model (DyHRM) which integrates the encoder–decoder (ED) architecture with hypergraph convolutional neural networks (HGCNs). The risk model is used to generate longitudinal forecasts of KOA incidence and progression based on the knee's evolution over a historical stage. DyHRM comprises two main parts, namely the dynamic hypergraph gated recurrent unit (DyHGRU) and the multi-view HGCN (MHGCN) networks. The ED-based DyHGRU follows the sequence-to-sequence learning approach. The encoder first transforms a knee sequence at the historical stage into a sequence of hidden states in a latent space. The Attention-based Context Transformer (ACT) is designed to identify important temporal trends in the encoder's state sequence, while the decoder is used to generate sequences of KOA progression at the prediction stage. MHGCN conducts multi-view spatial HGCN convolutions of the original knee data at each step of the historical stage. The aim is to acquire more comprehensive feature representations of nodes by exploiting different hyperedges (views), including the global shape descriptors of the cartilage volume, the injury history, and the demographic risk factors. In addition to DyHRM, we also propose the HyGraphSMOTE method to confront the inherent class imbalance in KOA datasets between knee progressors (minority) and non-progressors (majority). Embedded in MHGCN, the HyGraphSMOTE algorithm tackles data balancing in a systematic way, generating new synthetic node sequences of the minority class via interpolation. Extensive experiments are conducted using the Osteoarthritis Initiative (OAI) cohort to validate the accuracy of the longitudinal predictions acquired by DyHRM under different definition criteria of KOA incidence and progression. The basic finding of the experiments is that the larger the historical depth, the higher the accuracy of the forecasts obtained. Comparative results demonstrate the efficacy of DyHRM against other state-of-the-art methods in this field.
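To make the interpolation idea behind SMOTE-style balancing concrete, here is a sketch of the classic feature-space version of the technique that HyGraphSMOTE adapts to hypergraph node sequences. This is the generic algorithm, not the authors' variant; the sample data are invented.

```python
import numpy as np

def smote_like(minority: np.ndarray, n_new: int, k: int = 5, seed: int = 0) -> np.ndarray:
    """Generate synthetic minority samples by interpolating each chosen
    sample toward one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        # Distances from sample i to every other minority sample.
        d = np.linalg.norm(minority - minority[i], axis=1)
        nn = np.argsort(d)[1 : k + 1]  # skip index 0, which is the sample itself
        j = rng.choice(nn)
        lam = rng.random()             # interpolation factor in [0, 1)
        synthetic.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(synthetic)

X_min = np.random.default_rng(1).normal(size=(20, 4))  # made-up minority features
print(smote_like(X_min, n_new=10).shape)  # (10, 4)
```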
AML4S: An AutoML Pipeline for Data Streams
The data landscape has changed, as more and more information is produced in the form of continuous data streams instead of stationary datasets. In this context, several online machine learning techniques have been proposed with the aim of automatically adapting to changes in data distributions, known as drifts. Though effective in certain scenarios, contemporary techniques do not generalize well to different types of data, while they also require manual parameter tuning, thus significantly hindering their applicability. Moreover, current methods do not thoroughly address drifts, as they mostly focus on concept drifts (distribution shifts on the target variable) and not on data drifts (changes in feature distributions). To confront these challenges, in this paper, we propose an AutoML Pipeline for Streams (AML4S), which automates the choice of preprocessing techniques, the choice of machine learning models, and the tuning of hyperparameters. Our pipeline further includes a drift detection mechanism that identifies different types of drifts, therefore continuously adapting the underlying models. We assess our pipeline on several real and synthetic data streams, including a data stream that we crafted to focus on data drifts. Our results indicate that AML4S produces robust pipelines and outperforms existing online learning or AutoML algorithms.
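To give a flavour of what stream drift detection involves, below is a deliberately simple sliding-window mean-shift heuristic. It is not the AML4S detector, whose internals the abstract does not specify; the window size, threshold, and synthetic stream are all invented for illustration.

```python
from collections import deque
import random
import statistics

def detect_mean_shift(stream, window: int = 50, threshold: float = 4.0):
    """Yield indices where a value deviates from the recent window mean by
    more than `threshold` standard deviations (an illustrative heuristic)."""
    recent = deque(maxlen=window)
    for t, x in enumerate(stream):
        if len(recent) == window:
            mu = statistics.fmean(recent)
            sigma = statistics.pstdev(recent) or 1e-9
            if abs(x - mu) > threshold * sigma:
                yield t  # index at which a drift is suspected
        recent.append(x)

# Synthetic stream whose mean jumps from 0 to 5 halfway through.
random.seed(0)
stream = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(5, 1) for _ in range(200)]
print(next(detect_mean_shift(stream)))  # first suspected drift index, around 200
```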
A Novel Approach Based on Hypergraph Convolutional Neural Networks for Cartilage Shape Description and Longitudinal Prediction of Knee Osteoarthritis Progression
Knee osteoarthritis (KOA) is a highly prevalent musculoskeletal joint disorder affecting a significant portion of the population worldwide. Accurate predictions of KOA progression can assist clinicians in drawing preventive strategies for patients. In this paper, we present an integrated approach based on hypergraph convolutional networks (HGCNs) for longitudinal predictions of KOA grades and progression from MRI images. We propose two novel models, namely, C_Shape.Net and the predictor network. C_Shape.Net operates on a hypergraph of volumetric nodes, especially designed to represent the surface and volumetric features of the cartilage. It encompasses deep HGCN convolutions, graph pooling, and readout operations in a hierarchy of layers, providing, at the output, expressive 3D shape descriptors of the cartilage volume. The predictor is a spatio-temporal HGCN network (ST_HGCN), following the sequence-to-sequence learning scheme. Concretely, it transforms sequences of knee representations at the historical stage into sequences of KOA predictions at the prediction stage. The predictor includes spatial HGCN convolutions, attention-based temporal fusion of feature embeddings at multiple layers, and a transformer module that generates longitudinal predictions at follow-up times. We present comprehensive experiments on the Osteoarthritis Initiative (OAI) cohort to evaluate the performance of our methodology for various tasks, including node classification, longitudinal KL grading, and progression. The basic finding of the experiments is that the larger the depth of the historical stage, the higher the accuracy of the obtained predictions in all tasks. For the maximum historical depth of four years, our method yielded an average balanced accuracy (BA) of 85.94% in KOA grading, and accuracies of 91.89% (+1), 88.11% (+2), 84.35% (+3), and 79.41% (+4) for the four consecutive follow-up visits. Under the same setting, we also achieved an average Area Under the Curve (AUC) of 0.94 for the prediction of progression incidence, and follow-up AUC values of 0.81 (+1), 0.77 (+2), 0.73 (+3), and 0.68 (+4), respectively.
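For readers unfamiliar with hypergraph convolutions, here is a minimal numpy sketch of the commonly used HGNN layer, X' = sigma(Dv^(-1/2) H W De^(-1) H^T Dv^(-1/2) X Theta), on which models like these typically build. It shows only the generic operator, not the paper's C_Shape.Net or ST_HGCN; the incidence matrix and features are toy values.

```python
import numpy as np

def hgnn_layer(X, H, w, Theta):
    """One hypergraph convolution: relu(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta).
    X: (n_nodes, f_in) features, H: (n_nodes, n_edges) incidence matrix,
    w: (n_edges,) hyperedge weights, Theta: (f_in, f_out) learnable weights."""
    W = np.diag(w)
    Dv = np.diag((H @ w) ** -0.5)        # weighted node degrees, inverse sqrt
    De = np.diag(H.sum(axis=0) ** -1.0)  # hyperedge degrees, inverse
    return np.maximum(Dv @ H @ W @ De @ H.T @ Dv @ X @ Theta, 0.0)

rng = np.random.default_rng(0)
H = np.array([[1, 0], [1, 1], [0, 1], [1, 0]], dtype=float)  # 4 nodes, 2 hyperedges
X = rng.normal(size=(4, 3))
print(hgnn_layer(X, H, w=np.ones(2), Theta=rng.normal(size=(3, 8))).shape)  # (4, 8)
```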
Subjective Cognitive Impairment Can Be Detected from the Decline of Complex Cognition: Findings from the Examination of Remedes 4 Alzheimer’s (R4Alz) Structural Validity
R4Alz is utilized for the early detection of minor neurocognitive disorders. It was designed to assess three main dimensions of cognitive-control abilities: working-memory capacity, attentional control, and executive functioning. Objectives: To reveal the cognitive-control dimensions that can differentiate between adults and older adults with healthy cognition, people with subjective cognitive impairment (SCI), and people diagnosed with mild cognitive impairment (MCI), by examining the factorial structure of the R4Alz tool. Methods: The study comprised 404 participants: (a) healthy adults (n = 192), (b) healthy older adults (n = 29), (c) people with SCI (n = 74), and (d) people diagnosed with MCI (n = 109). The R4Alz battery was administered to all participants; it includes tests that assess short-term memory storage, information processing, information updating in working memory, selective, sustained, and divided attention, task/rule-switching, inhibitory control, and cognitive flexibility. Results: A two-factor structural model was confirmed for R4Alz, with the first factor representing “fluid intelligence (FI)” and the second factor reflecting “executive functions (EF)”. Both FI and EF discriminate among all groups. Conclusions: The R4Alz battery presents sound construct validity, evaluating abilities in FI and EF. Both abilities can differentiate very early cognitive impairment (SCI) from healthy cognitive aging and MCI.
R4Alz-Revised: A Tool Able to Strongly Discriminate ‘Subjective Cognitive Decline’ from Healthy Cognition and ‘Minor Neurocognitive Disorder’
Background: Diagnosing minor neurocognitive disorders in the clinical course of dementia, before clinical symptoms appear, is the holy grail of neuropsychological research. The R4Alz battery is a novel and valid tool designed to assess cognitive control in people with minor cognitive disorders. The aim of the current study is to extend the R4Alz battery (into R4Alz-R) with additional episodic memory tasks as well as additional cognitive-control tasks, towards improving its overall discriminant validity. Methods: The study comprised 80 people: (a) 20 healthy adults (HC), (b) 29 people with Subjective Cognitive Decline (SCD), and (c) 31 people with Mild Cognitive Impairment (MCI). The groups differed in age and educational level. Results: Updating, inhibition, attention-switching, and cognitive flexibility tasks discriminated SCD from HC (p ≤ 0.003). Updating, switching, cognitive flexibility, and episodic memory tasks discriminated SCD from MCI (p ≤ 0.001). All of the R4Alz-R's tasks discriminated HC from MCI (p ≤ 0.001). The R4Alz-R was free of age and educational-level effects. The battery discriminated SCD from HC and HC from MCI perfectly (100% sensitivity/95% specificity and 100% sensitivity/90% specificity, respectively), and discriminated SCD from MCI excellently (90.3% sensitivity/82.8% specificity). Conclusion: SCD seems to be a stage of neurodegeneration, since it can be objectively evaluated via the R4Alz-R battery, which appears to be a useful tool for early diagnosis.
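For reference, the sensitivity and specificity figures quoted above follow the standard confusion-matrix definitions. The sketch below uses counts chosen to reproduce the SCD-vs-MCI result, assuming MCI (n = 31) is the positive class and SCD (n = 29) the negative class; the counts themselves are illustrative, not reported in the abstract.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: 28 of 31 MCI cases detected, 24 of 29 SCD cases cleared.
sens, spec = sensitivity_specificity(tp=28, fn=3, tn=24, fp=5)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")  # 90.3%, 82.8%
```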