Search Results

676 results for "modeling metadata"
On data lake architectures and metadata management
Over the past two decades, we have witnessed an exponential increase in data production worldwide. So-called big data generally come from transactional systems, and even more so from the Internet of Things and social media. They are mainly characterized by issues of volume, velocity, variety, and veracity, which strongly challenge traditional data management and analysis systems. The concept of the data lake was introduced to address these issues. A data lake is a large repository that stores and manages all of a company's data, raw and in any format. However, the data lake concept remains ambiguous for many researchers and practitioners, who often confuse it with the Hadoop technology. Thus, in this paper we provide a comprehensive state of the art of the different approaches to data lake design. We particularly focus on data lake architectures and metadata management, which are key issues in successful data lakes. We also discuss the pros and cons of data lakes and their design alternatives.
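Since the abstract singles out metadata management as the make-or-break concern, here is a minimal sketch of what a catalog entry for one raw dataset might record; all field names are illustrative assumptions, not drawn from the surveyed systems.

```python
# Illustrative sketch of a data lake catalog entry; field names are
# assumptions for this example, not taken from the paper.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CatalogEntry:
    """Metadata describing one raw dataset stored in the lake."""
    dataset_id: str
    source: str                     # originating system, e.g. an IoT feed
    fmt: str                        # storage format: "csv", "parquet", ...
    ingested_at: str                # ISO 8601 ingestion timestamp
    schema: Optional[dict] = None   # inferred or declared column types
    lineage: list = field(default_factory=list)  # upstream dataset_ids
    tags: list = field(default_factory=list)     # business vocabulary

catalog: dict[str, CatalogEntry] = {}

def register(entry: CatalogEntry) -> None:
    """Add a dataset to the catalog so it stays findable and traceable."""
    catalog[entry.dataset_id] = entry
```

Entries of this kind are what keep a lake queryable; without them it degrades into the proverbial "data swamp".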
HBIM for Conservation: A New Proposal for Information Modeling
Thanks to its capability of archiving and organizing all the information about a building, HBIM (Historical Building Information Modeling) is considered a promising resource for the planned conservation of historical assets. However, its adoption remains limited among the parties in charge of conservation, mainly because of its rather complex 3D modeling requirements and the lack of shared regulatory references and guidelines for semantic data. In this study, we developed an HBIM methodology to support the documentation, management, and planned conservation of historic buildings, with particular focus on non-geometric information: organized and coordinated storage and management of historical data, easy analysis and querying, time management, flexibility, user-friendliness, and information sharing. The system is based on a standalone, purpose-designed database linked to the 3D model of the asset, built with BIM software, and it is highly adaptable to different assets. The database is accessible both through a desktop application we developed, which acts as a plug-in for the BIM software, and through a web interface, implemented to ensure data sharing and easy usability by skilled and unskilled users alike. The paper describes the implemented system in detail, covering the semantic decomposition of the building, the database design, and the system architecture and capabilities. Two case studies, the Cathedral of Parma and the Ducal Palace of Mantua (Italy), are then presented to show the results of the system's application.
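As a rough illustration of the database-to-model linkage described above, the sketch below keeps conservation records in an external store keyed by the GUIDs of model elements. The schema is a guess made for illustration, not the paper's actual design.

```python
# Hypothetical schema: conservation records live outside the BIM software
# and are joined to model elements by GUID. Names are assumptions.
import sqlite3

conn = sqlite3.connect("hbim_records.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS elements (
    guid TEXT PRIMARY KEY,       -- GUID of the element in the 3D model
    name TEXT                    -- e.g. 'north transept vault'
);
CREATE TABLE IF NOT EXISTS records (
    id INTEGER PRIMARY KEY,
    element_guid TEXT REFERENCES elements(guid),
    recorded_on TEXT,            -- ISO date of inspection/intervention
    kind TEXT,                   -- 'survey', 'restoration', 'damage', ...
    notes TEXT
);
""")

def history(guid: str) -> list[tuple]:
    """All recorded events for one model element, oldest first."""
    cur = conn.execute(
        "SELECT recorded_on, kind, notes FROM records "
        "WHERE element_guid = ? ORDER BY recorded_on", (guid,))
    return cur.fetchall()
```

A BIM plug-in or web client can call a query like `history()` for whichever element the user selects in the 3D view.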
The Literatura de Quiosco Collection in Mnemosine, Digital Library of the Other Silver Age (1868-1936): Toward a Redefinition of the Literary Canon through Metadata
This article has a twofold purpose: first, it presents the genesis and development of the Mnemosine Digital Library of the Silver Age (1868-1936); second, it justifies the need to create the Literatura de Quiosco (newsstand literature) collection within that library and explains which metadata are necessary for the specialized cataloging of its works, so as to satisfy the research needs of any teacher or user. This experimental research supports the conclusion that building a reconfigurable metadata model is fundamental to constructing a literary canon that is open, dynamic, and adaptable to the specific needs of readers.
Personalisation for All: Making Adaptive Course Composition Easy
The goal of personalised eLearning is to support e-learning content, activities, and collaboration adapted to the specific needs, and influenced by the specific preferences, of the learner, built on sound pedagogic strategies. One of the major challenges to the mainstream adoption of personalised eLearning is the complexity and time involved in composing the adaptive learning experience. The key goal of personalised eLearning development tools is to support the teacher in composing adaptive and non-adaptive eLearning experiences. One of the arguments of this paper is that these learning experiences should be activity-oriented and pedagogically driven. We present a detailed discussion of the challenges of composing adaptive courses and, in particular, of the difficulties and possible techniques in composing appropriate models and information to support them. The paper describes an adaptive course construction methodology which extends traditional eLearning syllabus development with design activities that support adaptivity definition, subject-matter concept modelling, adaptivity technique selection, and alternative instructional design template customisation. The paper then details the Adaptive Course Construction Toolkit (ACCT), which supports this methodology, and illustrates the toolkit's usage in the development of an adaptive course. Finally, the paper presents an initial evaluation of the toolkit and its associated methodology.
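To make the combination of concept modelling and adaptivity technique selection concrete, here is a toy sketch under invented names (none come from the ACCT itself): a prerequisite graph over concepts and a selector that matches activity variants to a learner's presentation preference.

```python
# Toy concept model with prerequisites; names are hypothetical.
concepts = {
    "variables": [],
    "loops": ["variables"],
    "recursion": ["loops"],
}

activities = {  # per concept: one variant per presentation preference
    "variables": {"visual": "var_animation", "textual": "var_reading"},
    "loops":     {"visual": "loop_animation", "textual": "loop_reading"},
    "recursion": {"visual": "rec_animation", "textual": "rec_reading"},
}

def next_activities(completed: set[str], preference: str) -> list[str]:
    """Activities for concepts whose prerequisites are all completed."""
    ready = [c for c, reqs in concepts.items()
             if c not in completed and all(r in completed for r in reqs)]
    return [activities[c][preference] for c in ready]

print(next_activities({"variables"}, "visual"))  # -> ['loop_animation']
```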
SBML Level 3: an extensible format for the exchange and reuse of biological models
Systems biology has experienced dramatic growth in the number, size, and complexity of computational models. To reproduce simulation results and reuse models, researchers must exchange unambiguous model descriptions. We review the latest edition of the Systems Biology Markup Language (SBML), a format designed for this purpose. A community of modelers and software authors developed SBML Level 3 over the past decade. Its modular form consists of a core suited to representing reaction-based models and packages that extend the core with features suited to other model types, including constraint-based models, reaction-diffusion models, logical network models, and rule-based models. The format leverages two decades of SBML development and a rich software ecosystem that has transformed how systems biologists build and interact with models. More recently, the rise of multi-scale models of whole cells and organs, and new data sources such as single-cell measurements and live imaging, has precipitated new ways of integrating data with models. We provide our perspectives on the challenges presented by these developments and how SBML Level 3 provides the foundation needed to support this evolution.
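For a flavor of the format, here is a minimal sketch that builds and serializes an SBML Level 3 model, assuming the python-libsbml bindings are installed; the identifiers are arbitrary.

```python
# Minimal SBML Level 3 document, assuming python-libsbml
# (pip install python-libsbml). Identifiers are arbitrary examples.
import libsbml

doc = libsbml.SBMLDocument(3, 2)        # Level 3, Version 2 core
model = doc.createModel()
model.setId("minimal")

c = model.createCompartment()           # L3 requires an explicit 'constant'
c.setId("cell")
c.setConstant(True)

s = model.createSpecies()               # one species living in 'cell'
s.setId("glucose")
s.setCompartment("cell")
s.setHasOnlySubstanceUnits(False)
s.setBoundaryCondition(False)
s.setConstant(False)

print(libsbml.writeSBMLToString(doc))   # serialized XML, ready to exchange
```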
A Restoration-Oriented HBIM System for Cultural Heritage Documentation: The Case Study of Parma Cathedral
The need to safeguard and preserve Cultural Heritage (CH) is growing, and especially in Italy, where the number of historical buildings is considerable, efficient and standardized processes for CH management and conservation become strategic. At present, there are no tools capable of fulfilling all the specific functions required by Cultural Heritage documentation and, owing to the complexity of historical assets, no solution is as flexible and customizable as CH-specific needs require. Nevertheless, the BIM methodology can represent the most effective solution, on condition that proper methodologies, tools, and functions are made available. The paper describes ongoing research on the implementation of a Historical BIM system for Parma cathedral, aimed at its maintenance, conservation, and restoration. Its main goal was to give a concrete answer to the lack of specific tools for Cultural Heritage documentation: organized and coordinated storage and management of historical data, easy analysis and querying, time management, 3D modelling of irregular shapes, flexibility, user-friendliness, etc. The paper describes the project and the implemented methodology, focusing mainly on the survey and modelling phases. In describing the methodology, critical issues in the creation of an HBIM are highlighted, outlining a workflow applicable in other similar contexts.
Instagram photos reveal predictive markers of depression
Using Instagram data from 166 individuals, we applied machine learning tools to successfully identify markers of depression. Statistical features were computationally extracted from 43,950 participant Instagram photos, using color analysis, metadata components, and algorithmic face detection. Resulting models outperformed general practitioners' average unassisted diagnostic success rate for depression. These results held even when the analysis was restricted to posts made before depressed individuals were first diagnosed. Human ratings of photo attributes (happy, sad, etc.) were weaker predictors of depression, and were uncorrelated with computationally generated features. These results suggest new avenues for early screening and detection of mental illness.
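As a hedged illustration of the color-analysis step, the snippet below computes per-photo averages of hue, saturation, and brightness with Pillow and NumPy; the study's exact feature set and preprocessing may well differ.

```python
# Illustrative color features only; the paper's actual pipeline and
# feature definitions may differ from this sketch.
import numpy as np
from PIL import Image

def hsv_features(path: str) -> dict:
    """Mean hue, saturation, and brightness of one photo."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)
    return {
        "hue": hsv[..., 0].mean(),
        "saturation": hsv[..., 1].mean(),
        "brightness": hsv[..., 2].mean(),
    }

# Per-photo dictionaries like these can be stacked into a feature matrix
# and fed to an off-the-shelf classifier.
```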
Batch effects removal for microbiome data via conditional quantile regression
Batch effects in microbiome data arise from differential processing of specimens and can lead to spurious findings and obscure true signals. Strategies designed for genomic data to mitigate batch effects usually fail to address the zero-inflated and over-dispersed nature of microbiome data. Most strategies tailored for microbiome data are restricted to association testing or specialized study designs and do not support other analytic goals or general designs. Here, we develop the Conditional Quantile Regression (ConQuR) approach to remove microbiome batch effects using a two-part quantile regression model. ConQuR is a comprehensive method that accommodates the complex distributions of microbial read counts through non-parametric modeling, and it generates batch-removed, zero-inflated read counts that can be used in, and benefit, the usual subsequent analyses. We apply ConQuR to simulated and real microbiome datasets and demonstrate its advantages in removing batch effects while preserving the signals of interest.
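The two-part structure can be sketched for a single taxon as below, assuming statsmodels and synthetic data. The real ConQuR fits many quantiles per taxon and reassembles corrected counts, so this shows only the core idea, not the published algorithm.

```python
# Simplified two-part sketch for ONE taxon: logistic regression for
# zero vs. non-zero counts, quantile regression for the positive part,
# with the batch indicator as a covariate. Synthetic data throughout.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
batch = rng.integers(0, 2, n)                      # two processing batches
covar = rng.normal(size=n)                         # biological covariate
counts = rng.poisson(np.exp(1 + 0.5 * covar + 0.8 * batch)).astype(float)
counts[rng.random(n) < 0.3] = 0                    # zero inflation

X = sm.add_constant(np.column_stack([covar, batch]))

# Part 1: does the taxon appear at all?
presence = sm.Logit((counts > 0).astype(int), X).fit(disp=0)

# Part 2: median regression on log positive counts.
pos = counts > 0
median_fit = sm.QuantReg(np.log(counts[pos]), X[pos]).fit(q=0.5)

# Crude correction at the median only: subtract the fitted batch term
# from samples in the non-reference batch.
shift = median_fit.params[2]                       # batch coefficient
fix = pos & (batch == 1)
counts[fix] = np.exp(np.log(counts[fix]) - shift)
```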
Cyber Security Risk Modeling in Distributed Information Systems
This paper deals with problems of the development and security of distributed information systems. It explores the challenges of risk modeling in such systems and suggests a risk-modeling approach responsive to the requirements of complex, distributed, and large-scale systems. The article aggregates information on various risk assessment methodologies, such as quantitative, qualitative, and hybrid methods; compares their advantages and disadvantages; and analyzes their applicability to distributed information systems. It also presents research on a comprehensive, dynamic, multilevel approach to cyber risk assessment and modeling in distributed information systems, based on security metrics and techniques for their calculation. The approach provides sufficient accuracy and reliability of risk assessment and can handle intelligent classification and risk assessment modeling for large arrays of distributed data. The paper closes with the main issues and recommendations for applying risk assessment techniques based on the suggested approach.
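As a generic illustration of the quantitative style of assessment mentioned above (not the paper's own model), the sketch below scores each node as likelihood times impact and aggregates across the system; the scales and values are invented for the example.

```python
# Generic quantitative risk sketch: per-asset risk = likelihood x impact.
# Assets, scales, and weights are invented; they are not the paper's data.
assets = [
    {"name": "auth-service", "likelihood": 0.30, "impact": 9},
    {"name": "log-node",     "likelihood": 0.60, "impact": 3},
    {"name": "db-replica",   "likelihood": 0.15, "impact": 8},
]

for a in assets:
    a["risk"] = a["likelihood"] * a["impact"]   # expected-loss proxy

total = sum(a["risk"] for a in assets)
worst = max(assets, key=lambda a: a["risk"])
print(f"aggregate risk {total:.2f}; highest: {worst['name']}")
```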
Transformers for Tabular Data Representation: A Survey of Models and Applications
In the last few years, the natural language processing community has witnessed advances in neural representations of free text with transformer-based language models (LMs). Given the importance of the knowledge available in tabular data, recent research efforts extend LMs by developing neural representations for structured data. In this article, we present a survey that analyzes these efforts. We first abstract the different systems according to a traditional machine learning pipeline in terms of training data, input representation, model training, and supported downstream tasks. For each aspect, we characterize and compare the proposed solutions. Finally, we discuss future work directions.
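One recurring ingredient in the surveyed input representations is the serialization of a table into a token sequence. The separator scheme below is a made-up stand-in, since the surveyed systems each use their own conventions.

```python
# Toy table linearization; separators and example values are invented.
header = ["city", "population"]
rows = [["Paris", "2.1M"], ["Rome", "2.8M"]]

def linearize(header: list[str], rows: list[list[str]]) -> str:
    """Flatten a table into one string a language model can consume."""
    cells = []
    for row in rows:
        cells.append(" ; ".join(f"{h} : {v}" for h, v in zip(header, row)))
    return " [ROW] ".join(cells)

print(linearize(header, rows))
# city : Paris ; population : 2.1M [ROW] city : Rome ; population : 2.8M
```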