1,386 results for "DATABASE MANAGEMENT TOOLS"
Practical guidance for defining a smart grid modernization strategy
This report provides practical guidance on how utilities can define their own smart grid vision, identify priorities, and structure investment plans. While most of these strategic aspects apply to any area of the electricity grid, the document focuses on the distribution segment. The guidance covers the key building blocks needed to modernize the distribution grid and provides examples of grid modernization projects. Potential benefits that can be achieved (in monetary terms) for a given investment range are also discussed. The concept of the smart grid is relevant to any grid regardless of its stage of development; what varies are the magnitude and type of the incremental steps toward modernization required to achieve a specific smart grid vision. Importantly, a utility at a relatively low level of grid modernization may leapfrog one or more levels of modernization to achieve some of the benefits offered by the highest levels of grid modernization. Smart grids affect electric distribution systems significantly, sometimes more than any other part of the electric power grid. In developing countries, modernizing the distribution grid promises to benefit the operation of electric distribution utilities in many ways. These benefits include improved operational efficiency (reduced losses and lower energy consumption, among others), reduced peak demand, improved service reliability, and the ability to accommodate distributed generating resources without adversely affecting overall power quality. Benefits of distribution grid modernization also include improved asset utilization (allowing operators to 'squeeze' more capacity out of existing assets) and improved workforce productivity. These benefits can provide more than enough monetary gain for electric utility stakeholders in developing countries to offset the cost of grid modernization. Finally, the report describes some funding and regulatory issues that may need to be taken into account when developing smart grid plans.
An open-access database and analysis tool for perovskite solar cells based on the FAIR data principles
Large datasets are now ubiquitous as technology enables higher-throughput experiments, but rarely can a research field truly benefit from the research data generated due to inconsistent formatting, undocumented storage or improper dissemination. Here we extract all the meaningful device data from peer-reviewed papers on metal-halide perovskite solar cells published so far and make them available in a database. We collect data from over 42,400 photovoltaic devices with up to 100 parameters per device. We then develop open-source and accessible procedures to analyse the data, providing examples of insights that can be gleaned from the analysis of a large dataset. The database, graphics and analysis tools are made available to the community and will continue to evolve as an open-source initiative. This approach of extensively capturing the progress of an entire field, including sorting, interactive exploration and graphical representation of the data, will be applicable to many fields in materials science, engineering and biosciences. Making large datasets findable, accessible, interoperable and reusable could accelerate technology development. Now, Jacobsson et al. present an approach to build an open-access database and analysis tool for perovskite solar cells.
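To illustrate the kind of analysis such a device-level database enables, here is a minimal Python sketch using pandas. The file name and column names ("JV_default_PCE", "Publication_year") are assumptions for illustration, not the database's actual schema.

```python
# Hypothetical sketch of analysing a device-level photovoltaic database
# exported as CSV. File and column names are assumptions, not the
# database's actual schema.
import pandas as pd

df = pd.read_csv("perovskite_devices.csv")  # one row per reported device

# Keep devices with a reported power-conversion efficiency (PCE)
df = df.dropna(subset=["JV_default_PCE"])

# Median efficiency per publication year: a "progress of the field" view
progress = df.groupby("Publication_year")["JV_default_PCE"].median()
print(progress.tail())
```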
Business modeling and data mining
Business Modeling and Data Mining demonstrates how real-world business problems can be formulated so that data mining can answer them. The concepts and techniques presented in this book are the essential building blocks for understanding what models are and how they can be used in practice to reveal hidden assumptions and needs and to determine problems.
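As a minimal illustration of framing a business question as a data mining task (not an example from the book), the following Python sketch casts customer churn as supervised classification with scikit-learn; the data and feature names are invented.

```python
# Toy sketch: phrasing a business question ("which customers will churn?")
# as a supervised classification task. Data and features are invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = pd.DataFrame({
    "monthly_spend": [20, 75, 30, 90, 15, 60],
    "support_calls": [0, 4, 1, 5, 0, 3],
    "churned":       [0, 1, 0, 1, 0, 1],   # the business outcome to predict
})

X_train, X_test, y_train, y_test = train_test_split(
    data[["monthly_spend", "support_calls"]], data["churned"],
    test_size=0.33, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.predict(X_test))  # predicted churn labels for held-out customers
```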
A global analysis of management capacity and ecological outcomes in terrestrial protected areas
Protecting important sites is a key strategy for halting the loss of biodiversity. However, our understanding of the relationship between management inputs and biodiversity outcomes in protected areas (PAs) remains weak. Here, we examine biodiversity outcomes using species population trends in PAs derived from the Living Planet Database in relation to management data derived from the Management Effectiveness Tracking Tool (METT) database for 217 population time series from 73 PAs. We found a positive relationship between our METT-based scores for Capacity and Resources and changes in vertebrate abundance, consistent with the hypothesis that PAs require adequate resourcing to halt biodiversity loss. Additionally, PA age was negatively correlated with population trends in the mammal subset, and PA size was negatively correlated with population trends in the global subset. Our study highlights the paucity of appropriate data for rigorous testing of the role of management in maintaining species populations across multiple sites, and describes ways to improve our understanding of PA performance.
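A conceptual Python sketch of the analysis logic described above, with invented numbers: estimate a population trend as the slope of log abundance over time, then test its association with a management score. The study's actual models are more involved (e.g., accounting for PA age and size).

```python
# Conceptual sketch only; all data below are invented for illustration.
import numpy as np
from scipy import stats

# One population time series: log abundance regressed on year gives the
# mean annual rate of change (the "population trend")
years = np.array([2000, 2002, 2004, 2006, 2008])
log_n = np.log(np.array([120, 115, 130, 140, 150]))
trend = stats.linregress(years, log_n).slope

# Hypothetical METT "Capacity and Resources" scores and trends across PAs:
# test whether better-resourced PAs show more positive trends
mett   = np.array([0.3, 0.5, 0.6, 0.8, 0.9])
trends = np.array([-0.02, 0.00, 0.01, 0.02, 0.04])
print(stats.pearsonr(mett, trends))
```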
Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text
The proliferation of artificial intelligence (AI)-generated content, particularly from models like ChatGPT, presents potential challenges to academic integrity and raises concerns about plagiarism. This study investigates the capabilities of various AI content detection tools in discerning human- and AI-authored content. Fifteen paragraphs each from ChatGPT Models 3.5 and 4 on the topic of cooling towers in the engineering process and five human-written control responses were generated for evaluation. AI content detection tools developed by OpenAI, Writer, Copyleaks, GPTZero, and CrossPlag were used to evaluate these paragraphs. Findings reveal that the AI detection tools were more accurate in identifying content generated by GPT 3.5 than GPT 4. However, when applied to human-written control responses, the tools exhibited inconsistencies, producing false positives and uncertain classifications. This study underscores the need for further development and refinement of AI content detection tools as AI-generated content becomes more sophisticated and harder to distinguish from human-written text.
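The core of such an evaluation is comparing detector verdicts against known authorship. A minimal Python sketch with invented labels, computing overall accuracy and the false-positive rate (human text flagged as AI):

```python
# Invented labels for illustration; not the study's data.
truth   = ["ai", "ai", "ai", "human", "human"]   # actual authorship
verdict = ["ai", "ai", "human", "human", "ai"]   # detector output

accuracy = sum(t == v for t, v in zip(truth, verdict)) / len(truth)

# False positive: human-written text that the detector flags as AI
false_pos = sum(1 for t, v in zip(truth, verdict)
                if t == "human" and v == "ai")
fpr = false_pos / truth.count("human")

print(f"accuracy={accuracy:.2f}, false-positive rate={fpr:.2f}")
```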
When large language models meet personalization: perspectives of challenges and opportunities
The advent of large language models marks a revolutionary breakthrough in artificial intelligence. With the unprecedented scale of training and model parameters, the capability of large language models has been dramatically improved, leading to human-like performance in understanding, language synthesis, common-sense reasoning, etc. Such a major leap forward in general AI capacity will fundamentally change how personalization is conducted. For one thing, it will reform the way humans interact with personalization systems. Instead of being a passive medium of information filtering, like conventional recommender systems and search engines, large language models present the foundation for active user engagement. On top of such a new foundation, users' requests can be proactively explored, and users' required information can be delivered in a natural, interactable, and explainable way. For another, it will considerably expand the scope of personalization, growing it from the sole function of collecting personalized information to the compound function of providing personalized services. By leveraging large language models as a general-purpose interface, a personalization system may compile users' requests into plans, call the functions of external tools (e.g., search engines, calculators, and service APIs) to execute the plans, and integrate the tools' outputs to complete the end-to-end personalization tasks. Today, large language models are still being rapidly developed, whereas their application to personalization is largely unexplored. Therefore, we consider this the right time to review the challenges in personalization and the opportunities to address them with large language models. In particular, we dedicate this perspective paper to the discussion of the following aspects: the development and challenges of existing personalization systems, the newly emerged capabilities of large language models, and the potential ways of making use of large language models for personalization.
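The plan, execute, and integrate pattern the authors describe can be sketched in a few lines of Python. Everything here is a stub: plan() stands in for the LLM's planning step, the tool registry is hypothetical, and a real system would call an actual model and live APIs.

```python
# Illustrative sketch of the pattern described above: the model compiles a
# user request into a plan, external tools execute the steps, and the
# outputs are integrated into a personalised answer. All stubs.
def plan(request: str) -> list[str]:
    # Stand-in for the LLM turning a request into tool-call steps
    return ["search:vegetarian restaurants near me", "calc:2*15"]

TOOLS = {
    "search": lambda q: f"[top results for '{q}']",
    "calc":   lambda expr: str(eval(expr)),  # toy calculator tool only
}

def run(request: str) -> str:
    outputs = []
    for step in plan(request):
        tool, _, arg = step.partition(":")
        outputs.append(TOOLS[tool](arg))     # execute each planned step
    # Stand-in for the LLM integrating tool outputs into a final response
    return " | ".join(outputs)

print(run("Book dinner for two and split the bill"))
```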
Construction of a Multi-Omics database for Paeonia lactiflora: A resource for comprehensive data integration and analysis
Background: Paeonia lactiflora Pall. is a traditional medicinal plant widely used in East Asia, particularly for its roots, which are processed into various herbal remedies. With the advancement of omics technologies, significant genomic, transcriptomic, proteomic, and metabolomic data related to P. lactiflora have been generated. To make this wealth of information usable for research and applications, a multi-omics database specific to P. lactiflora was developed.
Results: This comprehensive multi-omics database includes genomic, transcriptomic, and proteomic datasets, as well as chemical compound profiles identified in various tissues and growth stages. The database also features data on key biosynthetic pathways, including those associated with monoterpenoid glycosides such as paeoniflorin, and provides tools for analyzing protein structures and interactions. Additionally, it summarizes P. lactiflora's major active compounds and highlights reported pharmacological effects. The database is organized into key functional modules: Home, Genome, Transcriptome, Proteome, Tools, Biosynthetic Pathways, Chemical Compounds, and Publications. Notably, the "Tools" module supports sequence alignment, pathway enrichment analysis (including Kyoto Encyclopedia of Genes and Genomes, KEGG), protein structure prediction, and primer design.
Conclusions: The multi-omics database (URL: http://210.22.121.250:8888/cosd/home/indexPage) of P. lactiflora integrates extensive molecular and chemical data, providing a robust platform for researchers. It serves as a valuable resource for advancing studies on the cultivation, breeding, and molecular pharmacognosy of P. lactiflora and supports the development of its medicinal applications.
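As an illustration of the kind of pathway-enrichment test such a "Tools" module typically performs (a KEGG-style over-representation test; not the database's own code), here is a hypergeometric test in Python with invented counts:

```python
# Over-representation test: does a gene list hit a pathway more often
# than expected by chance? All counts below are invented.
from scipy.stats import hypergeom

genome_size   = 30000   # background genes
pathway_genes = 150     # genes annotated to the pathway
query_genes   = 200     # genes in the user's list
overlap       = 12      # query genes that fall in the pathway

# P(overlap >= 12) under random sampling without replacement
p = hypergeom.sf(overlap - 1, genome_size, pathway_genes, query_genes)
print(f"enrichment p-value = {p:.3g}")
```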
A comprehensive survey of anomaly detection techniques for high dimensional big data
Anomaly detection in high dimensional data is becoming a fundamental research problem with various applications in the real world. However, many existing anomaly detection techniques fail to retain sufficient accuracy when faced with so-called "big data", characterised by high-volume and high-velocity data generated by a variety of sources. Having both problems together can be referred to as the "curse of big dimensionality," which affects existing techniques in terms of both performance and accuracy. To address this gap and to understand the core problem, it is necessary to identify the unique challenges that arise when anomaly detection must cope with high dimensionality and big data at the same time. Hence, this survey aims to document the state of anomaly detection in high dimensional big data by representing the unique challenges using a triangular model of vertices: the problem (big dimensionality), techniques/algorithms (anomaly detection), and tools (big data applications/frameworks). Works that fall directly into any of these vertices, or are closely related to them, are considered in the review. Furthermore, the limitations of traditional approaches and current strategies for high dimensional data are discussed, along with recent techniques and applications on big data required for the optimization of anomaly detection.
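One widely used technique for this setting is Isolation Forest, which isolates points with random splits rather than computing pairwise distances and therefore copes well with high dimensionality. A self-contained Python sketch on synthetic data (not a reproduction of any surveyed method):

```python
# Isolation Forest on synthetic 100-dimensional data with planted outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 100))   # 1000 points, 100 dimensions
X[:5] += 6                         # shift five rows far from the bulk

labels = IsolationForest(random_state=0).fit_predict(X)  # -1 = anomaly
print("flagged as anomalous:", np.where(labels == -1)[0][:10])
```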
Part of speech tagging: a systematic review of deep learning and machine learning approaches
Natural language processing (NLP) tools have sparked a great deal of interest due to rapid improvements in information and communications technologies, and many different NLP tools are being produced as a result. However, there are many challenges in developing efficient and effective NLP tools that accurately process natural languages. One such tool is part-of-speech (POS) tagging, which tags a particular sentence or the words in a paragraph by looking at the context of the sentence or words within the paragraph. Despite enormous efforts by researchers, POS tagging still faces challenges in improving accuracy while reducing false-positive rates and in tagging unknown words. Furthermore, the ambiguity that arises when tagging terms with different contextual meanings inside a sentence cannot be overlooked. Recently, deep learning (DL)- and machine learning (ML)-based POS taggers have been implemented as potential solutions to efficiently identify words in a given sentence across a paragraph. This article first clarifies the concept of part-of-speech (POS) tagging. It then provides a broad categorization based on the prominent ML and DL techniques employed in designing and implementing part-of-speech taggers. A comprehensive review of the latest POS tagging articles is provided, discussing the weaknesses and strengths of the proposed approaches. Then, recent trends and advancements in DL- and ML-based part-of-speech taggers are presented in terms of the approaches deployed and their performance evaluation metrics. Drawing on the limitations of the proposed approaches, we highlight various research gaps and present recommendations for future research in advancing DL- and ML-based POS tagging.
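For readers unfamiliar with the task, a minimal Python example using an off-the-shelf tagger (NLTK's averaged-perceptron model); note that the required resource name varies across NLTK versions, so both are fetched below:

```python
# POS tagging with NLTK's pretrained averaged-perceptron tagger.
import nltk

# The tagger resource was renamed in newer NLTK releases; fetch both
# names, as unknown package names are ignored rather than raising.
for pkg in ("averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    nltk.download(pkg, quiet=True)

tokens = "The bank will not bank on the river bank .".split()
print(nltk.pos_tag(tokens))
# The same surface form ("bank") can receive different tags depending on
# its position and context, which is the ambiguity the review discusses.
```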
EviAtlas: a tool for visualising evidence synthesis databases
Systematic mapping assesses the nature of an evidence base, answering how much evidence exists on a particular topic. Perhaps the most useful outputs of a systematic map are an interactive database of studies and their meta-data, along with visualisations of this database. Despite the rapid increase in systematic mapping as an evidence synthesis method, there is currently a lack of Open Source software for producing interactive visualisations of systematic map databases. In April 2018, as attendees at and coordinators of the first ever Evidence Synthesis Hackathon in Stockholm, we decided to address this issue by developing an R-based tool called EviAtlas, an Open Access (i.e. free to use) and Open Source (i.e. software code is freely accessible and reproducible) tool for producing interactive, attractive tables and figures that summarise the evidence base. Here, we present our tool which includes the ability to generate vital visualisations for systematic maps and reviews as follows: a complete data table; a spatially explicit geographical information system (Evidence Atlas); Heat Maps that cross-tabulate two or more variables and display the number of studies belonging to multiple categories; and standard descriptive plots showing the nature of the evidence base, for example the number of studies published per year or number of studies per country. We believe that EviAtlas will provide a stimulus for the development of other exciting tools to facilitate evidence synthesis.
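EviAtlas itself is an R/Shiny tool; purely to illustrate the cross-tabulated heat map output it produces, here is a conceptual Python sketch with an invented systematic-map table:

```python
# Cross-tabulate two systematic-map variables and render the counts as a
# heat map. The columns and values are invented for illustration.
import pandas as pd
import matplotlib.pyplot as plt

studies = pd.DataFrame({
    "intervention": ["fencing", "fencing", "patrols", "patrols", "patrols"],
    "outcome":      ["abundance", "richness", "abundance", "abundance",
                     "richness"],
})

counts = pd.crosstab(studies["intervention"], studies["outcome"])
plt.imshow(counts, cmap="Blues")
plt.xticks(range(len(counts.columns)), counts.columns)
plt.yticks(range(len(counts.index)), counts.index)
plt.colorbar(label="number of studies")
plt.show()
```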