16,418 results for "digital database"
A novel machine learning approach on texture analysis for automatic breast microcalcification diagnosis classification of mammogram images
Purpose Screening programs use mammography as a diagnostic tool for the early detection of breast cancer. Mammogram enhancement increases the local contrast of the mammogram so that lesions are more visible in the enhanced image. For accurate diagnosis in the early stage of breast cancer, the appearances of masses and microcalcifications on the mammographic image are two important indicators. The objective of this study was to evaluate the feasibility and accuracy of automatically segmenting microcalcifications in images of breast tissue. Methods The research used two techniques, image enhancement and highlighting of breast tissue microcalcifications, applied to regions of interest (ROIs) selected by a fuzzy system together with a Gabor filtering method. After the clusters of breast tissue microcalcifications are determined, they are classified using a decision tree classification algorithm. Then, for segmentation, samples suspected of microcalcification are highlighted and masked, and in the last stage, tissue characteristics are extracted. Subsequently, an artificial neural network (ANN) was used to classify the segmented ROI clusters as benign or malignant. The proposed system was trained on the Digital Database for Screening Mammography (DDSM) developed by the University of South Florida, USA; the simulations were performed in MATLAB, and the results were compared with previous work. Results The training performed in this work yields an accuracy of 93% and an improved sensitivity above 95%. Conclusion The results indicate that the proposed approach can be applied to support breast cancer diagnosis.
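The texture pipeline this abstract describes (Gabor filtering over candidate ROIs, then classification) can be illustrated with a minimal numpy sketch. This is not the authors' MATLAB implementation; the kernel size, sigma, wavelength, and orientations below are illustrative assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size, sigma, theta, lam):
    """Real-valued Gabor kernel: a Gaussian envelope times a cosine
    wave oriented at angle theta with wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(roi, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean absolute filter response at several orientations:
    a crude texture descriptor for a candidate microcalcification ROI."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(7, sigma=2.0, theta=theta, lam=4.0)
        windows = sliding_window_view(roi, k.shape)   # valid-mode correlation
        resp = (windows * k).sum(axis=(-2, -1))
        feats.append(np.abs(resp).mean())
    return np.array(feats)

# toy ROI: vertical stripes standing in for a textured mammogram patch
roi = np.zeros((16, 16))
roi[:, ::2] = 1.0
features = gabor_features(roi)
```

Feature vectors like this would then feed the decision tree and ANN stages the abstract mentions.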
Convolutional Neural Network for Remote-Sensing Scene Classification: Transfer Learning Analysis
Remote-sensing image scene classification can provide significant value, ranging from forest fire monitoring to land-use and land-cover classification. Beginning with the first aerial photographs of the early 20th century and continuing to the satellite imagery of today, the amount of remote-sensing data has increased geometrically, with ever higher resolution. The need to analyze these modern digital data motivated research to accelerate remote-sensing image classification. Fortunately, great advances have been made by the computer vision community in classifying natural images, that is, photographs taken with an ordinary camera. Natural image datasets can range up to millions of samples and are, therefore, amenable to deep-learning techniques. Many fields of science, remote sensing included, have been able to exploit the success of natural image classification by convolutional neural network models using a technique commonly called transfer learning. We provide a systematic review of transfer learning applied to scene classification using different datasets and different deep-learning models. We evaluate how the specialization of convolutional neural network models affects the transfer learning process by splitting the original models at different points. As expected, we find that the choice of hyperparameters used to train the model has a significant influence on its final performance. Curiously, we find that transfer learning from models trained on larger, more generic natural-image datasets outperformed transfer learning from models trained directly on smaller remotely sensed datasets. Nonetheless, the results show that transfer learning provides a powerful tool for remote-sensing scene classification.
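The splitting idea, freeze the pretrained body of a network and retrain only a new head on the target data, can be sketched in a few lines. This toy uses a fixed random projection in place of real pretrained convolutional layers; all names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrained_features(x, w_frozen):
    """Frozen 'body': a fixed nonlinear projection standing in for
    convolutional layers pretrained on a large natural-image dataset."""
    return np.tanh(x @ w_frozen)

def train_head(features, y, lr=0.5, steps=200):
    """Train only a new logistic-regression head on the frozen
    features; the body's weights are never updated."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
        g = p - y                        # gradient of the logistic loss
        w -= lr * features.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# toy target task: 200 samples, 8 input features, linear ground truth
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w_frozen = 0.5 * rng.normal(size=(8, 16))  # pretend pretrained weights
F = pretrained_features(X, w_frozen)
w, b = train_head(F, y)
acc = (((F @ w + b) > 0) == y).mean()      # training accuracy of the head
```

Splitting the original model at a later layer corresponds to freezing more of the body and leaving less to retrain.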
Species interactions: next‐level citizen science
We envisage a future research environment where digital data on species interactions are easily accessible and comprehensively cover all species, life stages and habitats. To achieve this goal, we need data from many sources, including the largely untapped potential of citizen science for mobilising and utilising existing information on species interactions. Traditionally, volunteers contributing information on the occurrence of species have focused on single‐species observations from within one target taxon. We make recommendations on how to improve the gathering of species interaction data through citizen science, which data should be collected, and how contributors can be motivated. These recommendations include providing feedback in the form of network visualisations, leveraging a wide variety of other data sources, and eliciting an emotional connection to the species in question. There are many uses for these data, but in the context of biological invasions, information on species interactions will increase understanding of the effects of invasive alien species on recipient communities and ecosystems. We believe that the inclusion of ecological networks as a concept within citizen science, not only in initiatives focussed on biological invasions but also across other ecological themes, will both enrich scientific knowledge on species interactions and deepen the experience and enjoyment of citizens themselves.
WRKY transcription factors: evolution, regulation, and functional diversity in plants
Recent advancements in sequencing technologies and informatic tools have promoted a paradigm shift in deciphering hidden biological mysteries, transforming biological questions into digital data that can be expressed in both qualitative and quantitative forms. The transcriptomic approach, in particular, has added new dimensions to the versatile essence of plant genomics through the large and deep transcript sets generated in the process. This has enabled the mining of superfamilies from sequenced plants, both model and non-model, and an understanding of their ancestry, diversity, and evolution. The elucidation of the crystal structure of WRKY proteins and recent advances in computational prediction through homology modeling and molecular dynamics simulation have provided insight into DNA–protein complex formation, stability, and interaction, giving a new dimension to the understanding of WRKY regulation. The present review summarizes the functional aspects of the high volume of WRKY transcription factor sequence data studied to date across different species. The review focuses on the dynamics of structural classification and lineage in light of recent information. Additionally, a comparative analysis approach was incorporated to understand the functions of the identified WRKY transcription factors under abiotic stresses (heat, cold, salinity, senescence, dark, wounding, UV, and carbon starvation), as revealed through various studies on different plant species. The review will be instrumental in understanding the events of evolution and the importance of WRKY TFs under the threat of climate change, considering new scientific evidence to propose a fresh perspective.
Buying drugs on a Darknet market: A better deal? Studying the online illicit drug market through the analysis of digital, physical and chemical data
•The Evolution cryptomarket is described through the analysis of source code files.•Illicit drug orders on Evolution and chemical analyses are performed.•The study of packaging reveals concealment techniques used to avoid detection.•Product purity does not correspond with the information provided on listings.•Chemical profiling reveals a relationship between purchases and police seizures. Darknet markets, also known as cryptomarkets, are websites located on the Darknet and designed to allow the trafficking of illicit products, mainly drugs. This study presents the added value of combining digital, chemical and physical information to reconstruct sellers' activities. In particular, this research focuses on Evolution, one of the most popular cryptomarkets, active from January 2014 to March 2015. Evolution source code files were analysed using Python scripts based on regular expressions to extract information about listings (i.e., sales proposals) and sellers. The results revealed more than 48,000 listings and around 2700 vendors claiming to send illicit drug products from 70 countries. The most frequent categories of illicit drugs offered by vendors were cannabis-related products (around 25%), followed by ecstasy (MDA, MDMA) and stimulants (cocaine, speed). The cryptomarket was then studied from a specifically Swiss point of view. Illicit drugs were purchased from three sellers located in Switzerland. The purchases were carried out to confront digital information (e.g., the type of drug, the purity, the shipping country and the concealment methods mentioned on listings) with the physical analysis of the shipment packaging and the chemical analysis of the received product (purity, cutting agents, chemical profile based on minor and major alkaloids, chemical class). The results show that digital information, such as concealment methods and shipping country, appears accurate.
However, the purity of the illicit drugs was found to differ from the information indicated on their respective listings. Moreover, chemical profiling highlighted links between cocaine sold online and specimens seized in Western Switzerland. This study highlights that (1) the forensic analysis of the received products allows the accuracy of digital data collected on the website to be evaluated, and (2) the information from digital and physical/chemical traces is complementary for evaluating the practices of online illicit drug selling on cryptomarkets.
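The digital side of the method, Python regular expressions run over scraped marketplace source files, can be sketched as below. The markup and field names are hypothetical stand-ins, not Evolution's actual page format.

```python
import re
from collections import Counter

# Hypothetical listing-page fragments; class names are illustrative.
pages = [
    '<div class="listing"><span class="vendor">acmeshop</span>'
    '<span class="category">Cannabis</span><span class="ships">CH</span></div>',
    '<div class="listing"><span class="vendor">speedyG</span>'
    '<span class="category">Stimulants</span><span class="ships">NL</span></div>',
    '<div class="listing"><span class="vendor">acmeshop</span>'
    '<span class="category">Ecstasy</span><span class="ships">CH</span></div>',
]

# Named groups pull out one record per listing occurrence.
LISTING = re.compile(
    r'class="vendor">(?P<vendor>[^<]+)</span>'
    r'<span class="category">(?P<category>[^<]+)</span>'
    r'<span class="ships">(?P<ships>[^<]+)</span>'
)

records = [m.groupdict() for page in pages for m in LISTING.finditer(page)]
by_category = Counter(r["category"] for r in records)  # drug-type counts
vendors = {r["vendor"] for r in records}               # distinct sellers
```

Aggregations over such records are what yield the listing and vendor counts reported in the abstract.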
Studying illicit drug trafficking on Darknet markets: Structure and organisation from a Canadian perspective
•Study of illicit drug trafficking using data collected on 8 cryptomarkets.•Knowledge on the structure and organisation of distribution networks.•Use of an approach combining the analysis of vendor names and PGP keys.•Results reveal the presence of key actors in Canadian illicit drug trafficking. Cryptomarkets are online marketplaces that are part of the Dark Web and mainly devoted to the sale of illicit drugs. They combine tools to ensure the anonymity of participants with the delivery of products by mail, enabling the development of illicit drug trafficking. Using data collected on eight cryptomarkets, this study provides an overview of the Canadian illicit drug market. It seeks to identify the most prevalent illicit drugs that vendors offer for sale and their preferred destination countries. Moreover, the research gives an insight into the structure and organisation of the distribution networks existing online. In particular, we provide information about how vendors diversify and replicate across marketplaces: the number of listings each vendor manages, the number of cryptomarkets they are active on and the products they offer. This research demonstrates the importance of online marketplaces in the context of illicit drug trafficking and shows how the analysis of data available online may elicit knowledge on criminal activities. Such knowledge is essential for designing effective policy for monitoring or repressive purposes against anonymous marketplaces. Nevertheless, trafficking on Darknet markets is difficult to analyse based only on digital data. A more holistic approach to investigating this crime problem should be developed, relying on the combined use and interpretation of digital and physical data within a single collaborative intelligence model.
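The vendor-linking approach, combining vendor names with PGP keys, rests on the fact that one key fingerprint can tie different aliases on different markets to a single seller. A minimal sketch with invented observations:

```python
from collections import defaultdict

# Hypothetical scrape output: (marketplace, vendor alias, PGP fingerprint).
observations = [
    ("market_a", "mapleleaf", "FPR1"),
    ("market_b", "mapleleaf", "FPR1"),   # same alias on a second market
    ("market_c", "northstar", "FPR1"),   # different alias, same key
    ("market_a", "lonewolf",  "FPR2"),
]

# Group by fingerprint: each group is one presumed real-world seller.
profiles = defaultdict(lambda: {"aliases": set(), "markets": set()})
for market, alias, fpr in observations:
    profiles[fpr]["aliases"].add(alias)
    profiles[fpr]["markets"].add(market)
```

Here "FPR1" collapses three market presences into one seller profile, the kind of replication across marketplaces the study measures.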
Digital transformation risk management in forensic science laboratories
•Examples of digital transformation difficulties encountered in laboratory environments.•Robust risk mitigation strategies for managing digital transformations in forensic laboratories.•Forensic digital preparedness applied to forensic laboratories.•Role of digital forensic expertise in risk management of digital transformations in laboratories.•Recommended enhancements to international quality standards such as ISO/IEC 17025. Technological advances are changing how forensic laboratories operate in all forensic disciplines, not only digital. Computers support workflow management, enable evidence analysis (physical and digital), and new technology enables previously unavailable forensic capabilities. Used properly, the integration of digital systems supports greater efficiency and reproducibility, and drives the digital transformation of forensic laboratories. However, without the necessary preparations, these digital transformations can undermine the core principles and processes of forensic laboratories. Pertinent examples of problems involving technology that have occurred in laboratories are provided, along with opportunities and risk mitigation strategies, based on the authors' experiences. Forensic preparedness concentrating on digital data reduces the cost and operational disruption of responding to various kinds of problems, including misplaced exhibits, allegations of employee misconduct, disclosure requirements, and information security breaches. This work presents recommendations to help forensic laboratories prepare for and manage these risks, use technology effectively, and ultimately strengthen forensic science. The importance of involving digital forensic expertise in the risk management of digital transformations in laboratories is emphasized. Forensic laboratories that do not adopt forensic digital preparedness will produce results based on digital data and processes that cannot be verified independently, leaving them vulnerable to challenge.
The recommendations in this work could enhance international standards such as ISO/IEC 17025 used to assess and accredit laboratories.
How to make DNA data storage more applicable
•The high-capacity and long-lasting storage features of DNA have enabled researchers to demonstrate its data storage capabilities.•Current challenges becoming apparent are high storage costs and slow retrieval of the data.•Enzymatic read-in and read-out hold promise for lower prices for DNA synthesis and data retrieval.•New fast read-out options include direct optic readout and sequence-mediated signals.•Graphene for fast sequencing along with DNA wires and nanocellulose as chassis boost application. The storage of digital data is becoming a worldwide problem. DNA has been recognized as a biological solution due to its ability to store genetic information without alteration over long periods. The first demonstrations of high-capacity, long-lasting DNA digital data storage have been shown. However, high storage costs and slow retrieval of the data must be overcome to make DNA data storage more applicable and marketable. Herein, we discuss the issues and recent advances in DNA data storage methods and highlight pathways to make this technology more applicable to real-world digital data storage. We envision that a combination of molecular biology, nanotechnology, novel polymers, electronics, and automation with systematic development will make DNA data storage suitable for everyday use.
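The read-in step of DNA data storage amounts to mapping bits onto the four nucleotides. A minimal two-bits-per-base sketch (real encodings add error correction and avoid homopolymer runs, which this toy ignores):

```python
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Digital data -> DNA sequence, two bits per nucleotide."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    """DNA sequence -> digital data (the read-out direction)."""
    bits = "".join(BASE_TO_BITS[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

sequence = encode(b"DNA")   # 3 bytes -> 12 nucleotides
restored = decode(sequence)
```

The slow, expensive steps the abstract discusses are the physical synthesis and sequencing of such a string, not this trivial bit mapping.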
Unintended Side Effects of the Digital Transition: European Scientists’ Messages from a Proposition-Based Expert Round Table
We present the main messages of a European Expert Round Table (ERT) on the unintended side effects (unseens) of the digital transition. Seventeen experts provided 42 propositions from ten different perspectives as input for the ERT. A full-day ERT deliberated commonalities and relationships among these unseens and provided suggestions on (i) what the major unseens are; (ii) how rebound effects of digital transitioning may become the subject of overarching research; and (iii) which unseens should become subjects of transdisciplinary theory and practice processes for developing socially robust orientations. With respect to the latter, the experts suggested that the "ownership, economic value, use and access of data" and, related to this, algorithmic decision-making call for transdisciplinary processes that may provide guidelines for key stakeholder groups on how the responsible use of digital data can be developed. A cluster-based content analysis of the propositions, the discussion and inputs of the ERT, and a theoretical analysis of major changes to levels of human systems and the human–environment relationship resulted in the following bigger picture: the digital transition calls for redefining economy, labor, democracy, and humanity. Artificial Intelligence (AI)-based machines may take over major domains of human labor, reorganize supply chains, induce platform economics, and reshape the participation of economic actors in the value chain. (Digital) knowledge and data supplement capital, labor, and natural resources as major economic variables. Digital data and technologies lead to a post-fuel-industry (post-)capitalism. Traditional democratic processes can be (intentionally or unintentionally) altered by digital technologies. The unseens in this field call for special attention, research and management.
Related to the conditions of ontogenetic and phylogenetic development (humanity), the ubiquitous, global, increasingly AI-shaped interlinkage of almost every personal, social, and economic human activity, together with the exposure to indirect, digital, artificial, fragmented, electronically mediated data, affects behavioral, cognitive, and psycho-neuro-endocrinological processes at the level of the individual. It thereby also affects the social relations of groups and families, culture, and ultimately the essential quality and character of the human being (i.e., humanity). The findings suggest the need for a new field of research focusing on sustainable digital societies and environments, in which the identification, analysis, and management of vulnerabilities and unseens emerging in the sociotechnical digital transition play an important role.
Aspects of public attention on the most mentioned nonindigenous species, as determined by a comprehensive assessment of Japanese social media
Invasion culturomics is an emerging field of study that utilizes digital data existing on the Internet to reveal the human dimensions of nonindigenous species (NIS). Although hypothetical approaches have been used to examine explanatory variables that predict the amount of public attention by using proxies, it has been difficult to observe direct associations between these variables. Here, using the case study of NIS in Japan, we aimed to deepen our understanding of the relationship between people and NIS by analyzing the content of texts about NIS on social media, and by clarifying the contexts and aspects to which public attention is directed. Specifically, we quantified tweets about NIS to identify the most tweeted NIS, those that attract substantial public attention on X/Twitter. We also identified the topics in which NIS names occurred by applying topic modeling to the tweets, and we investigated the topic distribution over the most tweeted NIS. A total of 141 species were selected as the most tweeted NIS for further analysis, and 25 topics were identified from all the tweets used in the analysis. The topic distribution over the most tweeted NIS showed three patterns across taxonomic groups: (1) biased among topics but consistent within taxonomic groups; (2) relatively even among topics and consistent within taxonomic groups; and (3) not consistent within taxonomic groups, with biases differing among species. These findings answer the question, "In what context, or in terms of what aspects, do NIS attract public attention?", and provide insights into the formulation of better strategies for NIS management, particularly from the human and social perspectives.
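The core computation, a topic distribution over each frequently mentioned NIS, can be illustrated with a toy keyword-bucket model standing in for the fitted topic model; the species, tweets, and topics below are invented examples.

```python
from collections import Counter, defaultdict

# Illustrative keyword buckets standing in for learned topics.
TOPIC_KEYWORDS = {
    "sighting":  {"saw", "spotted", "found"},
    "control":   {"eradication", "trap", "removal"},
    "pet_trade": {"pet", "aquarium", "escaped"},
}

# Hypothetical (species, tweet text) pairs.
tweets = [
    ("red-eared slider", "saw a red-eared slider in the park pond"),
    ("red-eared slider", "escaped pet red-eared slider again"),
    ("largemouth bass",  "eradication trap set for largemouth bass"),
]

# Count, for each species, how many tweets touch each topic.
topic_dist = defaultdict(Counter)
for species, text in tweets:
    words = set(text.split())
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            topic_dist[species][topic] += 1
```

A distribution concentrated in one topic corresponds to the "biased among topics" pattern the study reports; an even spread corresponds to the second pattern.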