328 result(s) for "DAT DATABASE"
Geospatial and Temporal Patterns of Natural and Man-Made (Technological) Disasters (1900–2024): Insights from Different Socio-Economic and Demographic Perspectives
This pioneering study explores the geospatial and temporal patterns of natural and human-induced disasters from 1900 to 2024, providing essential insights into their global distribution and impacts. Significant trends and disparities in disaster occurrences and their widespread consequences are revealed through the use of the comprehensive international EM-DAT database. The results showed a dramatic escalation in both natural and man-made (technological) disasters over the decades, with notable surges in the 1991–2000 and 2001–2010 periods. A total of 25,836 disasters were recorded worldwide, of which 69.41% were natural disasters (16,567) and 30.59% were man-made (technological) disasters (9,269). The most significant increase in natural disasters occurred during 1961–1970, while man-made (technological) disasters surged substantially during 1981–1990. Seasonal trends reveal that floods peak in January and July, while storms are most frequent in June and October. Droughts and floods are the most devastating in terms of human lives, while storms and earthquakes cause the highest economic losses. The most substantial economic losses were reported during the 2001–2010 period, driven by catastrophic natural disasters in Asia and North America. Our research also highlighted Asia as the most disaster-prone continent, accounting for 41.75% of global events, with 61.89% of these events being natural disasters. Oceania, despite experiencing fewer total disasters, shows a remarkable 91.51% of these as natural disasters. Africa is notable for its high incidence of man-made (technological) disasters, which constitute 43.79% of the continent’s disaster events. Europe, representing 11.96% of total disasters, exhibits a balanced distribution but tends towards natural disasters at 64.54%. Examining specific countries, China, India, and the United States emerged as the countries most frequently affected by both types of disasters.
The impact of these disasters has been immense, with economic losses reaching their highest during the decade of 2010–2020, largely due to natural disasters. The human toll has been equally significant, with Asia recording the most fatalities and Africa the most injuries. Pearson’s correlation analysis identified statistically significant links between socioeconomic factors and the effects of disasters. It shows that nations with higher GDP per capita and better governance quality tend to experience fewer disasters and less severe negative consequences. These insights highlight the urgent need for tailored disaster risk management strategies that address the distinct challenges and impacts in various regions. By understanding historical disaster patterns, policymakers and stakeholders can better anticipate and manage future risks, ultimately safeguarding lives and economies.
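The Pearson correlation step described above can be sketched in a few lines; the country values below are made up for illustration and are not EM-DAT figures:

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson product-moment correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (made-up) values: GDP per capita vs. disaster fatalities
gdp = [1_000, 5_000, 12_000, 30_000, 55_000]
deaths = [900, 650, 400, 120, 60]
r = pearson(gdp, deaths)
print(round(r, 3))  # strongly negative: richer countries, fewer deaths
```

On Python 3.10+, `statistics.correlation` computes the same coefficient; the explicit formula is shown here for clarity.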
What makes an epidemic a disaster: the future of epidemics within the EM-DAT International Disaster Database
Background: Reporting on and monitoring epidemics is a public health priority. Several initiatives and platforms provide epidemiological data, such as the EM-DAT International Disaster Database, which has recorded 1525 epidemics and their impacts since 1900, including 892 epidemics between 2000 and 2023. However, EM-DAT has inconsistent coverage and deficiencies in the systematic monitoring of epidemic data due to the lack of a standardized methodology for defining what is included as an epidemic disaster. Methods: We conducted a sequential study that included an online survey of experts in infectious diseases, public health emergencies, and related data, followed by committee discussions with disaster experts. This approach aimed to identify appropriate definitions and entry criteria for archiving disease outbreak events. Results: The survey had 21 respondents from universities and international organizations, with experts primarily specialized in infectious disease surveillance. Experts agreed that epidemics should be considered disasters. They cited challenges in defining epidemic thresholds but proposed pathogen-based criteria and agreed that disruption to society, especially to the healthcare system, serves as a determinant of epidemic disasters. The experts favored deaths and confirmed cases as key indicators, alongside suggestions for refining EM-DAT's entry criteria and improving epidemic impact assessment. Discussion: This article offers valuable insights into epidemic disasters, a topic previously underdefined in the literature, thereby enhancing understanding for policymakers and public health professionals.
Disaster Management Redefined: Integrating SVM-AE Techniques with Remote Sensing and Meteorological Data
This study presents a novel hybrid model, the Support Vector Autoencoder (SVAE), designed to enhance disaster management through the integration of Support Vector Machines (SVM) and Autoencoders (AE). By leveraging the strengths of both machine learning techniques, the SVAE model offers improved accuracy and reliability in predicting and managing natural disasters. The methodology involves comprehensive data collection from Sentinel-2 satellite imagery and Global Precipitation Measurement (GPM) mission data, supplemented by historical disaster records from the Emergency Events Database (EM-DAT). After rigorous preprocessing, key features such as the Normalized Difference Vegetation Index (NDVI), land surface temperature (LST), soil moisture content, and various meteorological parameters are extracted. These features are then normalized and used to train the SVM for supervised learning and the AE for unsupervised learning. The outputs of these modules are integrated through a fusion layer, which combines classification scores and anomaly detection signals to generate a final risk score. Performance comparison with other models, including Random Forest, k-NN, Decision Tree, and Naive Bayes, demonstrates that the SVAE model achieves superior accuracy, precision, recall, and F1-score. The proposed model’s accuracy reaches 97%, significantly outperforming other techniques in anomaly detection and risk assessment. The results indicate that the SVAE model is a robust tool for enhancing disaster preparedness and mitigation efforts, providing timely and actionable insights to decision-makers.
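The fusion step the abstract describes (combining an SVM classification score with an autoencoder anomaly signal into a final risk score) can be illustrated with a minimal sketch. The NDVI formula is standard; the weighted-sum fusion, the `alpha` weight, and all numeric inputs below are assumptions, not the paper's implementation:

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index from near-infrared and red bands
    return (nir - red) / (nir + red)

def fuse_risk(svm_score, ae_recon_error, threshold, alpha=0.6):
    # Combine a supervised classification score with an unsupervised anomaly
    # signal (autoencoder reconstruction error); the weighting (alpha) is an
    # illustrative assumption, not the paper's fusion layer.
    anomaly = min(ae_recon_error / threshold, 1.0)  # clip to [0, 1]
    return alpha * svm_score + (1 - alpha) * anomaly

# Example: high classifier confidence plus reconstruction error above the
# anomaly threshold yields a high final risk score.
risk = fuse_risk(svm_score=0.9, ae_recon_error=2.5, threshold=1.0)
print(round(risk, 2))  # 0.94
```

In the paper's pipeline the SVM score would come from a trained classifier over features such as NDVI, LST, and soil moisture; here both inputs are stand-in numbers.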
Catastrophe risk financing in developing countries: principles for public intervention
'Catastrophe Risk Financing in Developing Countries' provides a detailed analysis of the imperfections and inefficiencies that impede the emergence of competitive catastrophe risk markets in developing countries. The book demonstrates how donors and international financial institutions can assist governments in middle- and low-income countries in promoting effective and affordable catastrophe risk financing solutions. The authors present guiding principles on how and when governments, with assistance from donors and international financial institutions, should intervene in catastrophe insurance markets. They also identify key activities to be undertaken by donors and institutions that would allow middle- and low-income countries to develop competitive and cost-effective catastrophe risk financing strategies at both the macro (government) and micro (household) levels. These principles and activities are expected to inform good practices and ensure desirable results in catastrophe insurance projects. 'Catastrophe Risk Financing in Developing Countries' offers valuable advice and guidelines to policy makers and insurance practitioners involved in the development of catastrophe insurance programs in developing countries.
Information visualization: perception for design
Most designers know that yellow text presented against a blue background reads clearly and easily, but how many can explain why, and what really are the best ways to help others and ourselves clearly see key patterns in a bunch of data?
Handbook of statistical analysis and data mining applications
The Handbook of Statistical Analysis and Data Mining Applications is a comprehensive professional reference book that guides business analysts, scientists, engineers and researchers (both academic and industrial) through all stages of data analysis, model building and implementation. The Handbook helps one discern the technical and business problem, understand the strengths and weaknesses of modern data mining algorithms, and employ the right statistical methods for practical application. Use this book to address massive and complex datasets with novel statistical approaches and to objectively evaluate analyses and solutions. It has clear, intuitive explanations of the principles and tools for solving problems using modern analytic techniques, and discusses their application to real problems in ways accessible and beneficial to practitioners across industries, from science and engineering to medicine, academia and commerce. This handbook brings together, in a single resource, all the information a beginner will need to understand the tools and issues in data mining and to build successful data mining solutions. Written "by practitioners for practitioners", it offers non-technical explanations that build understanding without jargon and equations; tutorials in numerous fields of study that provide step-by-step instruction on how to use the supplied tools to build models; practical advice from successful real-world implementations; and extensive case studies, examples, MS PowerPoint slides and datasets. A CD-DVD bound with the book includes fully working 90-day software: "Complete Data Miner - QC-Miner - Text Miner".
SAP on Azure Implementation Guide
Learn how to migrate your SAP data to Azure simply and successfully. Key features: learn why Azure is suitable for business-critical systems; understand how to migrate your SAP infrastructure to Azure; use lift & shift migration, lift & migrate, lift & migrate to HANA, or lift & transform to S/4HANA. Book description: Cloud technologies have now reached a level where even the most critical business systems can run on them. For most organizations SAP is the key business system; if SAP is unavailable for any reason, your business potentially stops. Because of this, it is understandable that you may be concerned whether such a critical system can run in the public cloud. However, the days when you truly ran your IT systems on-premises are long gone. Most organizations have been closing their own data centers and increasingly moving to co-location facilities. In this context, the public cloud is nothing more than an additional virtual data center connected to your existing network. There are typically two main reasons to consider migrating SAP to Azure: you need to replace the infrastructure that currently runs SAP, or you want to migrate SAP to a new database. Depending on your goal, SAP offers different migration paths: you can migrate the current workload to Azure as-is, or combine the move with a database change and execute both activities in a single step. SAP on Azure Implementation Guide covers the main migration options to lead you through migrating your SAP data to Azure simply and successfully.
What you will learn: successfully migrate your SAP infrastructure to Azure; understand the security benefits of Azure; see how Azure can scale to meet the most demanding business needs; ensure your SAP infrastructure maintains high availability; increase business agility through cloud capabilities; leverage cloud-native capabilities to enhance SAP. Who this book is for: SAP on Azure Implementation Guide is designed to benefit existing SAP architects looking to migrate their SAP infrastructure to Azure. Whether you are an architect implementing the migration or an IT decision maker evaluating the benefits of migration, this book is for you.
Practical text mining and statistical analysis for non-structured text data applications
Practical Text Mining and Statistical Analysis for Non-structured Text Data Applications brings together all the information, tools and methods a professional will need to efficiently use text mining applications and statistical analysis. Winner of a 2012 PROSE Award in Computing and Information Sciences from the Association of American Publishers, this book presents a comprehensive how-to reference that shows the user how to conduct text mining and statistically analyze results. In addition to providing an in-depth examination of core text mining and link detection tools, methods and operations, the book examines advanced preprocessing techniques, knowledge representation considerations, and visualization approaches. Finally, the book explores current real-world, mission-critical applications of text mining and link detection using real-world tutorials in such varied fields as corporate finance, business intelligence, genomics research, and counterterrorism activities. The world contains an unimaginably vast amount of digital information which is getting ever vaster ever more rapidly. This makes it possible to do many things that previously could not be done: spot business trends, prevent diseases, combat crime and so on. Managed well, textual data can be used to unlock new sources of economic value, provide fresh insights into science and hold governments to account. As the Internet expands and our natural capacity to process the unstructured text that it contains diminishes, the value of text mining for information retrieval and search will increase dramatically. Extensive case studies, most in a tutorial format, allow the reader to 'click through' each example using a software program, learning to conduct text mining analyses as rapidly as possible. Numerous examples, tutorials, PowerPoint slides and datasets are available via the companion website on Elsevierdirect.com, and a glossary of text mining terms is provided in the appendix.
The untold story of missing data in disaster research: a systematic review of the empirical literature utilising the Emergency Events Database (EM-DAT)
Global disaster databases are prone to missing data. Neglect or inappropriate handling of missing data can bias statistical analyses, risking the reliability of study results and the wider evidence base underlying climate and disaster policies. In this paper, a comprehensive systematic literature review was conducted to determine how missing data have been acknowledged and handled in disaster research. We sought empirical, quantitative studies that utilised the Emergency Events Database (EM-DAT) as a primary or secondary data source to capture an extensive sample of the disaster literature. Data on the acknowledgement and handling of missing data were extracted from all eligible studies. Descriptive statistics and univariate correlation analysis were used to identify trends in the consideration of missing data given specific study characteristics. Of the 433 eligible studies, 44.6% acknowledged missing data, albeit briefly, and 33.5% attempted to handle missing data. Studies with a higher page count were significantly (p < 0.01) less likely to acknowledge or handle missing data, whereas the research field of the publication journal distinguished papers that simply acknowledged missing data from those that both acknowledged and handled it (p < 0.100). A variety of methods to handle missing data (n = 24) were identified; however, these were commonly ad hoc with little statistical basis. The broad method used to handle missing data (imputation, augmentation, or deletion) was significantly (p < 0.001) correlated with the geographical scope of the study. This systematic review reveals large failings of the disaster literature to adequately acknowledge and handle missing data. Given these findings, more insight is required to guide a standard practice of handling missing data in disaster research.
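Two of the broad handling strategies the review categorizes, imputation and deletion, can be sketched on toy records; mean imputation and listwise deletion are chosen here purely for illustration, and the values are made up:

```python
def mean_impute(values):
    # Replace missing entries (None) with the mean of the observed values
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def listwise_delete(records):
    # Drop any record containing a missing field
    return [r for r in records if None not in r]

fatalities = [120, None, 75, 300, None]
print(mean_impute(fatalities))  # missing entries become 165.0

rows = [(1, 2), (3, None), (5, 6)]
print(listwise_delete(rows))    # [(1, 2), (5, 6)]
```

As the review notes, the choice between these strategies is consequential: deletion shrinks the sample and can bias it when data are not missing at random, while naive mean imputation understates variance.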
Data mining: concepts and techniques
Our ability to generate and collect data has been increasing rapidly. Not only are all of our business, scientific, and government transactions now computerized, but the widespread use of digital cameras, publication tools, and bar codes also generates data. On the collection side, scanned text and image platforms, satellite remote sensing systems, and the World Wide Web have flooded us with a tremendous amount of data. This explosive growth has generated an even more urgent need for new techniques and automated tools that can help us transform this data into useful information and knowledge. Like the first edition, voted the most popular data mining book by KDnuggets readers, this book explores concepts and techniques for the discovery of patterns hidden in large data sets, focusing on issues relating to their feasibility, usefulness, effectiveness, and scalability. However, since the publication of the first edition, great progress has been made in the development of new data mining methods, systems, and applications. This new edition substantially enhances the first, and new chapters have been added to address recent developments in mining complex types of data, including stream data, sequence data, graph-structured data, social network data, and multi-relational data.