88 results for "Segoni, I"
CMS data quality monitoring: Systems and experiences
In the last two years the CMS experiment has commissioned a full end-to-end data quality monitoring system in tandem with progress in the detector commissioning. We present the data quality monitoring and certification systems in place, from online data taking to delivering certified data sets for physics analyses, release validation and offline re-reconstruction activities at Tier-1s. We discuss the main results and lessons learnt so far in the commissioning and early detector operation. We outline our practical operations arrangements and the key technical implementation aspects.
CMS event display and data quality monitoring at LHC start-up
The event display and data quality monitoring visualisation systems are especially crucial for commissioning CMS in the imminent physics run at the LHC. They have already proved invaluable for the CMS magnet test and cosmic challenge. We describe how these systems are used to navigate and filter the immense amounts of complex event data from the CMS detector and prepare clear and flexible views of the salient features for the shift crews and offline users. These allow shift staff and experts to navigate from a top-level general view to very specific monitoring elements in real time, to help validate data quality and ascertain causes of problems. We describe how events may be accessed in the higher level trigger filter farm, at the CERN Tier-0 centre, and in offsite centres to help ensure good data quality at all points in the data processing workflow. Emphasis has been placed on deployment issues in order to ensure that experts and general users may use the visualisation systems at CERN, in remote operations and monitoring centres offsite, and from their own desktops.
Gas Analysis and Monitoring Systems for the RPC Detector of CMS at LHC
The Resistive Plate Chamber (RPC) detector of the CMS experiment at the LHC proton collider (CERN, Switzerland) will employ an online gas analysis and monitoring system for the freon-based gas mixture used. We give an overview of the CMS RPC gas system, describe the project parameters and present first results of the gas-chromatograph analysis. Finally, we report preliminary results for a set of monitor RPCs.
A review of the recent literature on rainfall thresholds for landslide occurrence
The topic of rainfall thresholds for landslide occurrence has been thoroughly investigated, producing an abundance of case studies at different scales of analysis and several technical and scientific advances. We reviewed the most recent papers published in scientific journals, highlighting significant advances and critical issues. We collected and grouped all the information on rainfall thresholds into four categories: publication details, geographical distribution and uses, dataset features, and threshold definition. In each category, we selected descriptive information to characterize each of the 115 rainfall thresholds published in the last 9 years. The main improvements that stood out from the review are the definition of standard procedures for the identification of rainfall events and for the objective definition of the thresholds. Numerous advances were also achieved in the cataloguing of landslides, which, together with rainfall data, is one of the most important variables for drawing reliable thresholds. Another focal point of the reviewed articles was the increasingly common definition of thresholds with different exceedance probabilities, to be employed to define warning levels in landslide early warning systems. Nevertheless, drawbacks and criticisms can be identified in most of the recent literature on rainfall thresholds. The main issues concern the validation process, which is seldom carried out, and the very frequent lack of explanation of the rain gauge selection procedure. The paper may be used as a guide to find adequate literature on the most used or the most advanced approaches followed in every step of the procedure for defining reliable rainfall thresholds. It therefore constitutes a guideline for future studies and applications, in particular in early warning systems. The paper also aims to address the gaps that need to be filled to further enhance the quality of research products in this field. The contribution of this manuscript can be seen not only as a review of the state of the art, but also as an effective means to disseminate best practices among scientists and stakeholders involved in landslide hazard management.
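For readers unfamiliar with the concept, a rainfall threshold is often expressed as a power-law intensity-duration curve, and curves defined for different exceedance probabilities can be mapped onto warning levels. The following Python sketch illustrates that idea only; the coefficients and the level mapping are hypothetical placeholders, not values taken from the reviewed papers.

```python
# Minimal sketch of power-law intensity-duration (I-D) rainfall thresholds.
# The coefficients below are hypothetical placeholders, not values from the review.

THRESHOLDS = {            # exceedance probability -> (alpha, beta) in I = alpha * D**beta
    "5%":  (4.0, -0.45),  # lower curve: few landslide-triggering events fall below it
    "50%": (9.0, -0.45),  # median curve
}

def intensity_threshold(duration_h: float, alpha: float, beta: float) -> float:
    """Critical mean intensity (mm/h) for a rainfall event of given duration (h)."""
    return alpha * duration_h ** beta

def warning_level(duration_h: float, mean_intensity_mmh: float) -> str:
    """Map a rainfall event onto a warning level by comparing it with the curves."""
    exceeded = [p for p, (a, b) in THRESHOLDS.items()
                if mean_intensity_mmh >= intensity_threshold(duration_h, a, b)]
    if "50%" in exceeded:
        return "high"
    if "5%" in exceeded:
        return "moderate"
    return "ordinary"

# Example: a 24 h event with a mean intensity of 2.5 mm/h exceeds both curves
print(warning_level(24.0, 2.5))
```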
Landslide susceptibility estimation by random forests technique: sensitivity and scaling issues
Despite the large number of recent advances and developments in landslide susceptibility mapping (LSM), there is still a lack of studies focusing on specific aspects of LSM model sensitivity. For example, the influence of factors such as the survey scale of the landslide conditioning variables (LCVs), the mapping unit resolution (MUR) and the optimal number and ranking of LCVs has never been investigated analytically, especially on large data sets. In this paper we attempt this experimentation, concentrating on the impact of model tuning choices on the final result rather than on the comparison of methodologies. To this end, we adopt a simple implementation of the random forest (RF), a machine learning technique, to produce an ensemble of landslide susceptibility maps for a set of different model settings, input data types and scales. Random forest is a combination of Bayesian trees that relates a set of predictors to the actual landslide occurrence. Since it is a nonparametric model, it is possible to incorporate a range of numerical or categorical data layers, and there is no need to select unimodal training data as, for example, in linear discriminant analysis. Many widely acknowledged landslide predisposing factors are taken into account, mainly related to lithology, land use, geomorphology, and structural and anthropogenic constraints. In addition, for each factor we also include in the predictor set a measure of the standard deviation (for numerical variables) or the variety (for categorical ones) over the map unit. As in other systems, the use of RF enables one to estimate the relative importance of the single input parameters and to select the optimal configuration of the classification model. The model is initially applied using the complete set of input variables; then an iterative process is implemented and progressively smaller subsets of the parameter space are considered. The impact of the scale and accuracy of input variables, as well as the effect of the random component of the RF model on the susceptibility results, are also examined. The model is tested in the Arno River basin (central Italy). We find that the dimension of the parameter space, the mapping unit (scale) and the training process strongly influence the classification accuracy and the prediction process. This, in turn, implies that a careful sensitivity analysis making use of traditional and new tools should always be performed before producing final susceptibility maps, at all levels and scales.
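The iterative part of the workflow described above (fit a random forest on the full predictor set, rank variable importance, then retrain on progressively smaller subsets while tracking accuracy) can be roughly sketched as follows. This is a minimal scikit-learn illustration on placeholder arrays, not the authors' implementation or data.

```python
# Minimal sketch (not the authors' code): random-forest susceptibility with
# iterative predictor elimination based on variable importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))   # placeholder conditioning variables (slope, land use codes, ...)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=2000) > 0).astype(int)  # placeholder landslide presence/absence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
features = list(range(X.shape[1]))

while len(features) >= 2:
    rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
    rf.fit(X_tr[:, features], y_tr)
    auc = roc_auc_score(y_te, rf.predict_proba(X_te[:, features])[:, 1])
    print(f"{len(features):2d} predictors  OOB={rf.oob_score_:.3f}  AUC={auc:.3f}")
    # drop the least important predictor and repeat
    features.pop(int(np.argmin(rf.feature_importances_)))
```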
Who Wants to Be a Geomorphologist? Gamification in a BSc Teaching Course
Despite the importance of Earth sciences in addressing the global challenges that humanity is presently facing, attention toward the related disciplines has been declining globally at various levels, including education and university teaching. To increase students' engagement and explore alternative teaching activities, a didactical experiment was carried out at the University of Florence (Italy); the teaching course "basic elements of geomorphology" was reorganized to include relevant elements of gamification. In parallel with the traditional lectures, a competition based on a recurring quiz game was conducted. This activity was called "Who wants to be a Geomorphologist?", clearly paraphrasing a well-known TV show. Every lesson included a moment in which the students used their mobile devices to access a series of quizzes, prepared in advance by the teacher to test the students' reasoning skills and their ability to make connections between distinct topics. A commercial educational app was used to organize the activity, run the quiz sessions, assign points, and update the leaderboard in real time. A quantitative evaluation procedure assessed the positive impacts in terms of supporting the learning process, improving engagement in the teaching course, and fostering a liking for geomorphology.
Optimization of SVR and CatBoost models using metaheuristic algorithms to assess landslide susceptibility
In this study, a landslide susceptibility assessment is performed by combining two machine learning regression algorithms (MLRA), namely support vector regression (SVR) and categorical boosting (CatBoost), with two population-based optimization algorithms, namely the grey wolf optimizer (GWO) and particle swarm optimization (PSO), to evaluate the potential of a relatively new algorithm and the impact that optimization algorithms can have on the performance of regression models. The state of Kerala in India was chosen as the test site due to the large number of incidents recorded in the recent past. The study started with 18 potential predisposing factors, which were reduced to 14 after a multi-approach feature selection procedure. Six susceptibility models were implemented and compared, using the machine learning algorithms alone and combining each of them with the two optimization algorithms: SVR, CatBoost, SVR-PSO, CatBoost-PSO, SVR-GWO, and CatBoost-GWO. The resulting maps were validated with an independent dataset. The performance rankings, based on the area under the receiver operating characteristic curve (AUC), are as follows: CatBoost-GWO (AUC = 0.910) had the highest performance, followed by CatBoost-PSO (AUC = 0.909), CatBoost (AUC = 0.899), SVR-GWO (AUC = 0.868), SVR-PSO (AUC = 0.858), and SVR (AUC = 0.840). Other validation statistics corroborated these outcomes, and the Friedman and Wilcoxon signed-rank tests verified the statistical significance of the differences among the models. Our case study showed that CatBoost outperformed SVR whether or not the models were optimized; the introduction of optimization algorithms significantly improved the results of the machine learning models, with GWO being slightly more effective than PSO. However, optimization cannot drastically alter the results of a model, highlighting the importance of setting up a rigorous susceptibility model from the early steps of any research.
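As a rough illustration of how a population-based optimizer can be wrapped around a regression model for susceptibility mapping, the sketch below tunes the C and gamma hyperparameters of scikit-learn's SVR with a hand-written particle swarm loop, scoring candidates by validation AUC. The data, swarm settings and search ranges are placeholders and do not reflect the configuration used in the paper.

```python
# Minimal sketch (not the paper's implementation): PSO tuning of SVR hyperparameters
# (C, gamma), with validation AUC as the fitness to maximize.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1500, 14))                                   # 14 placeholder predisposing factors
y = (X[:, 0] - X[:, 1] + rng.normal(size=1500) > 0).astype(int)   # placeholder landslide labels
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=1)

def fitness(log_c, log_gamma):
    """Validation AUC of an SVR trained with the candidate hyperparameters."""
    model = SVR(C=10.0 ** log_c, gamma=10.0 ** log_gamma).fit(X_tr, y_tr)
    return roc_auc_score(y_va, model.predict(X_va))   # continuous scores are valid for AUC

# Plain PSO over log10(C) in [-2, 2] and log10(gamma) in [-4, 0]
n_particles, n_iters, w, c1, c2 = 10, 15, 0.7, 1.5, 1.5
lo, hi = np.array([-2.0, -4.0]), np.array([2.0, 0.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(*p) for p in pos])
gbest = pbest[np.argmax(pbest_f)]

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(*p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmax(pbest_f)]

print(f"best C={10**gbest[0]:.3g}, gamma={10**gbest[1]:.3g}, AUC={pbest_f.max():.3f}")
```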
Landslide susceptibility assessment in complex geological settings: sensitivity to geological information and insights on its parameterization
The literature on landslide susceptibility mapping is rich in works focusing on improving or comparing the algorithms used for the modelling, but, to our knowledge, a sensitivity analysis on the use of geological information has never been performed, and a standard method to input geological maps into susceptibility assessments has never been established. This point is crucial, especially when working on wide and complex areas, in which a detailed geological map needs to be reclassified according to more general criteria. In a study area in Italy, we tested different configurations of a random forest-based landslide susceptibility model, accounting for geological information with the use of lithologic, chronologic, structural, paleogeographic, and genetic units. Different susceptibility maps were obtained, and a validation procedure based on the AUC (area under the receiver operating characteristic curve) and the OOBE (out-of-bag error) allowed us to draw some conclusions that could be of help in future landslide susceptibility assessments. Different parameters can be derived from a detailed geological map by aggregating the mapped elements into broader units, and the results of the susceptibility assessment are very sensitive to these geology-derived parameters; thus, it is of paramount importance to properly understand the nature and meaning of the information provided by geology-related maps before using them in susceptibility assessments. Among the model configurations making use of only one parameter, the best results were obtained using the genetic approach, while lithology, which is commonly used in the current literature, ranked only second. However, in our case study, the best prediction was obtained when all the geological parameters were used together. Geological maps provide very complex and multifaceted information; in wide and complex areas, this information cannot be represented by a single parameter: several geology-based parameters can perform better than one, because each of them can account for specific features connected to landslide predisposition.
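One schematic way to run the kind of sensitivity test described above is to train the same random-forest model on alternative geological parameterizations of the predictor set and compare AUC and out-of-bag error. The sketch below does this on placeholder data with hypothetical, integer-coded unit groupings; it is not the study's code.

```python
# Minimal sketch (not the study's code): same RF model, different geological
# parameterizations, compared via AUC and out-of-bag error (OOBE).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 3000
morphometry = rng.normal(size=(n, 5))        # placeholder slope, curvature, ...
litho = rng.integers(0, 8, size=(n, 1))      # hypothetical lithologic units (integer-coded)
genetic = rng.integers(0, 4, size=(n, 1))    # hypothetical genetic units
chrono = rng.integers(0, 6, size=(n, 1))     # hypothetical chronologic units
y = (morphometry[:, 0] + 0.3 * (genetic[:, 0] == 2) + rng.normal(size=n) > 0).astype(int)

configs = {
    "lithology only": np.hstack([morphometry, litho]),
    "genetic only":   np.hstack([morphometry, genetic]),
    "all geological": np.hstack([morphometry, litho, genetic, chrono]),
}

for name, X in configs.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)
    rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=2).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
    print(f"{name:15s}  AUC={auc:.3f}  OOBE={1.0 - rf.oob_score_:.3f}")
```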
Root Reinforcement in Slope Stability Models: A Review
The influence of vegetation on mechanical and hydrological soil behavior represents a significant factor to be considered in shallow landslide modelling. Among the multiple effects exerted by vegetation, root reinforcement is widely recognized as one of the most relevant for slope stability. Lately, the literature has been greatly enriched by novel research on this phenomenon. To investigate which aspects have been treated most, which results have been obtained and which aspects require further attention, we reviewed papers published during the period 2015–2020 dealing with root reinforcement. This paper, after introducing the main effects of vegetation on slope stability and recalling the reference studies, provides a synthesis of the main contributions to the following subtopics: (i) approaches for estimating the distribution of root reinforcement at a regional scale; (ii) new slope stability models that include root reinforcement; and (iii) the influence of particular plant species, forest management, forest structure, wildfires and the soil moisture gradient on root reinforcement. Including root reinforcement in slope stability analysis has proved to be a topic receiving growing attention, particularly in Europe; research interest is also emerging in Asia. Despite recent advances, including root reinforcement in regional models still represents a research challenge because of its high spatial and temporal variability: only a few applications covering areas of hundreds of square kilometres are reported. The most promising and necessary future research directions include the study of soil moisture gradient and wildfire controls on root strength, as these aspects have not yet been fully integrated into slope stability modelling.
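One common way root reinforcement enters slope stability models is as an additional cohesion term in the infinite-slope factor of safety. The snippet below illustrates that textbook-style formulation with hypothetical soil and root parameters; it is not a model taken from the reviewed papers.

```python
# Illustrative infinite-slope factor of safety with root cohesion added to the
# effective soil cohesion; all parameter values below are hypothetical.
import math

def factor_of_safety(c_soil, c_root, phi_deg, gamma, z, beta_deg, u=0.0):
    """FS = [c' + c_r + (gamma*z*cos^2(beta) - u) * tan(phi')] / [gamma*z*sin(beta)*cos(beta)]"""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c_soil + c_root + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Hypothetical case: 1 m deep soil on a 35 degree slope, with and without roots
base = dict(c_soil=2.0e3, phi_deg=30.0, gamma=18.0e3, z=1.0, beta_deg=35.0)  # Pa, deg, N/m^3, m
print("FS without roots:", round(factor_of_safety(c_root=0.0, **base), 2))
print("FS with 3 kPa root cohesion:", round(factor_of_safety(c_root=3.0e3, **base), 2))
```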
A step beyond landslide susceptibility maps: a simple method to investigate and explain the different outcomes obtained by different approaches
Landslide susceptibility assessment is vital for landslide risk management and urban planning, and the scientific community is continuously proposing new approaches to map landslide susceptibility, especially by hybridizing state-of-the-art models and by proposing new ones. A common practice in landslide susceptibility studies is to compare (two or more) different models in terms of AUC (area under the ROC curve) to assess which one has the best predictive performance. The objective of this paper is to show that the classical scheme of comparison between susceptibility models can be expanded and enriched with substantial geomorphological insights by focusing the comparison on the mapped susceptibility values and investigating the geomorphological reasons for the differences encountered. To this aim, we used four susceptibility maps of Wanzhou County (China) obtained with four different classification methods (namely, random forest, index of entropy, frequency ratio, and certainty factor). A quantitative comparison of the susceptibility values was carried out on a pixel-by-pixel basis to reveal systematic spatial patterns in the differences among the susceptibility maps; then, those patterns were related to all the explanatory variables used in the susceptibility assessments. The lithological and morphological features of the study area that are typically associated with underestimations and overestimations of susceptibility were identified. The results shed new light on the susceptibility models, identifying systematic errors that can probably be associated either with shortcomings of the models or with distinctive morphological features of the test site, such as nearly flat, low-altitude areas near the main rivers, and some lithological units.
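A bare-bones version of the pixel-by-pixel comparison described above is sketched below: two susceptibility rasters are differenced and the differences are summarized per class of an explanatory raster (here a hypothetical lithology grid). The arrays are random placeholders, not the Wanzhou data.

```python
# Minimal sketch (placeholder arrays, not the Wanzhou data): pixel-wise difference
# between two susceptibility maps, summarized per lithological class.
import numpy as np

rng = np.random.default_rng(3)
shape = (400, 400)
susc_a = rng.random(shape)                                             # susceptibility from model A (e.g. random forest)
susc_b = np.clip(susc_a + rng.normal(scale=0.1, size=shape), 0, 1)     # susceptibility from model B (e.g. frequency ratio)
lithology = rng.integers(0, 5, size=shape)                             # hypothetical lithological classes 0..4

diff = susc_a - susc_b                                                 # positive: model A maps higher susceptibility
for cls in np.unique(lithology):
    mask = lithology == cls
    print(f"lithology {cls}: mean diff={diff[mask].mean():+.3f}, "
          f"share of |diff|>0.2: {(np.abs(diff[mask]) > 0.2).mean():.1%}")
```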