48 result(s) for "Templ, S"
Measurement of the differential tt̄ production cross section as a function of the jet mass and extraction of the top quark mass in hadronic decays of boosted top quarks
A measurement of the jet mass distribution in hadronic decays of Lorentz-boosted top quarks is presented. The measurement is performed in the lepton + jets channel of top quark pair production (tt̄) events, where the lepton is an electron or muon. The products of the hadronic top quark decay are reconstructed using a single large-radius jet with transverse momentum greater than 400 GeV. The data were collected with the CMS detector at the LHC in proton-proton collisions and correspond to an integrated luminosity of 138 fb⁻¹. The differential production cross section as a function of the jet mass is unfolded to the particle level and is used to extract the top quark mass. The jet mass scale is calibrated using the hadronic W boson decay within the large-radius jet. The uncertainties in the modelling of the final-state radiation are reduced by studying angular correlations in the jet substructure. These developments lead to a significant increase in precision in the extracted top quark mass.
Search for exotic decays of the Higgs boson to a pair of pseudoscalars in the μμbb and ττbb final states
A search for exotic decays of the Higgs boson (H) with a mass of 125 GeV to a pair of light pseudoscalars (a) is performed in final states where one pseudoscalar decays to two b quarks and the other to a pair of muons or τ leptons. A data sample of proton-proton collisions at √s = 13 TeV corresponding to an integrated luminosity of 138 fb⁻¹ recorded with the CMS detector is analyzed. No statistically significant excess is observed over the standard model backgrounds. Upper limits are set at 95% confidence level (CL) on the Higgs boson branching fraction to μμbb and to ττbb via a pair of pseudoscalars. The limits depend on the pseudoscalar mass and are observed to be in the range (0.17-3.3) and (1.7-7.7) in the μμbb and ττbb final states, respectively. In the framework of models with two Higgs doublets and a complex scalar singlet (2HDM+S), the results of the two final states are combined to determine upper limits on the branching fraction to ℓℓbb at 95% CL, with ℓ being a muon or a τ lepton. For different types of 2HDM+S, upper bounds on the branching fraction are extracted from the combination of the two channels. In most of the Type II 2HDM+S parameter space, branching fraction values above 0.23 are excluded at 95% CL for pseudoscalar masses between 15 and 60 GeV.
Measurement of the top quark mass using a profile likelihood approach with the lepton + jets final states in proton-proton collisions at √s = 13 TeV
The mass of the top quark is measured in 36.3 fb⁻¹ of LHC proton-proton collision data collected with the CMS detector at √s = 13 TeV. The measurement uses a sample of top quark pair candidate events containing one isolated electron or muon and at least four jets in the final state. For each event, the mass is reconstructed from a kinematic fit of the decay products to a top quark pair hypothesis. A profile likelihood method is applied using up to four observables per event to extract the top quark mass. This approach significantly improves the precision over previous measurements.
Azimuthal correlations in Z+jets events in proton-proton collisions at √s = 13 TeV
The production of Z bosons associated with jets is measured in proton-proton collisions at √s = 13 TeV with data recorded with the CMS experiment at the LHC corresponding to an integrated luminosity of 36.3 fb⁻¹. The multiplicity of jets is measured for different regions of the Z boson's transverse momentum, from lower than 10 GeV to higher than 100 GeV. The azimuthal correlation between the Z boson and the leading jet, as well as the correlations between the two leading jets, are measured in three regions of the Z boson's transverse momentum. The measurements are compared with several predictions at leading and next-to-leading orders, interfaced with parton showers. Predictions based on transverse-momentum-dependent parton distributions and corresponding parton showers give a good description of the measurement in the regions where multiple parton interactions and higher jet multiplicities are not important. The effects of multiple parton interactions are shown to be important to correctly describe the measured spectra in the low transverse momentum regions.
A search for decays of the Higgs boson to invisible particles in events with a top-antitop quark pair or a vector boson in proton-proton collisions at √s = 13 TeV
A search for decays to invisible particles of Higgs bosons produced in association with a top-antitop quark pair or a vector boson, which both decay to a fully hadronic final state, has been performed using proton-proton collision data collected at √s = 13 TeV by the CMS experiment at the LHC, corresponding to an integrated luminosity of 138 fb⁻¹. The 95% confidence level upper limit set on the branching fraction of the 125 GeV Higgs boson to invisible particles, B(H → inv), is 0.54 (0.39 expected), assuming standard model production cross sections. The results of this analysis are combined with previous searches carried out at √s = 7, 8, and 13 TeV in complementary production modes. The combined upper limit at 95% confidence level on B(H → inv) is 0.15 (0.08 expected).
Measurement of the mass dependence of the transverse momentum of lepton pairs in Drell-Yan production in proton-proton collisions at √s = 13 TeV
The double differential cross sections of Drell-Yan lepton pair (ℓℓ, dielectron or dimuon) production are measured as functions of the invariant mass mℓℓ, the transverse momentum pT, and the angular variable φ*. The φ* observable, derived from angular measurements of the leptons and highly correlated with pT, is used to probe the low-pT region in a complementary way. Dilepton masses up to 1 TeV are investigated. Additionally, a measurement is performed requiring at least one jet in the final state. To benefit from partial cancellation of the systematic uncertainty, the ratios of the differential cross sections for various mℓℓ ranges to those in the Z mass peak interval are presented. The collected data correspond to an integrated luminosity of 36.3 fb⁻¹ of proton-proton collisions recorded with the CMS detector at the LHC at a centre-of-mass energy of 13 TeV. Measurements are compared with predictions based on perturbative quantum chromodynamics, including soft-gluon resummation.
Measurements of the Higgs boson production cross section and couplings in the W boson pair decay channel in proton-proton collisions at √s = 13 TeV
Production cross sections of the standard model Higgs boson decaying to a pair of W bosons are measured in proton-proton collisions at a center-of-mass energy of 13 TeV. The analysis targets Higgs bosons produced via gluon fusion, vector boson fusion, and in association with a W or Z boson. Candidate events are required to have at least two charged leptons and moderate missing transverse momentum, targeting events with at least one leptonically decaying W boson originating from the Higgs boson. Results are presented in the form of inclusive and differential cross sections in the simplified template cross section framework, as well as couplings of the Higgs boson to vector bosons and fermions. The data set collected by the CMS detector during 2016-2018 is used, corresponding to an integrated luminosity of 138 fb⁻¹. The signal strength modifier μ, defined as the ratio of the observed production rate in a given decay channel to the standard model expectation, is also measured. All results are found to be compatible with the standard model within the uncertainties.
Search for light Higgs bosons from supersymmetric cascade decays in pp collisions at √s = 13 TeV
A search is reported for pairs of light Higgs bosons (h) produced in supersymmetric cascade decays in final states with small missing transverse momentum. A data set of LHC proton-proton collisions collected with the CMS detector at √s = 13 TeV and corresponding to an integrated luminosity of 138 fb⁻¹ is used. The search targets events where both h bosons decay into b quark pairs that are reconstructed as large-radius jets using substructure techniques. No evidence is found for an excess of events beyond the background expectations of the standard model (SM). Results from the search are interpreted in the next-to-minimal supersymmetric extension of the SM, where a "singlino" of small mass leads to squark and gluino cascade decays that can predominantly end in a highly Lorentz-boosted singlet-like h boson and a singlino-like neutralino of small transverse momentum. Upper limits are set on the product of the squark or gluino pair production cross section and the square of the branching fraction of the h boson in a benchmark model containing almost mass-degenerate gluinos and light-flavour squarks. Under the assumption of an SM-like branching fraction, h bosons with masses in the range 40-120 GeV arising from the decays of squarks or gluinos with a mass of 1200-2500 GeV are excluded at 95% confidence level.
Statistical Analysis of Chemical Element Compositions in Food Science: Problems and Possibilities
In recent years, many analyses have been carried out to investigate the chemical components of food data. However, studies rarely consider the compositional pitfalls of such analyses. This is problematic, as it may lead to arbitrary results when non-compositional statistical analysis is applied to compositional datasets. In this study, compositional data analysis (CoDa), which is widely used in other research fields, is compared with classical statistical analysis to demonstrate how the results vary depending on the approach and to show the best possible statistical analysis. For example, honey and saffron are highly susceptible to adulteration and imitation, so the determination of their chemical elements requires the best possible statistical analysis. Our study demonstrated how principal component analysis (PCA) and classification results are influenced by the pre-processing steps conducted on the raw data and by the replacement strategies for missing values and non-detects. Furthermore, it demonstrated the differences in results when compositional and non-compositional methods were applied. Our results suggested that the log-ratio analysis provided better separation between the pure and adulterated data, allowed for easier interpretability of the results, and gave a higher accuracy of classification. Similarly, it showed that classification with artificial neural networks (ANNs) works poorly if the CoDa pre-processing steps are left out. From these results, we advise the application of CoDa methods for analyses of the chemical elements of food and for the characterization and authentication of food products.
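The CoDa pre-processing step the abstract refers to can be illustrated with a minimal sketch of the centred log-ratio (clr) transformation, one standard way to move compositional data off the simplex before applying tools such as PCA. This is a generic textbook illustration, not the paper's actual pipeline; the function names and example values are invented for the demonstration.

```python
import math

def closure(x):
    """Rescale a composition so its parts sum to 1 (the simplex representation)."""
    total = sum(x)
    return [xi / total for xi in x]

def clr(x):
    """Centred log-ratio transform: log of each part relative to the
    geometric mean. This removes the unit-sum constraint, so standard
    multivariate methods (e.g. PCA) can be applied to the coordinates."""
    logs = [math.log(xi) for xi in x]
    gmean_log = sum(logs) / len(logs)
    return [li - gmean_log for li in logs]

# Two hypothetical element profiles that differ only in total amount
# (e.g. dilution): raw values differ, but the compositions agree,
# and the clr coordinates are identical - only the ratios matter.
a = [2.0, 4.0, 8.0]
b = [1.0, 2.0, 4.0]  # same relative pattern, half the total

print(clr(closure(a)))
print(clr(closure(b)))
```

The key property shown here is why non-compositional analysis of raw concentrations can be misleading: two samples with the same relative element pattern can look different in raw units but are identical after the log-ratio transform.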
Enhancing Precision in Large-Scale Data Analysis: An Innovative Robust Imputation Algorithm for Managing Outliers and Missing Values
Navigating the intricate world of data analytics, one method has emerged as a key tool in confronting missing data: multiple imputation. Its strength is further fortified by its powerful variant, robust imputation, which enhances the precision and reliability of its results. In the challenging landscape of data analysis, non-robust methods can be swayed by a few extreme outliers, leading to skewed imputations and biased estimates. This applies to both representative outliers (true yet unusual values of the population) and non-representative outliers, which are mere measurement errors. Detecting these outliers in large or high-dimensional data sets often becomes as complex as unraveling a Gordian knot. The solution is to turn to robust imputation methods, which effectively manage outliers and exhibit remarkable resistance to their influence, providing a more reliable approach to dealing with missing data. Moreover, these robust methods offer flexibility, accommodating cases where the imputation model is not a perfect fit. They are akin to a well-designed buffer system, absorbing slight deviations without compromising overall stability. In the latest advancement of statistical methodology, a new robust imputation algorithm has been introduced. This innovative solution addresses three significant challenges with robustness. It utilizes robust bootstrapping to manage model uncertainty during the imputation of a random sample; it incorporates robust fitting to reinforce accuracy; and it takes into account imputation uncertainty in a resilient manner. Furthermore, any complex regression or classification model for any variable with missing data can be run through the algorithm. With this new algorithm, we move one step closer to optimizing the accuracy and reliability of handling missing data.
Using a realistic data set and a simulation study including a sensitivity analysis, the new algorithm imputeRobust shows excellent performance compared with other common methods. Its effectiveness was demonstrated by measures of precision for the prediction error, the coverage rates, and the mean square errors of the estimators, as well as by visual comparisons.