454 results for "Laycock, P J"
ATLAS data preparation in Run 2
In this contribution, the data preparation workflows for Run 2 are presented. The challenges posed by the excellent performance and high live time fraction of the LHC are discussed, and the solutions implemented by ATLAS are described. The prompt calibration loop procedures are described and examples are given. Several levels of data quality assessment are used to quickly spot problems in the control room and prevent data loss, and to provide the final selection used for physics analysis. Finally, the data quality efficiency for physics analysis is shown.
A Conditions Data Management System for HEP Experiments
The conditions data infrastructures of both ATLAS and CMS have to deal with the management of several terabytes of data [1, 2]. Distributed computing access to these data requires particular care and attention to manage request rates of up to several tens of kHz. Thanks to the large overlap in use cases and requirements, ATLAS and CMS have worked towards a common solution for conditions data management, with the aim of using this design for data-taking in Run 3. In the meantime, other experiments, including NA62, have expressed an interest in this cross-experiment initiative. For experiments with a smaller payload volume and complexity, there is particular interest in having simple payload storage. The conditions data management model is implemented in a small set of relational database tables. A prototype access toolkit, consisting of an intermediate web server, has been implemented using standard technologies available in the Java community. Access is provided through a set of REST services whose API has been described in a generic way using standard OpenAPI specifications, implemented in Swagger. Such a solution allows the automatic generation of client code and server stubs, and further allows the backend technology to be changed transparently. An important advantage of using a REST API for conditions access is the possibility of caching identical URLs: standard web proxy solutions can then serve repeated requests without direct database access, addressing one of the biggest challenges that large distributed computing imposes on conditions data access.
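The caching point above can be illustrated with a minimal sketch of a read-only client: because each request for a given tag and run maps to an identical URL, repeated look-ups can be answered from a cache (or, at scale, from a standard web proxy) instead of reaching the database. The server address, endpoint path and payload fields below are hypothetical, chosen only to illustrate the pattern; they are not the API of the prototype toolkit described in the abstract.

    import json
    import urllib.request
    from functools import lru_cache

    BASE_URL = "http://conditions.example.org/api"  # hypothetical server

    @lru_cache(maxsize=1024)
    def fetch_json(url: str) -> dict:
        # Identical URLs are fetched from the server only once;
        # subsequent calls are answered from the local cache.
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read().decode("utf-8"))

    def get_conditions(tag: str, run: int) -> dict:
        # The URL is fully determined by (tag, run), so it is cache-friendly
        # both here and in any intermediate web proxy.
        return fetch_json(f"{BASE_URL}/iovs?tag={tag}&run={run}")

    if __name__ == "__main__":
        # Illustrative tag and run number only.
        print(get_conditions("pixel-alignment-v3", 358031))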
Derived Physics Data Production in ATLAS: Experience with Run 1 and Looking Ahead
While a significant fraction of ATLAS physicists directly analyse the AOD (Analysis Object Data) produced at the CERN Tier 0, a much larger fraction has opted to analyse data in a flat ROOT format. The large-scale production of this Derived Physics Data (DPD) format must cater both for detailed performance studies of the ATLAS detector and object reconstruction, and for higher-level and generally lighter-content physics analysis. The delay between data-taking and DPD production allows for software improvements, while the ease of arbitrarily defined skimming/slimming of this format results in an optimally performant format for end-user analysis. Given the diversity of requirements, there are many flavours of DPDs, which can result in large peak computing resource demands. While the current model has proven to be very flexible for the individual groups and has successfully met the needs of the collaboration, the resource requirements at the end of Run 1 were much larger than planned. In the near future, ATLAS plans to consolidate DPD production, optimising resource usage versus flexibility such that the final analysis format will be more homogeneous across ATLAS while still keeping most of the advantages enjoyed during Run 1. The ATLAS Run 1 DPD Production Model is presented along with an overview of the resource usage at the end of Run 1, followed by an outlook on future plans.
The Urban Heat Island in Manchester 1996–2011
The urban heat island intensity (the difference between semi-rural and urban dry bulb air temperatures) has been analysed for Manchester using data from 1996 to 2011. The semi-rural sites were airfields and the urban site was 2 km from the centre of Manchester. Although the urban site was not as developed as the city centre, it showed a significant urban heat island intensity. A stochastic model developed from the data showed that the maximum mean daily value would be about 6℃, which compared well with more detailed measurements. However, there was a highly significant trend of increasing urban heat island intensity which, by the end of the century, could add 2.4 K to the predicted climate change increase. An analysis of the urban morphology showed that the urban site had indeed become more urban over 9 years of the study, losing green spaces that mitigate the urban heat island intensity. Practical application: The results from this paper will allow building and HVAC designers to consider the increase in the urban heat island in their designs when using future weather data. Although the results are for Manchester, similar trends may well apply to other similar-sized cities. Designers should consider the future weather data available, as their buildings will last for a considerable time and should therefore be as future-proofed as possible.
A Method for Generating and Labelling All Regular Fractions or Blocks for $q^{n-m}$ Designs
By exploiting the vector space structure of regular $q^{n-m}$ designs we construct and demonstrate a method for generating all such designs, without repetition, along with their confounded interactions, resolution numbers, alias sets and a decodable design number.
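The vector space structure referred to above can be illustrated for the simplest case, q = 2: in a regular $2^{n-m}$ fraction, n − m basic factors run through all their level combinations and each of the remaining m factors is defined as a mod-2 sum of basic factors. The sketch below is only an illustration of that construction under these assumptions; it is not the generation-and-labelling method of the paper.

    from itertools import product

    def regular_fraction(k, generators):
        """Enumerate a regular two-level fraction with 2**k runs.

        k          -- number of basic factors (n - m)
        generators -- one tuple per added factor, listing the basic-factor
                      indices whose mod-2 sum defines that added factor
        """
        runs = []
        for basic in product((0, 1), repeat=k):
            added = tuple(sum(basic[i] for i in g) % 2 for g in generators)
            runs.append(basic + added)
        return runs

    # Example: the 2^(4-1) fraction with D = A + B + C (mod 2),
    # i.e. defining relation I = ABCD; eight runs out of the full sixteen.
    for run in regular_fraction(3, [(0, 1, 2)]):
        print(run)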
Extreme and near-extreme climate change data in relation to building and plant design
Buildings and plant are designed utilizing near-extreme weather data. The present data used are briefly discussed, including manual near-extreme percentiles for manual design and hourly data for simulation on a PC (test reference years, design summer years, and near-extreme periods). However, with climate change occurring, designs based on current data will produce uncomfortable summer thermal conditions within and around buildings in the future. This expected change is especially relevant now, as buildings have to last typically from 50 to 100 years. Climate change data for the future are needed to assess the performance of buildings and plant in the future. The Hadley Centre climate change models could provide such data. In this paper, extreme data from one model, the HadCM3 model (south-east England grid box), with an appropriate climate change scenario are considered in relation to their use for design assessment. Dry bulb temperature and solar irradiance extreme values are considered in this paper. The expected trend in both minimum and maximum temperature is for both to increase with time, but the maxima are found to rise faster than the minima. There are two factors influencing the solar radiation estimates: the basic clarity of the atmosphere and the seasonal amount of cloud. The latter is predicted to increase slightly in winter and decrease slightly in summer. The variations in the predicted short-wave radiation values reflect the expected combined impacts of these two factors. The implications of these results are briefly discussed.
On a Problem of Repeated Measurement Design with Treatment Additivity
We consider an experimental design problem in which n treatments are applied successively to each experimental unit, and once applied their effects are permanent. To examine all $2^n - 1$ treatment combinations, a minimum of $\binom{n}{\lbrack n/2 \rbrack}$ experimental units is both necessary and sufficient. A linear model is described and the first nontrivial case, n = 4, is examined in detail. It is shown that there are 24 nonisomorphic designs, which reduce to 13 under the assumption of no interaction between the treatments. A serial correlation model is considered and the D-, A- and E-optimality criteria are evaluated for ρ = 0, 0.5 and 0.75. Possible uses for the design automorphisms are then considered.
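As a quick check of the bound quoted above for the case examined in the paper, n = 4 gives $\binom{4}{\lbrack 4/2 \rbrack} = \binom{4}{2} = 6$ experimental units, against the $2^4 - 1 = 15$ treatment combinations to be examined.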
Convex Loss Applied to Design in Regression Problems
A general linear regression function is to be observed at n points in order to estimate a known linear combination of the unknown parameters. The n points and the estimator are to be optimum in some sense, and in this paper the main criterion for optimality involves uniformly minimizing certain convex loss functions. Three main sets of results are obtained, followed by some further results for normally distributed errors.
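To make the criterion concrete (the notation here is mine, not the paper's): with observations $y_i = f(x_i)^{\top}\beta + \varepsilon_i$ at design points $x_1,\dots,x_n$ and a linear estimator $\hat{\theta}$ of the known combination $c^{\top}\beta$, a uniformly optimal choice of points and estimator is one for which $E\,L(\hat{\theta}^{*} - c^{\top}\beta) \le E\,L(\hat{\theta} - c^{\top}\beta)$ holds simultaneously for every loss function L in the convex class considered and every competing design and estimator.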
The H1 Forward Track Detector at HERA II
In order to maintain efficient tracking in the forward region of H1 after the luminosity upgrade of the HERA machine, the H1 Forward Track Detector was also upgraded. While much of the original software and many of the techniques used for the HERA I phase could be reused, the software for pattern recognition was completely rewritten. This, along with several other improvements in hit finding and high-level track reconstruction, is described in detail, together with a summary of the performance of the detector.