Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
335 result(s) for "Neufeld, N"
Chemical and biological processes in fluid flows
by Hernández-García, Emilio; Neufeld, Zoltán
in Applied Mathematics, Chemical Engineering, Chemical processes
2009, 2010
Many chemical and biological processes take place in fluid environments in constant motion — chemical reactions in the atmosphere, biological population dynamics in the ocean, chemical reactors, combustion, and microfluidic devices. Applications of concepts from the field of nonlinear dynamical systems have led to significant progress over the last decade in the theoretical understanding of complex phenomena observed in such systems.
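As a minimal, purely illustrative sketch of the kind of problem the book treats (a chemical reaction carried by a moving fluid), the snippet below integrates a one-dimensional advection-reaction equation with a logistic reaction term. The velocity, reaction rate, grid, and upwind scheme are assumptions chosen only for illustration, not taken from the book.

# Illustrative advection-reaction sketch: dc/dt + u dc/dx = r c (1 - c),
# integrated with a first-order upwind scheme on a periodic domain.
# All parameters are assumed values, not from the book.
import numpy as np

nx, length = 200, 1.0          # grid points, domain length
dx = length / nx
u, r = 0.5, 3.0                # advection velocity, reaction rate (illustrative)
dt = 0.4 * dx / u              # CFL-limited time step

x = np.linspace(0.0, length, nx, endpoint=False)
c = 0.5 * np.exp(-((x - 0.2) / 0.05) ** 2)   # localised initial blob of reactant

for _ in range(500):
    upwind = (c - np.roll(c, 1)) / dx        # periodic upwind gradient
    c = c + dt * (-u * upwind + r * c * (1.0 - c))

print("max concentration after transport and reaction:", round(float(c.max()), 3))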
Predicting the risk of cucurbit downy mildew in the eastern United States using an integrated aerobiological model
by Keever, T; Ojiambo, P S; Keinath, A P
in Airborne microorganisms, Atmospheric models, Disease control
2018
Cucurbit downy mildew, caused by the obligate oomycete Pseudoperonospora cubensis, is considered one of the most economically important diseases of cucurbits worldwide. In the continental United States, the pathogen overwinters in southern Florida and along the coast of the Gulf of Mexico. Outbreaks of the disease in northern states occur annually via long-distance aerial transport of sporangia from infected source fields. An integrated aerobiological modeling system has been developed to predict the risk of disease occurrence and to facilitate timely use of fungicides for disease management. The forecasting system, which combines information on known inoculum sources with long-distance atmospheric spore transport and spore deposition modules, was tested to determine its accuracy in predicting the risk of disease outbreak. Rainwater samples at disease monitoring sites in Alabama, Georgia, Louisiana, New York, North Carolina, Ohio, Pennsylvania and South Carolina were collected weekly from planting to the first appearance of symptoms at the field sites during the 2013, 2014, and 2015 growing seasons. A conventional PCR assay with primers specific to P. cubensis was used to detect the presence of sporangia in the rainwater samples. Disease forecasts were monitored and recorded for each site after each rain event until initial disease symptoms appeared. The pathogen was detected in 38 of the 187 rainwater samples collected during the study period. The forecasting system correctly predicted the risk of disease outbreak based on the presence of sporangia or the appearance of initial disease symptoms with an overall accuracy of 66% and 75%, respectively. In addition, the probability that the forecasting system correctly classified the presence or absence of disease was ≥ 73%. The true skill statistic calculated from the appearance of disease symptoms in cucurbit field plantings ranged from 0.42 to 0.58, indicating that the disease forecasting system had acceptable to good performance in predicting the risk of cucurbit downy mildew outbreaks in the eastern United States.
Journal Article
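For readers unfamiliar with the true skill statistic quoted in the abstract above, the sketch below shows how it is typically computed from a 2x2 table of forecasts versus observed outbreaks (sensitivity + specificity - 1). The counts are made-up placeholders, not data from the study.

# True skill statistic (TSS) from a 2x2 contingency table of forecast vs.
# observed disease outbreaks. TSS = sensitivity + specificity - 1.
# The counts below are illustrative placeholders only.
def true_skill_statistic(hits, misses, false_alarms, correct_negatives):
    sensitivity = hits / (hits + misses)                          # probability of detection
    specificity = correct_negatives / (correct_negatives + false_alarms)
    return sensitivity + specificity - 1.0

print(true_skill_statistic(hits=15, misses=5, false_alarms=8, correct_negatives=30))
# ~0.54, i.e. within the 0.42-0.58 range reported in the abstract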
Cross-architecture Kalman filter benchmarks on modern hardware platforms
2018
The 2020 upgrade of the LHCb detector will vastly increase the rate of collisions the online system needs to process in software in order to filter events in real time. 30 million collisions per second will pass through a selection chain where each step is executed conditionally on acceptance by the previous one. The Kalman filter is a stage of the event reconstruction that, due to its time characteristics and early execution in the selection chain, consumes 40% of the whole reconstruction time in the current trigger software. This makes it a time-critical component as the LHCb trigger evolves into a full software trigger in the upgrade. The Cross Kalman algorithm allows performance tests across a variety of architectures, including multi- and many-core platforms, and has been successfully integrated and validated in the LHCb codebase. Since its inception, new hardware architectures have become available, exposing features that require fine-grained tuning in order to fully utilize their resources. In this paper we present performance benchmarks and explore the Intel® Skylake and Intel® Knights Landing architectures in depth. We determine the performance gain over previous architectures and show that the efficiency of our implementation is close to the maximum attainable given the mathematical formulation of our problem.
Journal Article
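As a rough illustration of the algorithm benchmarked in the paper above, the sketch below shows one generic linear Kalman-filter predict/update step in NumPy. The matrices and the toy constant-velocity state are illustrative choices, not the LHCb track-fit formulation used by Cross Kalman.

# Generic linear Kalman filter predict/update step; F, H, Q, R and the state
# layout are illustrative assumptions, not the LHCb formulation.
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    # Predict: propagate state and covariance
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fold in the measurement z
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1D constant-velocity example: state = (position, velocity)
F = np.array([[1.0, 1.0], [0.0, 1.0]]); Q = 1e-3 * np.eye(2)
H = np.array([[1.0, 0.0]]);             R = np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_step(x, P, z=np.array([1.2]), F=F, Q=Q, H=H, R=R)
print(x)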
The LHCb Data Acquisition and High Level Trigger Processing Architecture
2015
The LHCb experiment at the LHC accelerator at CERN collects collisions of particle bunches at 40 MHz. After a first level of hardware trigger with an output rate of 1 MHz, the physically interesting collisions are selected by running dedicated trigger algorithms in the High Level Trigger (HLT) computing farm. This farm consists of up to roughly 25000 CPU cores in roughly 1750 physical nodes, each equipped with up to 4 TB of local storage. This work describes the LHCb online system with an emphasis on the developments implemented during the current long shutdown (LS1). We elaborate on the architecture changes that treble the available CPU power of the HLT farm and on the technicalities of determining and verifying the precise calibration and alignment constants that are fed to the HLT event selection procedure. We describe how the constants are fed into a two-stage HLT event selection facility that makes extensive use of the local disk buffering capabilities on the worker nodes. With the installed disk buffers, the CPU resources can be used during periods of up to ten days without beams. In the past, such periods accounted for more than 70% of the total time.
Journal Article
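A back-of-envelope check of the disk-buffer figures quoted in the abstract above (roughly 1750 nodes with up to 4 TB each, bridging beam-off periods of up to ten days); the usable-disk fraction is an assumption.

# Back-of-envelope estimate from the figures in the abstract: ~1750 nodes with
# up to 4 TB of local disk each, drained over up to ten beam-off days.
# The usable fraction of each disk is an assumption.
nodes, disk_per_node_tb, usable_fraction = 1750, 4.0, 0.8
buffer_days = 10

total_buffer_tb = nodes * disk_per_node_tb * usable_fraction              # ~5600 TB
sustained_rate_gb_s = total_buffer_tb * 1e3 / (buffer_days * 86400)       # ~6.5 GB/s
print(f"aggregate buffer ~{total_buffer_tb:.0f} TB, "
      f"drainable at ~{sustained_rate_gb_s:.1f} GB/s over {buffer_days} days")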
The InfiniBand based Event Builder implementation for the LHCb upgrade
2017
The LHCb experiment will undergo a major upgrade during the second long shutdown (2019-2020). The upgrade will concern both the detector and the Data Acquisition system, which are to be rebuilt in order to optimally exploit the foreseen higher event rate. The Event Builder is the key component of the DAQ system, for it gathers data from the sub-detectors and builds up the whole event. The Event Builder network has to manage an incoming data rate of 32 Tb/s from a 40 MHz bunch-crossing frequency, with a cardinality of about 500 nodes. In this contribution we present an Event Builder implementation based on the InfiniBand network technology. The software relies on the InfiniBand verbs, which offer a user-space interface to the Remote Direct Memory Access capabilities provided by the InfiniBand network devices. We present the performance of the software on a cluster connected with a 100 Gb/s InfiniBand network.
Journal Article
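A quick sanity check of the figures quoted in the abstract above: 32 Tb/s spread across about 500 nodes corresponds to roughly 64 Gb/s per node in each direction, comfortably within a 100 Gb/s full-duplex InfiniBand link; the link-utilisation target is my own framing.

# Per-node bandwidth implied by the abstract's numbers: 32 Tb/s aggregate,
# ~500 event-builder nodes, 100 Gb/s links (each node both sends and receives).
aggregate_tb_s, nodes, link_gb_s = 32.0, 500, 100.0

per_node_gb_s = aggregate_tb_s * 1e3 / nodes          # ~64 Gb/s each way
utilisation = per_node_gb_s / link_gb_s
print(f"~{per_node_gb_s:.0f} Gb/s per node, i.e. ~{utilisation:.0%} of a 100 Gb/s link")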
Early presentation of type 2 diabetes in Mexican-American youth
1998
N D Neufeld, L J Raffel, C Landon, Y D Chen and C M Vadheim
Pediatric Diagnostic Center, Ventura County Medical Center, California, USA.
Abstract
OBJECTIVE: To describe features of pediatric-onset type 2 diabetes in the Hispanic population. RESEARCH DESIGN AND METHODS: The medical records of 55 Hispanic subjects with diabetes who were treated from 1990 to 1994 in a pediatric clinic serving lower income Mexican-Americans were reviewed to assess the frequency and clinical features of type 2 diabetes. Additionally, nondiabetic siblings of several patients underwent oral glucose tolerance testing, and a survey of six high schools in the same county was performed. RESULTS: Seventeen of 55 (31%) of the diabetic children and adolescents had type 2 diabetes. An additional 4 Hispanic children with type 2 diabetes treated in other clinics were also identified, yielding a total of 21 subjects who were used to describe the characteristics of childhood type 2 diabetes. At presentation, all were obese (mean BMI 32.9 +/- 6.2 kg/m2), 62% had no ketonuria, and fasting C-peptide levels were elevated (4.28 +/- 3.43 ng/ml). Diabetes was easily controlled with diet, sulfonylureas, or low-dose insulin. No autoantibodies were present in those tested, and family histories were positive for type 2 diabetes. Compliance was poor, and 3 subjects developed diabetic complications. Of the tested siblings, 2 of 8 had impaired glucose tolerance and 5 of 8 had stimulated hyperinsulinemia, correlated with BMI (r = 0.80, P < 0.05). The school survey identified 28 diabetic adolescents, 75% more than expected (P < 0.01). The Hispanic enrollment at each school was highly correlated with the number of diabetic students (r = 0.87, P = 0.011). CONCLUSIONS: Genetic susceptibility to type 2 diabetes, when coupled with obesity, can produce type 2 diabetes in Mexican-American children. This diagnosis should be considered in young Hispanic patients, who might otherwise be assumed to have type 1 diabetes, and also when caring for overweight Hispanic youth with a family history of type 2 diabetes, in whom intervention may prevent or delay diabetes onset.
Journal Article
The LHCb Online system in 2020: trigger-free read-out with (almost exclusively) off-the-shelf hardware
2018
The LHCb experiment at CERN has decided to optimise its physics reach by removing the first-level hardware trigger for 2020 and beyond. In addition to requiring fully redesigned front-end electronics, this design creates interesting challenges for the data acquisition and the rest of the online computing system. Such a system can only be realized at a realistic cost by using as much off-the-shelf hardware as possible. Relevant technologies evolve very quickly, so the system design is architecture-centred and avoids depending too heavily on specific technologies. In this paper we describe the design, the motivations for various choices, the currently favoured options for the implementation, and the status of the R&D. We cover the back-end readout, which contains the only custom-made component, the event building, the event-filter infrastructure, and storage.
Journal Article
Performance evaluation and capacity planning for a scalable and highly available virtualisation infrastructure for the LHCb experiment
2014
Virtualisation is often adopted to satisfy different needs: reducing costs, reducing resources, simplifying maintenance and, last but not least, adding flexibility. Using virtualisation in a complex system such as a farm of PCs that controls the hardware of an experiment (PLCs, power supplies, gas, magnets...) requires not only careful consideration of the high-performance requirements, but also a deep analysis of the strategies needed to achieve a certain level of high availability. We conducted a performance evaluation on different and comparable storage/network/virtualisation platforms. The performance is measured using a series of independent benchmarks, testing the speed and the stability of multiple VMs running heavy I/O loads on the virtualised storage and the virtualised network. The results from these benchmark tests allowed us to study and evaluate how the different VM workloads interact with the hardware/software resource layers.
Journal Article
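The abstract above does not name the specific benchmarks used; as a hedged illustration of the kind of I/O probe one might run inside each VM, the sketch below measures sequential write throughput with an fsync. The file path, block size, and total size are arbitrary assumptions.

# Simple sequential-write throughput probe of the kind one might run inside a
# guest VM; parameters and path are assumptions, not the paper's benchmark suite.
import os, time

path, block, blocks = "/tmp/vm_io_probe.bin", 4 * 1024 * 1024, 256   # ~1 GiB total
buf = os.urandom(block)

start = time.monotonic()
with open(path, "wb") as f:
    for _ in range(blocks):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())          # make sure data reaches the (virtual) disk
elapsed = time.monotonic() - start

print(f"sequential write: {block * blocks / elapsed / 1e6:.1f} MB/s")
os.remove(path)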
A PCIe Gen3 based readout for the LHCb upgrade
2014
The architecture of the data acquisition system foreseen for the LHCb upgrade, to be installed by 2018, is devised to read out events trigger-less, synchronously with the LHC bunch-crossing rate of 40 MHz. Within this approach the readout boards act as a bridge between the front-end electronics and the High Level Trigger (HLT) computing farm. The baseline design for the LHCb readout is an ATCA board requiring dedicated crates, with a standard local-area network protocol implemented in the on-board FPGAs to read out the data. The alternative solution proposed here consists of building the readout boards as PCIe peripherals of the event-builder servers. The main architectural advantage is that the protocol and link technology of the event builder can be left open until very late, to profit from the most cost-effective industry technology available at the time of the LHC LS2.
Journal Article
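As a rough, assumption-laden estimate of the throughput a PCIe-based readout board in the scheme above would have to sustain: the 40 MHz bunch-crossing rate is from the abstract, while the average fragment size per board and the usable Gen3 x8 bandwidth are my own illustrative assumptions.

# Rough throughput estimate for a PCIe readout board: sustained rate is
# (assumed fragment size per board) x 40e6 crossings per second.
crossing_rate_hz = 40e6
fragment_bytes = 100                      # assumed average per board (illustrative)
pcie_gen3_x8_gb_s = 8.0                   # approximate Gen3 x8 bandwidth (usable is a bit less)

required_gb_s = crossing_rate_hz * fragment_bytes / 1e9   # ~4.0 GB/s
print(f"required ~{required_gb_s:.1f} GB/s vs ~{pcie_gen3_x8_gb_s:.0f} GB/s on Gen3 x8")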
Phronesis, a diagnosis and recovery tool for system administrators
2014
The LHCb experiment relies on the Online system, which includes a very large and heterogeneous computing cluster. Ensuring the proper behavior of the different tasks running on the more than 2000 servers represents a huge workload for the small operator team and is a 24/7 task. At CHEP 2012, we presented a prototype of a framework that we designed in order to support the experts. The main objective is to provide them with steadily improving diagnosis and recovery solutions in case of misbehavior of a service, without having to modify the original applications. Our framework is based on adapted principles of the Autonomic Computing model, on Reinforcement Learning algorithms, as well as innovative concepts such as Shared Experience. While the submission at CHEP 2012 showed the validity of our prototype on simulations, we here present an implementation with improved algorithms and manipulation tools, and report on the experience gained with running it in the LHCb Online system.
Journal Article