Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
35,726 result(s) for "high performance computing"
Enabling High‐Performance Cloud Computing for Earth Science Modeling on Over a Thousand Cores: Application to the GEOS‐Chem Atmospheric Chemistry Model
by Yantosca, Robert M.; Eastham, Sebastian D.; Lundgren, Elizabeth W.
in Atmospheric chemistry; Big Data; Chemical transport
2020
Cloud computing platforms can facilitate the use of Earth science models by providing immediate access to fully configured software, massive computing power, and large input data sets. However, slow internode communication performance has previously discouraged the use of cloud platforms for massively parallel simulations. Here we show that recent advances in the network performance on the Amazon Web Services cloud enable efficient model simulations with over a thousand cores. The choices of Message Passing Interface library configuration and internode communication protocol are critical to this success. Application to the Goddard Earth Observing System (GEOS)‐Chem global 3‐D chemical transport model at 50‐km horizontal resolution shows efficient scaling up to at least 1,152 cores, with performance and cost comparable to the National Aeronautics and Space Administration Pleiades supercomputing cluster.

Plain Language Summary: Earth science model simulations are computationally expensive, typically requiring the use of high‐end supercomputing clusters managed by universities or national laboratories. Commercial cloud computing offers an alternative. However, past work found that cloud computing platforms were not efficient for large‐scale simulations on over 100 CPU cores, because the network communication performance on the cloud was slow compared to local clusters. Here we show that recent advances in cloud network performance enable efficient model simulations with over a thousand cores, and cloud platforms can now serve as a viable alternative to local clusters for simulations at large scale. Computing on the cloud has extensive advantages, such as providing immediate access to fully configured model code and large data sets for any user, allowing full reproducibility of model simulation results, offering quick access to novel hardware that might not be available on local clusters, and being able to scale to virtually unlimited amounts of compute and storage resources. Those benefits will help advance Earth science modeling research.

Key Points: Recent advances in network performance enable efficient model simulations with over a thousand cores on the Amazon Web Services (AWS) cloud. Performance and cost of the cloud can be comparable to a local supercomputing cluster. The GEOS‐Chem chemical transport model is now available and documented for massively parallel simulations on the AWS cloud.
Journal Article
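The abstract above notes that the choice of MPI library configuration and internode communication protocol was critical to scaling GEOS‐Chem on cloud nodes. As a minimal, hedged illustration of the kind of microbenchmark used to compare such settings (not code from the paper; the message size and repetition count are arbitrary), the mpi4py sketch below measures point-to-point latency between two ranks:

```python
# Illustrative MPI ping-pong latency microbenchmark (not from the GEOS-Chem
# paper). Requires mpi4py and NumPy; launch with e.g. mpirun -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

msg = np.ones(8, dtype=np.float64)  # small message, so latency dominates
reps = 1000

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(msg, dest=1, tag=0)    # send, then wait for the echo
        comm.Recv(msg, source=1, tag=0)
    elif rank == 1:
        comm.Recv(msg, source=0, tag=0)  # receive, then echo back
        comm.Send(msg, dest=0, tag=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    # Each repetition is one round trip (two messages); report one-way latency.
    print(f"average one-way latency: {elapsed / (2 * reps) * 1e6:.2f} us")
```

Running a benchmark like this under different interconnect or protocol settings (for example, plain TCP versus a cloud provider's high-performance fabric) is one way to observe the internode latency differences the abstract refers to.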
Running applications on Oracle Exadata : tuning tips & techniques
\"Best practices for peak OLTP performance\"--Page 1 of cover.
ModelTest-NG: A New and Scalable Tool for the Selection of DNA and Protein Evolutionary Models
by Flouri, Tomas; Posada, David; Stamatakis, Alexandros
in Accuracy; Algorithms; Amino acid substitution
2020
ModelTest-NG is a reimplementation from scratch of jModelTest and ProtTest, two popular tools for selecting the best-fit nucleotide and amino acid substitution models, respectively. ModelTest-NG is one to two orders of magnitude faster than jModelTest and ProtTest but equally accurate and introduces several new features, such as ascertainment bias correction, mixture, and free-rate models, or the automatic processing of single partitions. ModelTest-NG is available under a GNU GPL3 license at https://github.com/ddarriba/modeltest, last accessed September 2, 2019.
Journal Article
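ModelTest-NG's central task, choosing a best-fit substitution model, rests on scoring candidate models with information criteria such as AIC and BIC. The short Python sketch below works through that scoring step with invented log-likelihoods and parameter counts; it is a worked illustration of the criteria only, not ModelTest-NG's actual code or output:

```python
import math

# Hypothetical maximized log-likelihoods (lnL) and free-parameter counts (k)
# for a few candidate nucleotide substitution models; values are invented.
candidates = {
    "JC":    {"lnL": -5234.1, "k": 0},
    "HKY":   {"lnL": -5102.7, "k": 4},
    "GTR":   {"lnL": -5098.3, "k": 8},
    "GTR+G": {"lnL": -5061.9, "k": 9},
}
n_sites = 1200  # alignment length, used as the sample size for BIC

def aic(lnL, k):
    return 2 * k - 2 * lnL             # Akaike information criterion

def bic(lnL, k, n):
    return k * math.log(n) - 2 * lnL   # Bayesian information criterion

for name, m in candidates.items():
    print(f"{name:6s}  AIC={aic(m['lnL'], m['k']):.1f}  "
          f"BIC={bic(m['lnL'], m['k'], n_sites):.1f}")

best = min(candidates, key=lambda name: bic(candidates[name]["lnL"],
                                            candidates[name]["k"], n_sites))
print("best-fit model by BIC:", best)
```

In practice the likelihoods come from optimizing each model on the alignment (the expensive step such tools parallelize), and the parameter counts also include branch lengths; the comparison logic, however, is the same.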
UKCropDiversity‐HPC: A collaborative high‐performance computing resource approach for sustainable agriculture and biodiversity conservation
2025
Societal Impact Statement: Diverse gene pools are fundamental to crop improvement, biodiversity maintenance and environmental management. The UKCropDiversity‐HPC high‐performance computing resource enables seven UK institutes to perform plant and conservation research with increased efficiency, cost‐effectiveness and environmental sustainability. It supports research across numerous areas, including bioinformatics, genetics, phenomics and conservation, including Artificial Intelligence approaches. Its utilisation supports many United Nations Sustainable Development Goals, including Goals 2 (Zero Hunger), 13 (Climate Action), 15 (Life on Land), 9 (Industry, Innovation and Infrastructure) and 4 (Quality Education). Accordingly, UKCropDiversity‐HPC helps maximise the societal impact of research undertaken at our seven institutes, driving positive change for future generations.
Journal Article
Energy-Aware Scheduling for High-Performance Computing Systems: A Survey
by Kocot, Bartłomiej; Czarnul, Paweł; Proficz, Jerzy
in Algorithms; Computer centers; Cost control
2023
High-performance computing (HPC), as its name suggests, is traditionally oriented toward performance, especially the execution time and scalability of computations. However, due to high cost and environmental concerns, energy consumption has become a very important factor that needs to be considered. This paper presents a survey of energy-aware scheduling methods used in modern HPC environments. It starts with the problem definition, then covers the various goals set for this challenge, including bi-objective approaches, power and energy constraints, and pure energy minimization, as well as the metrics related to the subject. It then describes the types of HPC systems considered and their energy-saving mechanisms, from multicore processors and graphics processing units (GPUs) to more complex solutions such as compute clusters supporting dynamic voltage and frequency scaling (DVFS), power capping, and other functionalities. The main section presents a collection of carefully selected algorithms, classified by programming method, e.g., machine learning or fuzzy logic. Other surveys published on this subject are also summarized and commented on, and finally an overview of the current state of the art with open problems and further research areas is presented.
Journal Article
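One family of techniques the survey covers is DVFS-based scheduling, where a job's CPU frequency is lowered to save energy as long as its deadline is still met. The self-contained Python sketch below shows that trade-off with a deliberately simple power model; the frequency and power figures are invented for illustration and are not taken from the survey:

```python
# Toy DVFS selection: pick the operating point that minimizes energy for a
# fixed amount of work while still meeting a deadline. All numbers are
# invented illustrative values, not measurements from the survey.
operating_points = [  # (frequency in GHz, package power draw in W)
    (1.2, 35.0),
    (1.8, 55.0),
    (2.4, 85.0),
    (3.0, 125.0),
]

work_cycles = 3.6e12  # total work of the hypothetical job, in CPU cycles
deadline_s = 1800.0   # the job must finish within 30 minutes

best = None
for freq_ghz, power_w in operating_points:
    runtime_s = work_cycles / (freq_ghz * 1e9)  # time = work / frequency
    if runtime_s > deadline_s:
        continue                                # deadline violated, skip
    energy_j = power_w * runtime_s              # energy = power * time
    if best is None or energy_j < best[2]:
        best = (freq_ghz, runtime_s, energy_j)

if best:
    print(f"run at {best[0]} GHz: {best[1]:.0f} s, {best[2] / 1e3:.1f} kJ")
else:
    print("no available frequency meets the deadline")
```

Here the 2.4 GHz point wins: the 3.0 GHz setting also meets the deadline but burns more energy, which is exactly the slack that energy-aware schedulers exploit. Real schedulers in the survey reason over many jobs and nodes, often with non-linear power models, but deadline-constrained energy minimization is the basic building block.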