Catalogue Search | MBRL
Explore the vast range of titles available.
153,602 result(s) for "Computer performance"
Aircraft control and simulation : dynamics, controls design, and autonomous systems
by Stevens, Brian L., 1939- author; Lewis, Frank L., author; Johnson, Eric N., 1970- author
in Aerodynamics -- Mathematics; Flight control -- Computer simulation; Airplanes -- Performance -- Mathematical models
2016
This third edition is a comprehensive guide to aircraft control and simulation. The updated text covers flight control systems, flight dynamics, aircraft modelling, and flight simulation from both classical design and modern perspectives, as well as two new chapters on the modelling, simulation, and adaptive control of unmanned aerial vehicles.
Potential for waste heat utilization of hot-water-cooled data centers: A case study
by Stephan, Peter; Sauerwein, David; Dammel, Frank
in Air flow; Carbon dioxide; Carbon dioxide emissions
2020
The electric energy demand of data centers in Germany grew rapidly from 10.5 TWh/a in 2010 to 13.2 TWh/a in 2017, of which an average of 25% is used to meet the data centers' cooling demand. To increase its energy efficiency, TU Darmstadt is applying a new cooling concept in the next generation of its high-performance computing data center "Lichtenberg II." Instead of the current air-cooled servers with water-cooled rear doors at 17-24°C, the new data center will be equipped with direct hot-water cooling for the high-performance computer, supplying heat at a temperature of 45°C. This high-temperature waste heat is used for heating purposes on the university's campus Lichtwiese. Two concepts for waste heat utilization are presented: integrating the heat into the return line of the district heating network, or using it locally in buildings near the data center. Reductions in CO2 emissions and annuity result both from the decreased compression cooling demand of the data center and from the decreased heat generation enabled by waste heat utilization. Depending on the scenario, a total of 20%-50% of the waste heat emitted by the high-performance computer can be used for heating purposes, while the remaining heat is dissipated efficiently via free cooling without additional energy demand for mechanical chillers. CO2 emissions can be decreased by up to 720 tCO2/a, a reduction of about 4% of the total emissions at campus Lichtwiese. TU Darmstadt is currently implementing the waste heat integration into its district heating network and will benefit from this concept starting in 2020. Although many data centers are located in the vicinity of other buildings with heat demand, most of the emitted waste heat goes unused because its temperature is too low and the operators of the data center and of the adjacent buildings have little interest in collaborating.
This article shows that coupling water-cooled servers (with an output temperature of 45°C) to waste heat utilization for building heating reduces both the data center's demand for cooling energy and the buildings' demand for heat from other sources, and can therefore be beneficial for both parties.
Journal Article
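As a rough illustration of the figures quoted in the abstract above, a minimal back-of-envelope sketch (the function name and scenario handling are illustrative assumptions, not from the paper; only the quoted values are used):

```python
# Straight arithmetic on the values stated in the abstract.
total_demand_twh = 13.2   # German data-center electricity demand in 2017 (TWh/a)
cooling_share = 0.25      # ~25% of that demand goes to cooling, per the abstract

cooling_twh = total_demand_twh * cooling_share  # -> 3.3 TWh/a spent on cooling

def usable_heat(emitted_kwh, reuse_fraction):
    """Reusable waste heat for a given scenario; the abstract reports
    20%-50% of the emitted heat can be reused, depending on the scenario."""
    assert 0.2 <= reuse_fraction <= 0.5, "abstract quotes a 20%-50% range"
    return emitted_kwh * reuse_fraction
```

The remaining 50%-80% would, per the abstract, be dissipated via free cooling rather than mechanical chillers.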
Business models in the software industry : the impact on firm and M&A performance
The relevance of software business models has tremendously increased in recent years. Markus Schief explores opportunities to improve the management of these models. Based on a conceptual framework of software business model characteristics, he conducts large empirical studies to examine the current state of business models in the software industry.
A high-performance onboard computing architecture for autonomous satellite mission planning
2026
With the increasing demand for onboard autonomy in remote sensing and space missions, traditional ground-centered mission planning architectures face limitations in responsiveness and operational flexibility. To support onboard autonomous mission planning and data processing, this paper presents the engineering design and system-level realization of a high-performance Mission Planning Board (MPB) for satellite applications. The proposed MPB adopts a modular single-board hardware architecture and an extensible software framework, enabling the deployment and reconfiguration of mission planning, data processing, and health management applications on orbit. The hardware integrates a radiation-tolerant high-performance CPU, an interface FPGA, and an intelligent acceleration module, while the software architecture supports task scheduling, system monitoring, and reliable in-orbit operation. Comprehensive reliability measures, including redundancy design, fault tolerance mechanisms, and environmental adaptability, are incorporated to ensure suitability for space environments. Ground-based functional tests and environmental qualification experiments have been conducted to verify the correctness and robustness of the proposed design. In addition, the MPB has been deployed on orbit for engineering and system-level validation, demonstrating stable operation and functional feasibility. The results indicate that the proposed architecture provides a practical and extensible platform for enhancing onboard computing capability and supporting onboard mission planning-related computing and system-level autonomy in future satellite missions. This work focuses on engineering implementation and validation rather than quantitative evaluation of specific mission planning or intelligent algorithms.
Journal Article
Digital bodies : creativity and technology in the arts and humanities
This book explores technologies related to bodily interaction and creativity from a multi-disciplinary perspective. By taking such an approach, the collection offers a comprehensive view of digital technology research that both extends our notions of the body and creativity through a digital lens, and informs of the role of technology in practices central to the arts and humanities. Crucially, Digital Bodies foregrounds creativity, the interrogation of technologies and the notion of embodiment within the various disciplines of art, design, performance and social science. In doing so, it explores a potential or virtual new sense of the embodied self. This book will appeal to academics, practitioners and those with an interest in not only how digital technologies affect the body, but also how they can enhance human creativity.
Performance Comparison of CFD Microbenchmarks on Diverse HPC Architectures
2024
OpenFOAM is a CFD software widely used in both industry and academia. The exaFOAM project aims at enhancing the HPC scalability of OpenFOAM while identifying its current bottlenecks and proposing ways to overcome them. For the assessment of the software components and for code profiling during development, lightweight but significant benchmarks are needed. The answer was to develop microbenchmarks with a small memory footprint and short runtime. The name microbenchmark does not mean they are the smallest possible test cases; rather, they have been developed to fit in a single compute node, which usually has dozens of compute cores. The microbenchmarks cover a broad band of applications: incompressible and compressible flow, combustion, viscoelastic flow, and adjoint optimization. All benchmarks are part of the OpenFOAM HPC Technical Committee repository and are fully accessible. The performance of HPC systems with Intel and AMD processors (x86_64 architecture) and Arm processors (aarch64 architecture) has been benchmarked. For the workloads in this study, the mean performance with the AMD CPU is 62% higher than with Arm and 42% higher than with Intel. The AMD processor appears particularly well suited, resulting in an overall shorter time-to-solution.
Journal Article
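A minimal sketch of how a mean cross-architecture speedup like the "62% higher than Arm" figure can be computed from per-benchmark runtimes (the runtimes below are hypothetical, and the use of a geometric mean is an assumption, not necessarily the paper's method):

```python
from statistics import geometric_mean

def relative_performance(ref_runtimes, cmp_runtimes):
    """Mean speedup of `cmp` over `ref` across a benchmark suite.

    Performance is inversely proportional to runtime, so each per-benchmark
    ratio is ref_time / cmp_time; the geometric mean keeps the aggregate
    independent of which system is chosen as the reference.
    """
    ratios = [r / c for r, c in zip(ref_runtimes, cmp_runtimes)]
    return geometric_mean(ratios)

# Hypothetical runtimes (seconds) for three microbenchmarks:
arm = [120.0, 80.0, 200.0]
amd = [ 74.0, 50.0, 123.0]
print(f"AMD is {100 * (relative_performance(arm, amd) - 1):.0f}% faster")
```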
A Survey of High-Performance Interconnection Networks in High-Performance Computer Systems
by
Lu, Ping-Jing
,
Chang, Jun-Sheng
,
Lai, Ming-Che
in
Artificial intelligence
,
Bandwidths
,
Big Data
2022
The high-performance interconnection network is key to realizing high-speed, collaborative, parallel computing across the nodes of a high-performance computer system. Its performance and scalability directly affect the performance and scalability of the whole system. With continuous improvements in the performance of high-performance computer systems, the trend in the development of high-performance interconnection networks is mainly reflected in network sizes and network bandwidths. With the slowdown of Moore's Law, it is necessary to adopt new packaging design technologies to implement high-performance interconnection networks for high-performance computing. This article analyzes the main interconnection networks used by high-performance computer systems in the Top500 list of November 2021, and it elaborates on the design of representative, state-of-the-art, high-performance interconnection networks, including NVIDIA InfiniBand, Intel Omni-Path, and Cray Slingshot/Aries, as well as custom or proprietary networks, including Fugaku Tofu, Bull BXI, TH Express, and so forth. This article also comprehensively discusses the latest technologies and trends in this field. In addition, based on an analysis of the challenges faced by high-performance interconnection network design in the post-Moore era and the exascale computing era, this article presents a perspective on high-performance interconnection networks.
Journal Article
Running applications on Oracle Exadata : tuning tips & techniques
"Best practices for peak OLTP performance"--Page 1 of cover.
The MVTec Anomaly Detection Dataset: A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection
by Steger, Carsten; Fauser, Michael; Batzner, Kilian
in Annotations; Anomalies; Artificial neural networks
2021
The detection of anomalous structures in natural image data is of utmost importance for numerous tasks in the field of computer vision. The development of methods for unsupervised anomaly detection requires data on which to train and evaluate new approaches and ideas. We introduce the MVTec anomaly detection dataset containing 5354 high-resolution color images of different object and texture categories. It contains normal, i.e., defect-free images intended for training and images with anomalies intended for testing. The anomalies manifest themselves in the form of over 70 different types of defects such as scratches, dents, contaminations, and various structural changes. In addition, we provide pixel-precise ground truth annotations for all anomalies. We conduct a thorough evaluation of current state-of-the-art unsupervised anomaly detection methods based on deep architectures such as convolutional autoencoders, generative adversarial networks, and feature descriptors using pretrained convolutional neural networks, as well as classical computer vision methods. We highlight the advantages and disadvantages of multiple performance metrics as well as threshold estimation techniques. This benchmark indicates that methods that leverage descriptors of pretrained networks outperform all other approaches and deep-learning-based generative models show considerable room for improvement.
Journal Article
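The abstract notes that methods leveraging descriptors of pretrained networks perform best on this benchmark. A minimal sketch of the underlying idea, scoring each test descriptor by its distance to the nearest defect-free training descriptor (the random vectors stand in for real CNN features and are purely illustrative):

```python
import numpy as np

def anomaly_score(test_feats, train_feats):
    """Anomaly score per test descriptor: Euclidean distance to the
    nearest defect-free training descriptor (higher = more anomalous)."""
    # Pairwise distances, shape (n_test, n_train)
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    return d.min(axis=1)

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(200, 64))   # descriptors of normal patches
normal = rng.normal(0.0, 1.0, size=(5, 64))    # in-distribution test patches
defect = rng.normal(4.0, 1.0, size=(5, 64))    # shifted distribution -> anomalous

# Anomalous patches score far higher than in-distribution ones.
```

Thresholding such scores, per pixel or per patch, is what the threshold-estimation techniques discussed in the paper address.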