Catalogue Search | MBRL
Explore the vast range of titles available.
99,340 result(s) for "Data centres"
Borderland circuitry : immigration surveillance in the United States and beyond
by Muñiz, Ana, 1984- author
in Immigration enforcement--United States--21st century. ; Immigration enforcement--21st century. ; Data mining in law enforcement.
2022
"Political discourse on immigration in the United States has largely focused on what is most visible, including border walls and detention centers, while the invisible information systems that undergird immigration enforcement have garnered less attention. Tracking the evolution of various surveillance-related systems since the 1980s, Borderland Circuitry investigates how the deployment of this information infrastructure has shaped immigration enforcement practices. Ana Muñiz illuminates three phenomena that are becoming increasingly intertwined: digital surveillance, immigration control, and gang enforcement. Using ethnography, interviews, and analysis of documents never before seen, Muñiz uncovers how information-sharing partnerships between local police, state and federal law enforcement, and foreign partners collide to create multiple digital borderlands. Diving deep into a select group of information systems, Borderland Circuitry reveals how those with legal and political power deploy the specter of violent cross-border criminals to justify intensive surveillance, detention, brutality, deportation, and the destruction of land for border militarization" -- Provided by publisher.
Taiwan’s National Health Insurance Research Database: past and future
by Lai, Edward Chia-Cheng ; Shao, Shih-Chieh ; Yang Kao, Yea-Huei
in Access control ; Analysis ; Artificial intelligence
2019
Taiwan's National Health Insurance Research Database (NHIRD) exemplifies a population-level data source for generating real-world evidence to support clinical decisions and health care policy-making. As with all claims databases, there have been validity concerns about studies using the NHIRD, such as the accuracy of diagnosis codes and issues around unmeasured confounders. Endeavors to validate diagnosis codes and to develop methodologic approaches to address unmeasured confounders have largely increased the reliability of NHIRD studies. Recently, Taiwan's Ministry of Health and Welfare (MOHW) established the Health and Welfare Data Center (HWDC), a data repository site that centralizes the NHIRD and about 70 other health-related databases for data management and analyses. To strengthen the protection of data privacy, investigators are required to conduct on-site analysis at an HWDC through a remote connection to MOHW servers. Although the tight regulation of this on-site analysis has inconvenienced analysts and increased the time and costs required for research, the HWDC has created opportunities for enriched dimensions of study by linking the NHIRD with other databases. In the near future, researchers will have greater opportunity to distill knowledge from the NHIRD linked to hospital-based electronic medical record databases containing unstructured patient-level information by using artificial intelligence techniques, including machine learning and natural language processing. We believe that the NHIRD linked with multiple data sources could represent a powerful research engine with enriched dimensions and could serve as a guiding light for real-world evidence-based medicine in Taiwan.
Journal Article
The environmental footprint of data centers in the United States
by Siddik, Md Abu Bakar ; Marston, Landon ; Shehabi, Arman
in carbon footprint ; Computer centers ; data center
2021
Much of the world's data are stored, managed, and distributed by data centers. Data centers require a tremendous amount of energy to operate, accounting for around 1.8% of electricity use in the United States. Large amounts of water are also required to operate data centers, both directly for liquid cooling and indirectly to produce electricity. For the first time, we calculate spatially detailed carbon and water footprints of data centers operating within the United States, which is home to around one-quarter of all data center servers globally. Our bottom-up approach reveals that one-fifth of data center servers' direct water footprint comes from moderately to highly water-stressed watersheds, while nearly half of servers are fully or partially powered by power plants located within water-stressed regions. Approximately 0.5% of total US greenhouse gas emissions are attributed to data centers. We investigate tradeoffs and synergies between data centers' water and energy utilization by strategically locating data centers in areas of the country that will minimize one or more environmental footprints. Our study quantifies the environmental implications of our data creation and storage and shows a path to decreasing the environmental footprint of our increasing digital footprint.
Journal Article
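The bottom-up accounting this abstract describes (direct cooling water plus water and carbon embedded in the facility's electricity draw) can be sketched in a few lines. All coefficients and inputs below are invented for illustration; they are not figures from the study.

```python
def data_center_footprints(annual_kwh, grid_carbon_kg_per_kwh,
                           cooling_water_l_per_kwh, electricity_water_l_per_kwh):
    """Return (carbon in kg CO2e, water in litres) for one facility.

    Water combines direct use (on-site cooling) and indirect use
    (water consumed to generate the electricity the facility draws).
    """
    carbon = annual_kwh * grid_carbon_kg_per_kwh
    water = annual_kwh * (cooling_water_l_per_kwh + electricity_water_l_per_kwh)
    return carbon, water

# Illustrative inputs (assumptions, not data from the paper):
carbon, water = data_center_footprints(
    annual_kwh=10_000_000,            # a 10 GWh/year facility
    grid_carbon_kg_per_kwh=0.4,       # regional grid carbon intensity
    cooling_water_l_per_kwh=1.8,      # direct cooling water
    electricity_water_l_per_kwh=3.1,  # water embedded in electricity
)
# carbon ≈ 4,000,000 kg CO2e; water ≈ 49,000,000 L
```

Because both footprints scale with the same electricity draw but with region-specific coefficients, siting decisions trade one footprint against the other, which is the tradeoff the study maps spatially.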
Energy efficiency in cloud computing data centers: a survey on software technologies
by Dahiya, Susheela ; Choudhury, Tanupriya ; Katal, Avita
in Cloud computing ; Computer centers ; Computer Communication Networks
2023
Cloud computing is a commercial and economic paradigm that has gained traction since 2006 and is presently the most significant technology in the IT sector. From the notion of cloud computing to its energy efficiency, the cloud has been the subject of much discussion. The energy consumption of data centres alone is projected to rise from 200 TWh in 2016 to 2967 TWh in 2030. Data centres require a lot of power to provide services, which increases CO2 emissions. This survey paper discusses software-based technologies that can be used for building green data centres, including power management at the individual software level. The paper discusses energy efficiency in containers and problem-solving approaches used for reducing power consumption in data centres. Further, the paper gives details about the impact of data centres on the environment, including e-waste and the various standards adopted by different countries for rating data centres. This article goes beyond demonstrating new green cloud computing possibilities; it focuses the attention and resources of academia and society on a critical issue: long-term technological advancement. The article covers new technologies that can be applied at the individual software level, including techniques applied at the virtualization, operating system, and application levels. It defines measures at each level to reduce energy consumption, which adds value to the current environmental problem of pollution reduction. This article also addresses the difficulties, concerns, and needs that cloud data centres and cloud organisations must grasp, as well as some of the factors and case studies that influence green cloud usage.
Journal Article
Optical storage arrays: a perspective for future big data storage
by Cao, Yaoyu ; Li, Xiangping ; Gu, Min
in 639/624/1075/397 ; 639/624/400/1021 ; Applied and Technical Physics
2014
The advance of nanophotonics has provided a variety of avenues for light–matter interaction at the nanometer scale through the enriched mechanisms for physical and chemical reactions induced by nanometer-confined optical probes in nanocomposite materials. These emerging nanophotonic devices and materials have enabled researchers to develop disruptive methods of tremendously increasing the storage capacity of current optical memory. In this paper, we present a review of the recent advancements in nanophotonics-enabled optical storage techniques. In particular, we offer our perspective on using them as optical storage arrays for next-generation exabyte data centers.
Data storage: Nanophotonics promise
The science and technology of nanophotonics can help dramatically increase the capacity of optical discs. After reviewing research into next-generation optical data storage, Min Gu, Xiangping Li and Yaoyu Cao from the Swinburne University of Technology in Australia have offered their perspective on the creation of exabyte-scale optical data centers. They report that developments in 'super-resolution recording', which allow a light-sensitive material to be exposed to a focal spot smaller than the diffraction limit of light, will allow recorded bits to shrink to just a few nanometres in size. This would ultimately allow a single disc to store petabytes of data and thus constitute a key component in optical storage arrays for ultrahigh-capacity optical data centers.
Journal Article
Machine Learning for Data Center Optimizations: Feature Selection Using Shapley Additive exPlanation (SHAP)
by De Chiara, Davide ; Chinnici, Marta ; Nixon, Sebastian
in Artificial intelligence ; Automation ; Computer centers
2023
The need for artificial intelligence (AI) and machine learning (ML) models to optimize data center (DC) operations increases as the volume of operations management data grows tremendously. These strategies can assist operators in better understanding their DC operations and help them make informed decisions upfront to maintain service reliability and availability. The strategies include developing models that optimize energy efficiency, identifying inefficient resource utilization and scheduling policies, and predicting outages. In addition to model hyperparameter tuning, feature subset selection (FSS) is critical for identifying relevant features for effectively modeling DC operations to provide insight into the data, optimize model performance, and reduce computational expenses. Hence, this paper introduces the Shapley Additive exPlanation (SHAP) values method, a class of additive feature attribution values for identifying relevant features that is rarely discussed in the literature. We compared its effectiveness with several commonly used, importance-based feature selection methods. The methods were tested on real DC operations data streams obtained from the ENEA CRESCO6 cluster with 20,832 cores. To demonstrate the effectiveness of SHAP compared to other methods, we selected the top ten most important features from each method, retrained the predictive models, and evaluated their performance using the MAE, RMSE, and MAPE evaluation criteria. The results presented in this paper demonstrate that the predictive models trained using features selected with the SHAP-assisted method performed well, with a lower error and a reasonable execution time compared to other methods.
Journal Article
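The SHAP attribution the abstract describes builds on Shapley values from cooperative game theory: each feature's importance is its average marginal contribution to the prediction over all feature subsets. A minimal brute-force sketch follows; the model, inputs, and baseline are invented for illustration and are not from the CRESCO6 data (practical SHAP implementations approximate this sum, which is exponential in the number of features).

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attributions for one input x.

    'Absent' features are replaced by their baseline value;
    predict maps a feature vector to a scalar prediction.
    """
    n = len(x)
    def v(present):  # prediction with only the features in `present` switched on
        return predict([x[i] if i in present else baseline[i] for i in range(n)])
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (v(set(subset) | {i}) - v(set(subset)))
    return phi

# Hypothetical linear proxy for a DC metric (coefficients invented):
predict = lambda z: 2.0 * z[0] + 0.5 * z[1] - 1.0 * z[2] + 10.0
phi = shapley_values(predict, x=[3.0, 4.0, 1.0], baseline=[0.0, 0.0, 0.0])
# For a linear model, phi_i = w_i * (x_i - baseline_i): [6.0, 2.0, -1.0]
```

Ranking features by the magnitude of such attributions (averaged over many inputs) and keeping the top ten is the selection step the paper compares against other importance-based methods.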
Data Transmission with Aggregation and Mitigation Model Through Probabilistic Model in Data Center
2024
With the increasing demand for data storage and processing, data centers have become critical infrastructures. Efficient data transmission and aggregation in data centers are essential for improving performance and reducing energy consumption. This research paper presents a novel approach called DAWPM (Data Aggregation Weighted Probabilistic Model) specifically designed for data centers. DAWPM leverages probabilistic models to dynamically adjust data transmission and aggregation strategies based on network conditions, effectively mitigating congestion and improving overall system performance. The proposed model optimizes data aggregation algorithms to reduce the amount of transmitted data while maintaining data accuracy and minimizing the impact on system resources. It employs probabilistic algorithms to analyze data patterns and make informed decisions on data aggregation and transmission. Simulation results demonstrate that DAWPM outperforms existing models in terms of data accuracy, communication overhead, energy consumption, and packet loss rate. The proposed model offers a reliable and efficient solution for data transmission in data centers, enabling improved data processing, reduced network congestion, and enhanced overall system performance.
Journal Article
Spatio-temporal data fusion techniques for modeling digital twin City
2025
The digital twin city technique maps massive city environmental and social data onto a three-dimensional virtual model. It presents the operational status of the physical world and supports intelligent city governance. However, the inefficient utilization of distributed data resources and the lack of sharing and collaboration among multiple departments restrict the data formulation of digital twin city construction. This research proposes a new cross-domain spatio-temporal data fusion framework for supporting complex urban governance. It integrates the heterogeneous urban information generated and stored by different government departments using multiple information techniques. A specified geographic base reflecting the real city status is established, using geographical entities with unified addresses as identifiers to encapsulate urban element information. We introduce a comprehensive urban spatio-temporal data center construction process, which has already supported multiple urban governance projects. The two distinct advantages of this data fusion system are: 1) the proposed Bert+PtrNet+ESIM-based address mapping method associates urban element information with its corresponding geographic entities with a 99.3% F1-score on a real-world dataset; 2) the operation of the Wuhan spatio-temporal data center illustrates the capability of our framework for complex urban governance, significantly improving the efficiency of urban management and services. This integrated system engineering provides reference and inspiration for further spatio-temporal data management, contributing to future social governance on digital twin city platforms.
Journal Article
Application of Data Mining Technology Based on Data Center
2022
Data mining technology refers to the use of mathematics, statistics, computer science, and other methods to process large amounts of information to obtain useful conclusions and support valuable decisions. With the rapid development and popularization of the Internet and the increasingly extensive application of computers in various fields, data mining technology has become a hot research area. This paper studies data mining technology based on the data center. First, the paper explains the definition of data mining and studies the data mining process and the steps for processing data. Then, it designs a data mining framework and tests the performance of the algorithm. Finally, the test results show that data mining technology can meet the target requirements well.
Journal Article