8,062 result(s) for "Utility computing"
Information Superbahn: Towards a Planet-Scale, Low-Entropy and High-Goodput Computing Utility
In a 1961 lecture celebrating MIT's centennial, John McCarthy proposed the vision of utility computing, built on three key concepts: pay-per-use service, large computer, and private computer. Six decades have passed, but McCarthy's computing utility vision has not yet been fully realized, despite advances in grid computing, services computing, and cloud computing. This paper presents a perspective on computing utility called the Information Superbahn, building on recent advances in cloud computing. This perspective retains McCarthy's vision as much as possible while making essential modern requirements explicit, in the new context of a networked world of billions of users, trillions of devices, and zettabytes of data. The computing utility offers pay-per-use computing services through a 1) planet-scale, 2) low-entropy, and 3) high-goodput utility. These three salient characteristics are elaborated, and initial evidence is provided to support this viewpoint.
Energy efficiency in cloud computing data centers: a survey on software technologies
Cloud computing is a commercial and economic paradigm that has gained traction since 2006 and is presently the most significant technology in the IT sector. From the notion of cloud computing to its energy efficiency, the cloud has been the subject of much discussion. The energy consumption of data centers alone is projected to rise from 200 TWh in 2016 to 2967 TWh in 2030. Data centers require a great deal of power to provide services, which increases CO2 emissions. This survey discusses software-based technologies that can be used to build green data centers, including power management at the individual software level. The paper discusses energy efficiency in containers and problem-solving approaches used to reduce power consumption in data centers. It also details the environmental impact of data centers, including e-waste and the standards adopted by different countries for rating data centers. This article goes beyond demonstrating new green cloud computing possibilities; it focuses the attention and resources of academia and society on a critical issue: long-term technological advancement. The article covers new techniques that can be applied at the individual software level, spanning the virtualization, operating system, and application levels, and defines measures at each level to reduce energy consumption, adding value to the current environmental problem of pollution reduction. It also addresses the difficulties, concerns, and needs that cloud data centers and cloud organizations must grasp, as well as some of the factors and case studies that influence green cloud adoption.
Geographical Area Network—Structural Health Monitoring Utility Computing Model
In view of intensified disasters and fatalities caused by natural phenomena and geographical expansion, there is a pressing need for more effective environment logging for better management and urban planning. This paper proposes a novel utility computing model (UCM) for structural health monitoring (SHM) that enables dynamic planning of monitoring systems in an efficient and cost-effective manner, in the form of an SHM geo-informatics system. The proposed UCM consists of networked SHM systems that send geometrical SHM variables to SHM-UCM gateways. Each gateway routes the data to SHM-UCM servers running a geo-spatial patch health assessment and prediction algorithm, whose inputs are geometrical variables, environmental variables, and payloads. The proposed SHM-UCM is unique in its capability to manage heterogeneous SHM resources. It was tested in a case study at Qatar University (QU) in Doha, Qatar, examining how SHM nodes are distributed along with the occupancy density of each building. This information, taken from QU routers and zone calculation models, was then compared to ideal SHM system data. Results show the effectiveness of the proposed model in logging and dynamically planning SHM.
MAFC: Multi-Agent Fog Computing Model for Healthcare Critical Tasks Management
In healthcare applications, numerous sensors and devices produce massive amounts of data that are the focus of critical tasks. These can be managed at the edge of the network by a Fog computing implementation. However, Fog Nodes suffer from a lack of resources that can limit the time needed for final outcomes/analytics, so each Fog Node can perform only a small number of tasks. A difficult decision concerns which tasks to perform locally; each node should select such tasks carefully based on current contextual information, for example, task priority, resource load, and resource availability. In this paper we propose a Multi-Agent Fog Computing (MAFC) model for healthcare critical tasks management. The main role of the multi-agent system is to map between three decision tables to optimize the scheduling of critical tasks, matching tasks with their priority, the load on the network, and network resource availability. The first step is to decide whether a critical task can be processed locally; otherwise, the second step involves selecting the most suitable neighbor Fog Node to which to allocate it. If no Fog Node in the network is capable of processing the task, it is sent to the Cloud, incurring the highest latency. We test the proposed scheme thoroughly using the iFogSim simulator and UTeM clinic data, demonstrating its applicability and optimality at the edge of the network.
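The three-step placement decision described in this abstract can be sketched as follows. This is an illustrative outline, not the paper's actual MAFC implementation; the node fields, the CPU-based capacity measure, and the least-loaded-neighbor rule are all assumptions.

```python
# Hypothetical sketch of the three-step task placement: run locally,
# offload to a suitable neighbor fog node, or fall back to the cloud.

def place_task(task, local_node, neighbors):
    """Return where to run `task`: 'local', a neighbor id, or 'cloud'."""
    # Step 1: run locally if this fog node has enough free capacity.
    if local_node["free_cpu"] >= task["cpu"]:
        return "local"
    # Step 2: among neighbors that can fit the task, pick the one
    # with the most free capacity (a stand-in for the decision tables).
    candidates = [n for n in neighbors if n["free_cpu"] >= task["cpu"]]
    if candidates:
        best = max(candidates, key=lambda n: n["free_cpu"])
        return best["id"]
    # Step 3: no fog node can host the task; send it to the cloud,
    # incurring the highest latency.
    return "cloud"
```

A real scheduler would weigh task priority and network load as the paper describes, not free CPU alone.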
Mobile crowd computing: potential, architecture, requirements, challenges, and applications
Owing to the enormous advancement in miniature hardware, modern smart mobile devices (SMDs) have become computationally powerful. Mobile crowd computing (MCC) is a computing paradigm that uses publicly owned SMDs to garner affordable high-performance computing (HPC). Though several empirical works have established the feasibility of mobile-based computing for various applications, there is a lack of comprehensive coverage of MCC. This paper aims to explore the fundamentals and finer details of MCC in a comprehensive manner. Starting with an explicit definition of MCC, the enabling backdrops and detailed architectural layouts of different MCC models are presented, along with a categorisation of MCC types based on infrastructure and application demands. MCC is compared extensively with other HPC systems (e.g. desktop grids, clouds, clusters, and supercomputers) and similar mobile computing systems (e.g. mobile grid, mobile cloud, ad hoc mobile cloud, and mobile crowdsourcing). As MCC is a complex system, various design requirements and considerations are analysed in depth. The potential benefits of MCC are described, with special discussion of its ubiquity and sustainability. The issues and challenges of MCC are critically presented in light of further research scope. Several real-world applications of MCC are identified and propositioned. Finally, to carry forward the MCC vision, future prospects are briefly elucidated.
Security issues in cloud environments: a survey
In the last few years, the appealing features of cloud computing have been fueling the integration of cloud environments in industry, which has consequently motivated research on related technologies by both industry and academia. The possibility of pay-as-you-go pricing combined with on-demand elastic operation is changing the enterprise computing model, shifting on-premises infrastructures to off-premises data centers, accessed over the Internet and managed by cloud hosting providers. Regardless of its advantages, the transition to this computing paradigm raises security concerns, which are the subject of several studies. Besides the issues derived from Web technologies and the Internet, clouds introduce new issues that should be cleared up first in order to allow the number of cloud deployments to increase. This paper surveys the work on cloud security issues, making a comprehensive review of the literature on the subject. It addresses several key topics, namely vulnerabilities, threats, and attacks, proposing a taxonomy for their classification. It also contains a thorough review of the main concepts concerning the security state of cloud environments and discusses several open research topics.
Understanding the determinants of cloud computing adoption
Purpose - The purpose of this paper is to investigate the factors that affect the adoption of cloud computing by firms in the high-tech industry. The eight factors examined are relative advantage, complexity, compatibility, top management support, firm size, technology readiness, competitive pressure, and trading partner pressure. Design/methodology/approach - A questionnaire-based survey was used to collect data from 111 firms in the high-tech industry in Taiwan. Relevant hypotheses were derived and tested by logistic regression analysis. Findings - The findings revealed that relative advantage, top management support, firm size, competitive pressure, and trading partner pressure have a significant effect on the adoption of cloud computing. Research limitations/implications - The research was conducted in the high-tech industry, which may limit the generalisability of the findings. Practical implications - The findings offer cloud computing service providers a better understanding of what affects cloud computing adoption, with relevant insight for current promotions. Originality/value - The research contributes to the study of cloud computing adoption in the high-tech industry through the use of a wide range of variables. The findings also help firms weigh their information technology investments when implementing cloud computing.
The importance of nature-inspired meta-heuristic algorithms for solving virtual machine consolidation problem in cloud environments
Nowadays, cloud computing is an internet-based paradigm among emerging technologies, providing an environment in which computing resources such as hardware, software, and storage can be rented by cloud users on a pay-per-use model. Since the scale of cloud computing is widely expanding and the number of cloud users increases day by day, high energy consumption has become a serious concern in the operation of complex cloud data centers. In this regard, Virtual Machine (VM) consolidation plays a vital role in utilizing cloud resources efficiently. It migrates running VMs from overloaded Physical Machines (PMs) to other PMs, considering multiple factors such as migration overhead, energy consumption, resource utilization, and migration time. Since VM consolidation is known to be an NP-hard problem, various nature-inspired meta-heuristic algorithms have been applied to it in recent years. However, a systematic and detailed survey of this field has been lacking. This gap motivated the current paper, which aims to highlight the role of nature-inspired meta-heuristic algorithms in the VM consolidation problem, review existing approaches, offer a detailed comparison of approaches based on important factors, and, finally, outline future directions.
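The basic move that the surveyed algorithms optimize can be illustrated with a minimal greedy baseline: migrate VMs off overloaded PMs to hosts with spare capacity. The meta-heuristics in the reviewed papers search this space far more thoroughly; the data layout and the 0.8 utilization threshold below are assumptions for illustration only.

```python
# Minimal greedy VM consolidation baseline (not a nature-inspired
# meta-heuristic): move the largest VM off each overloaded PM to the
# least-loaded PM that can host it without itself becoming overloaded.

def consolidate(pms, threshold=0.8):
    """Mutates `pms` in place; returns a list of (vm_size, src_id, dst_id) moves."""
    migrations = []
    for pm in pms:
        while sum(pm["vms"]) > threshold * pm["capacity"]:
            vm = max(pm["vms"])  # largest VM frees the most capacity per move
            targets = [t for t in pms if t is not pm
                       and sum(t["vms"]) + vm <= threshold * t["capacity"]]
            if not targets:
                break  # no feasible target; leave the PM overloaded
            target = min(targets, key=lambda t: sum(t["vms"]))
            pm["vms"].remove(vm)
            target["vms"].append(vm)
            migrations.append((vm, pm["id"], target["id"]))
    return migrations
```

A real consolidation objective would also score migration overhead, energy, and migration time, which is exactly where the meta-heuristic search pays off.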
An Analysis of Cloud Security Frameworks, Problems and Proposed Solutions
The rapidly growing use of cloud computing raises security concerns. This paper seeks to examine cloud security frameworks, addressing cloud-associated issues and suggesting solutions. The research provides greater knowledge of the various frameworks, assisting in making educated decisions about selecting and implementing suitable security measures for cloud-based systems. The study begins with an introduction to cloud technology, its issues, and frameworks for securing infrastructure, followed by an examination of the various cloud security frameworks available in the industry. A full comparison is performed to assess each framework's focus, scope, approach, strengths, limitations, implementation steps, and the tools required in the implementation process. The frameworks covered are COBIT 5, NIST (National Institute of Standards and Technology), ISO (International Organization for Standardization), CSA (Cloud Security Alliance) STAR, and the AWS (Amazon Web Services) Well-Architected Framework. The study then identifies and analyzes prevalent cloud security issues, including attack vectors inherent in cloud settings, the risk factors of top cloud security threats and their effects on cloud platforms, and ideas and countermeasures to reduce the observed difficulties.
Cataloging health state utility estimates for Duchenne muscular dystrophy and related conditions
Duchenne muscular dystrophy (DMD) is a genetic disease resulting in progressive muscle weakness, loss of ambulation, and cardiorespiratory complications. Direct estimation of health-related quality of life for patients with DMD is challenging, highlighting the need for proxy measures. This study aims to catalog and compare existing published health state utility estimates for DMD and related conditions. Using two search strategies, relevant utilities were extracted from the Tufts Cost-Effectiveness Analysis Registry, including health states, utility estimates, and study and patient characteristics. Analysis One identified health states with utility estimates comparable to a set of published US patient population utility estimates for DMD. A minimal clinically important difference of ±0.03 was applied to each DMD utility estimate to establish a range, and the registry was searched to identify other health states with associated utilities that fell within each range. Analysis Two used pre-defined search terms to identify health states clinically similar to DMD, with mapping based on the degree of clinical similarity. Analysis One identified 4,308 unique utilities across 2,322 cost-effectiveness publications. The health states captured a wide range of acute and chronic conditions; 34% of utility records were extrapolated for US populations (n = 1,451) and 1% related to pediatric populations (n = 61). Analysis Two identified 153 utilities with health states clinically similar to DMD. The median utility estimates varied among the identified health states. Health states similar to the early non-ambulatory DMD phase exhibited the greatest difference between the median estimate of the sample (0.39) and the existing estimate from the published literature (0.21). When available estimates are limited, novel search strategies to identify utilities of clinically similar conditions could be an approach to overcoming the information gap. However, this requires careful evaluation of the utility instruments, tariffs, and raters (proxy or self).
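Analysis One's range search amounts to a simple band filter around each published DMD estimate. A minimal sketch, with made-up registry rows standing in for the Tufts registry data, which are not reproduced here:

```python
# Keep registry health states whose utility falls within a published
# DMD estimate plus or minus the minimal clinically important difference.

MCID = 0.03  # the +/- 0.03 band described in the abstract

def comparable_states(dmd_utility, registry):
    """Return registry rows whose utility lies within dmd_utility +/- MCID."""
    low, high = dmd_utility - MCID, dmd_utility + MCID
    return [row for row in registry if low <= row["utility"] <= high]
```

For example, the published early non-ambulatory estimate of 0.21 would match registry states with utilities between 0.18 and 0.24.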