Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
10,202 result(s) for "Data replication"
An efficient and improved multi-objective optimized replication management with dynamic and cost aware strategies in cloud computing data center
by Thanka, M. Roshni; Umamaheswari, P.; Edwin, E. Bijolin
in Algorithms, Availability, Cloud computing
2019
Cloud computing allows ICT-based service providers to offer online services on a utility basis, sharing the resources of different data centers and opening up enormous opportunities. As the number of replicas of a data file increases, the performance and availability of the data also increase. This paper proposes a dynamic, cost-aware data replication method based on optimization, which identifies the minimum amount of replication required to ensure that data availability grows as the replication process proceeds. A multi-objective optimization strategy moves replicas from higher-cost data centers to lower-cost data centers using an improved knapsack algorithm while taking availability and load balancing into account. File replication manages data effectively: it reduces file service time and access latency, increases file availability, and improves the system through load balancing. The proposed efficient and improved multi-objective optimized replication management (EIMORM) scheme finds solutions by balancing among these optimization objectives. Experiments show that EIMORM achieves better energy efficiency and performance than the replication system of the Hadoop Distributed File System (HDFS) and a multi-objective evolutionary algorithm, with good performance and load balancing in cloud storage clusters, while also taking the bandwidth of the system into account. The simulation results can therefore serve as guidance on energy efficiency in data replication.
Journal Article
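As a rough illustration of the knapsack idea mentioned in the abstract above (moving replicas from higher-cost to lower-cost data centres under a transfer budget), the following minimal Python sketch uses a classic 0/1 knapsack to pick which replicas to migrate. All names, sizes and savings are invented; EIMORM's actual multi-objective formulation is not reproduced here.

```python
# Hypothetical sketch of knapsack-based replica migration, loosely following the
# idea in the abstract; the real EIMORM formulation is multi-objective and differs.

from dataclasses import dataclass

@dataclass
class Replica:
    name: str
    size_gb: int        # migration "weight": data that must be transferred
    saving: float       # storage-cost saving if moved to the cheaper data centre

def select_replicas_to_migrate(replicas, budget_gb):
    """0/1 knapsack: maximise total cost saving under a transfer budget."""
    n = len(replicas)
    # dp[w] = best saving achievable with at most w GB of transfer
    dp = [0.0] * (budget_gb + 1)
    keep = [[False] * (budget_gb + 1) for _ in range(n)]
    for i, r in enumerate(replicas):
        for w in range(budget_gb, r.size_gb - 1, -1):
            if dp[w - r.size_gb] + r.saving > dp[w]:
                dp[w] = dp[w - r.size_gb] + r.saving
                keep[i][w] = True
    # Backtrack to recover the chosen set of replicas
    chosen, w = [], budget_gb
    for i in range(n - 1, -1, -1):
        if keep[i][w]:
            chosen.append(replicas[i].name)
            w -= replicas[i].size_gb
    return dp[budget_gb], chosen

if __name__ == "__main__":
    candidates = [Replica("log-2019-q1", 40, 3.2),
                  Replica("sensor-archive", 120, 7.5),
                  Replica("user-profiles", 60, 5.1)]
    saving, moved = select_replicas_to_migrate(candidates, budget_gb=150)
    print(f"saving={saving:.1f}, migrate={moved}")
```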
A novel Location-Aware job scheduling framework for optimizing Fog-Cloud IoT systems: insights from dynamic traffic management
by Zhou, Xiaomeng; Zhu, Mingjun; Yu, Xiaomo
in Algorithms, Cloud computing, Computer Communication Networks
2025
The rapid rise of IoT devices, which are expected to exceed 75 billion by 2025 and generate 175 zettabytes of data each year, has exposed the latency and bandwidth limitations of traditional cloud computing, which is why fog-cloud architectures are needed for real-time IoT processing. This paper proposes the DLSFC-Enhanced (DLSFC-E) algorithm, which builds on the Data-Locality Aware Job Scheduling in Fog-Cloud (DLSFC) technique and uses a multi-objective optimization framework to address these problems. DLSFC-E uses a Directed Acyclic Graph (DAG) to model task dependencies, adds dynamic data replication based on observed usage patterns, and includes realistic network dynamics (bandwidth 10–100 Mbps ± 20%, latency 1–10 ms ± 15%) in simulations of a three-layer IoT-fog-cloud system with 10 fog nodes. The method is validated with CloudSim 4.0 simulations and with real-world traffic statistics from Amsterdam on a 5-node physical testbed. The results show that DLSFC-E reaches 85% agreement with optimal Linear Programming (LP) solutions, cuts the makespan by 2.8 to 5.2 times compared with centralized methods, and lowers migration costs by 15% compared with DLSFC. It improves runtime scalability by 40% for workloads of 1,000 or more tasks and energy efficiency by 14%, with 92% of tasks keeping latency below 10 ms. These results, tested on datasets ranging from 30 to 600 MB, show that DLSFC-E is robust enough for large-scale IoT deployments. The study concludes that DLSFC-E is a scalable and efficient scheduling approach, though open problems remain, such as tolerance to network failures and adaptive task weighting.
Journal Article
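The abstract above models task dependencies as a DAG and favours data locality when placing tasks on fog or cloud nodes. The sketch below is not DLSFC-E itself, only a generic greedy, locality-aware list scheduler under an invented cost model (transfer time plus compute time), with made-up node speeds and data sizes.

```python
# Illustrative sketch only: a greedy, locality-aware list scheduler over a task DAG.

from collections import deque

def topological_order(deps):
    """deps: task -> set of prerequisite tasks."""
    indeg = {t: len(p) for t, p in deps.items()}
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for s, p in deps.items():
            if t in p:
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
    return order

def schedule(tasks, deps, nodes):
    """Place each task on the node with the earliest estimated finish time,
    charging a transfer penalty when its input data lives on another node."""
    free_at = {n: 0.0 for n in nodes}      # when each node next becomes free
    finish = {}                            # per-task finish time
    placement = {}
    for t in topological_order(deps):
        work, data_node, data_mb = tasks[t]
        ready_at = max((finish[p] for p in deps[t]), default=0.0)
        best = None
        for n, spec in nodes.items():
            transfer = 0.0 if n == data_node else data_mb / spec["mbps"]
            done = max(free_at[n], ready_at) + transfer + work / spec["speed"]
            if best is None or done < best[1]:
                best = (n, done)
        placement[t] = best[0]
        free_at[best[0]] = finish[t] = best[1]
    return placement

if __name__ == "__main__":
    nodes = {"fog-1": {"speed": 1.0, "mbps": 50},
             "cloud": {"speed": 2.0, "mbps": 5}}
    # task -> (work units, node holding its input data, input size in MB)
    tasks = {"ingest": (10, "fog-1", 30),
             "analyse": (40, "fog-1", 30),
             "report": (5, "cloud", 5)}
    deps = {"ingest": set(), "analyse": {"ingest"}, "report": {"analyse"}}
    print(schedule(tasks, deps, nodes))
```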
A novel dynamic data replication strategy to improve access efficiency of cloud storage
2020
Cloud computing provides on-demand services to cloud users, and one of them is storage. Currently, a large amount of data is generated, demanding enormous storage. Users can store their data remotely and access it through the Internet, and the cloud offers the kind of storage the user wants. As data accumulates, storing and retrieving it becomes slow and difficult, and the existing storage methods need to be optimized for better performance. The factors that affect the performance of cloud storage are response time, data availability and migration cost. To improve these factors, data can be replicated to multiple locations. Deciding which data to replicate, how many replicas to create, where to place the replicas, how to manage the replicated data and how to provide the optimal replica to the user are the major challenges involved in dynamic replication. We propose a novel dynamic data replication strategy with an intelligent water drop (IWD) algorithm to address these replication challenges and manage cloud storage. The popularity and size of the data are considered for replication. A swarm-intelligence-based optimization algorithm, the IWD algorithm, is used to optimize the replication process and the management of storage in the cloud. We compared our D2R-IWD algorithm with popular optimization techniques such as PSO and GA and found that our methodology gives better results in terms of access efficiency for several test cases, thereby improving the performance of the cloud.
Journal Article
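As a loose illustration of the popularity- and size-driven replication decision described above, this sketch maps access popularity and file size to a replica count; the thresholds are arbitrary, and the paper's intelligent water drop (IWD) placement optimisation is not reproduced.

```python
# Minimal sketch of a popularity- and size-driven replica-count decision.
# Fields and thresholds are invented; the paper optimises placement with IWD.

import math
from dataclasses import dataclass

@dataclass
class FileStats:
    name: str
    size_mb: float
    accesses: int          # accesses observed for this file in the current window
    total_accesses: int    # accesses across all files in the window

def replica_count(f: FileStats, max_replicas: int = 5) -> int:
    """More replicas for hot files, fewer for very large ones."""
    popularity = f.accesses / max(f.total_accesses, 1)
    size_penalty = 1.0 / (1.0 + math.log10(1.0 + f.size_mb))
    score = popularity * size_penalty
    # Map the score onto 1..max_replicas (scaling is illustrative only)
    return max(1, min(max_replicas, round(score * 10 * max_replicas)))

if __name__ == "__main__":
    hot_small = FileStats("index.db", 50, 4000, 10000)
    cold_big = FileStats("archive.tar", 50000, 20, 10000)
    print(replica_count(hot_small), replica_count(cold_big))   # -> 5 1
```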
The Incremental Load Balance Cloud Algorithm by Using Dynamic Data Deployment
2019
The rapid advancement of network technology has changed the way the world operates and has produced a large number of network application services for users. To provide more convenient services, network service providers need to offer a more stable and higher-capacity system, and cloud computing technology has therefore developed rapidly over the recent decade. Network service providers can reduce the cost of cloud computing services by using virtualization and data replication techniques, but an efficient data duplication strategy is necessary to reduce the workload and enhance the capability of the system. This paper therefore proposes a three-phase Dynamic Data Replication Algorithm (DDRA) for deploying resources, to improve the efficiency of data duplication in a cloud storage system. In the first two phases, the proposed algorithm determines suitable service nodes to balance the workload according to the service nodes' current loads. In the third phase, a dynamic duplication deployment scheme achieves higher access performance and better load balancing between service nodes across the overall environment. As a result, the proposed algorithm can enhance availability, access efficiency and load balancing in a hierarchical cloud computing environment.
Journal Article
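The following is only a toy version of the load-aware placement step that a scheme like the three-phase DDRA performs: new replicas go to the currently least-loaded nodes. The load model (one unit per replica) is an assumption made for illustration, not taken from the paper.

```python
# Hedged sketch of greedy, load-aware replica placement.

import heapq

def place_replicas(node_load, num_replicas):
    """Choose the num_replicas least-loaded distinct nodes for the new copies and
    charge each with one extra unit of load (a simple greedy balancing step)."""
    chosen = heapq.nsmallest(num_replicas, node_load, key=node_load.get)
    for n in chosen:
        node_load[n] += 1          # reflect the new replica in the workload estimate
    return chosen

if __name__ == "__main__":
    loads = {"node-a": 12, "node-b": 3, "node-c": 7}
    print(place_replicas(loads, num_replicas=2))   # -> ['node-b', 'node-c']
```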
Efficient data integrity and data replication in cloud using stochastic diffusion method
2019
Cloud computing provides scalable computing and storage resources on which increasingly data-intensive applications are developed. Owing to security threats in the cloud, several mechanisms have been proposed that allow users to audit the integrity of data, together with the public key of the data owner, before using the cloud data. Replicating data across multiple data centers offers better availability, scalability, and durability. The correctness of the choice of public key in previous mechanisms rests on the security of the public key infrastructure (PKI), and although traditional PKI has been widely used to build public key cryptography, it still faces many security risks, especially in certificate management. Different applications also have different quality of service (QoS) needs. To support these QoS requirements continuously in the presence of data corruption, this work proposes an efficient data-integrity-aware replication scheme based on a stochastic diffusion search (SDS) algorithm. SDS is a multi-agent global optimisation technique inspired by the behaviour of ants, rooted in the partial evaluation of an objective function and direct communication among agents. The proposed SDS algorithm minimizes the replication cost of data. Experimental results demonstrate the effectiveness of the proposed algorithm for data replication and recovery. Compared with the cost-effective dynamic data replication scheme of Li et al., the proposed method reduces the average recovery time by 18.18% for 250 requested nodes, by 14.28% for 500, by 11.11% for 750, and by 8.69% for 1,000.
Journal Article
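To make the stochastic diffusion search (SDS) mechanism mentioned above concrete, here is a small, generic SDS loop adapted to picking a low-cost replica site. The sites, costs and partial test are invented; the paper applies SDS to integrity-aware replication, which is not reproduced here.

```python
# Generic SDS loop: test hypotheses partially, then let inactive agents copy
# hypotheses from randomly chosen agents (diffusion) or re-sample at random.

import random

SITES = {"dc-east": [3, 5, 2, 4], "dc-west": [7, 6, 8, 5], "dc-eu": [2, 3, 3, 2]}
# Each list holds per-client access costs; the objective is to find a cheap site.

def partial_test(site):
    """Evaluate one randomly chosen component of the objective (cheap, partial)."""
    return random.choice(SITES[site]) <= 4       # 'good enough' threshold, illustrative

def sds(n_agents=30, iterations=50, seed=1):
    random.seed(seed)
    hyps = [random.choice(list(SITES)) for _ in range(n_agents)]
    for _ in range(iterations):
        # Test phase: each agent partially evaluates its hypothesis
        active = [partial_test(h) for h in hyps]
        # Diffusion phase: inactive agents copy from a random agent or re-sample
        for i in range(n_agents):
            if not active[i]:
                j = random.randrange(n_agents)
                hyps[i] = hyps[j] if active[j] else random.choice(list(SITES))
    # The largest cluster of agents indicates the preferred site
    return max(set(hyps), key=hyps.count)

if __name__ == "__main__":
    print(sds())   # expected to converge on a low-cost site such as 'dc-eu'
```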
A File Group Data Replication Algorithm for Data Grids
by Rahmani, Amir Masoud; Daniel, Helder A.; Azari, Leila
in Access time, Algorithms, Computer Science
2017
In recent years data grids have been deployed and have grown in many scientific experiments and data centers. The deployment of such environments has allowed grid users to access large amounts of distributed data. Data replication is a key issue in a data grid and should be applied intelligently because it reduces data access time and bandwidth consumption for each grid site; the area is therefore challenging and offers much scope for improvement. In this paper, we introduce a new dynamic data replication algorithm named Popular File Group Replication (PFGR), which is based on three assumptions: first, users in a grid site (virtual organization) have similar interests in files; second, file accesses exhibit temporal locality; and third, all files are read-only. Based on the file access history and the first assumption, PFGR builds a connectivity graph for a group of dependent files in each grid site and replicates the most popular file group to the requesting grid site. Afterwards, when a user of that grid site needs some of those files, they are available locally. The simulation results show that our algorithm increases performance by minimizing the mean job execution time and bandwidth consumption, and avoids unnecessary replication.
Journal Article
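The grouping step described above (building a connectivity graph from file access history and replicating the most popular group) can be illustrated with a toy Python sketch; the co-access threshold and job histories below are invented and do not come from the paper.

```python
# Toy reconstruction of the file-grouping idea: co-access graph -> connected
# components -> pick the group with the highest total access count.

from collections import Counter, defaultdict
from itertools import combinations

def popular_file_group(job_histories, min_coaccess=2):
    pair_counts = Counter()
    file_counts = Counter()
    for files in job_histories:                 # each job lists the files it read
        file_counts.update(files)
        for a, b in combinations(sorted(set(files)), 2):
            pair_counts[(a, b)] += 1
    # Keep only sufficiently frequent co-accesses as graph edges
    adj = defaultdict(set)
    for (a, b), c in pair_counts.items():
        if c >= min_coaccess:
            adj[a].add(b)
            adj[b].add(a)
    # Connected components = file groups
    seen, groups = set(), []
    for f in file_counts:
        if f in seen:
            continue
        stack, comp = [f], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        groups.append(comp)
    # Replicate the group with the highest total access count
    return max(groups, key=lambda g: sum(file_counts[f] for f in g))

if __name__ == "__main__":
    history = [["a", "b"], ["a", "b", "c"], ["a", "b"], ["d", "e"], ["d", "e"]]
    print(popular_file_group(history))   # -> {'a', 'b'} (c co-accessed only once)
```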
On-Grid GPU development
by Britton, David; Borbely, Albert; Skipsey, Samuel
in Accelerator cards, Containers, Data replication
2025
Over the last few years, an increasing number of sites have started to offer access to GPU accelerator cards, but in many places they remain underutilised. The experiment collaborations are gradually increasing the fraction of their code that can exploit GPUs, driven in many cases by the development of specific reconstruction algorithms to exploit the HLT farms when data is not being taken. However, there is no widespread usage of GPUs on the Grid and no mechanism, yet, to pledge GPU resources. Whilst the experiments gradually make progress porting their production code, and external projects such as Celeritas and AdePT tackle key common tasks such as the acceleration of E/M calorimeter simulation as a plug-in for GEANT4, there is no easy way for smaller groups or individual developers to develop GPU usage in a way that is easily transferred to the Grid environment. Currently, a user typically develops code on a local GPU in an interactive manner, but there is significant overhead in subsequently containerising this work and moving it to the Grid environment. Indeed, many user jobs are not big enough to benefit from this last step, and many sites must then maintain GPUs that are not integrated with the Grid infrastructure. We have developed a proof-of-principle solution to enable interactive user access to Grid GPUs, allowing the initial development to take place on-Grid. This ensures the development and production environments are identical and enables sites to move more GPUs to the Grid. An interactive development environment has been implemented with interactive HTCondor jobs and Apptainer containers. GPUs are split into MIG instances to allow simultaneous multi-user utilisation. Users can install packages on the fly, giving them control over package versions, as well as use what is available on CVMFS. Once development is done, the sandbox container can be made immutable and submitted either to the local batch-style GPU queue or to the rest of the GPUs available on the Grid. The nature of interactive development meant many hurdles had to be overcome, such as user authentication, security considerations, data replication to other sites, and management tools to allow users to keep track of their environments and jobs.
Conference Proceeding
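For readers unfamiliar with HTCondor submit descriptions, the sketch below composes a minimal GPU job description of the general kind this work relies on. The resource values, script name and the idea of shipping a user-built Apptainer sandbox image are placeholders, not the authors' configuration; interactive sessions are normally started with `condor_submit -interactive`.

```python
# Purely illustrative: write a minimal HTCondor submit description for a GPU job.
# All values below are placeholders; real sites and the paper's MIG/Apptainer
# setup will differ.

SUBMIT_TEMPLATE = """\
universe       = vanilla
executable     = run_dev_session.sh
request_gpus   = 1
request_cpus   = 4
request_memory = 8 GB
transfer_input_files = dev-sandbox.sif
queue
"""

def write_submit_file(path="gpu-dev.sub"):
    """Write the submit description to disk so it can be passed to condor_submit."""
    with open(path, "w") as fh:
        fh.write(SUBMIT_TEMPLATE)
    return path

if __name__ == "__main__":
    print("wrote", write_submit_file())
```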
Small Telescopes: Detectability and the Evaluation of Replication Results
2015
This article introduces a new approach for evaluating replication results. It combines effect-size estimation with hypothesis testing, assessing the extent to which the replication results are consistent with an effect size big enough to have been detectable in the original study. The approach is demonstrated by examining replications of three well-known findings. Its benefits include the following: (a) differentiating "unsuccessful" replication attempts (i.e., studies yielding p > .05) that are too noisy from those that actively indicate the effect is undetectably different from zero, (b) "protecting" true findings from underpowered replications, and (c) arriving at intuitively compelling inferences in general and for the revisited replications in particular.
Journal Article
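A worked example of the "small telescopes" benchmark can be sketched with standard power-analysis tools: compute the effect size the original study would have detected with 33% power, then test whether the replication estimate falls significantly below it. The sample sizes below are invented, and the standard error of Cohen's d uses the usual large-sample approximation rather than the article's exact procedure.

```python
# Sketch of the small-telescopes benchmark under invented sample sizes.

from math import sqrt
from scipy.stats import norm
from statsmodels.stats.power import TTestIndPower

def d_33(n_per_cell, alpha=0.05):
    """Effect size the ORIGINAL study would have detected with only 33% power."""
    return TTestIndPower().solve_power(effect_size=None, nobs1=n_per_cell,
                                       alpha=alpha, power=0.33, ratio=1.0)

def small_telescope_test(d_rep, n1_rep, n2_rep, n_orig_per_cell):
    """One-sided test of whether the replication effect is smaller than d_33%."""
    benchmark = d_33(n_orig_per_cell)
    # Large-sample standard error of Cohen's d for two independent groups
    se = sqrt((n1_rep + n2_rep) / (n1_rep * n2_rep)
              + d_rep ** 2 / (2 * (n1_rep + n2_rep)))
    z = (d_rep - benchmark) / se
    return benchmark, norm.cdf(z)     # small p => effect undetectably small

if __name__ == "__main__":
    benchmark, p = small_telescope_test(d_rep=0.10, n1_rep=200, n2_rep=200,
                                        n_orig_per_cell=20)
    print(f"d_33% = {benchmark:.2f}, p(replication < d_33%) = {p:.4f}")
```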
A Two-Level Fuzzy Value-Based Replica Replacement Algorithm in Data Grids
2016
One of the challenges of a data grid is to access widely distributed data quickly and efficiently while providing maximum data availability with minimum latency. Data replication is an efficient way to address this challenge: by replicating and storing replicas it makes the same data accessible in different locations of the data grid and can shorten the time needed to fetch files. However, as the number and storage size of grid sites are limited, an optimized and effective replacement algorithm is needed to improve the efficiency of replication. In this paper, the authors propose a novel two-level replacement algorithm which uses a Fuzzy Replica Preserving Value Evaluator System (FRPVES) for evaluating the value of each replica. The algorithm was tested using a grid simulator, OptorSim, developed by the European DataGrid project. Results from the simulation show that the proposed algorithm performs better than other algorithms in terms of job execution time, total number of replications and effective network usage.
Journal Article
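As a crude stand-in for the replica-valuation idea above, the sketch below scores replicas from access frequency and recency with simple ramp memberships and evicts the lowest-valued one. The membership functions and the averaging rule are invented; the paper's FRPVES uses a full fuzzy rule base.

```python
# Toy replica-value scoring and eviction; not the paper's FRPVES.

def ramp_up(x, lo, hi):
    """Membership rising linearly from 0 at `lo` to 1 at `hi`."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def ramp_down(x, lo, hi):
    """Membership falling linearly from 1 at `lo` to 0 at `hi`."""
    return 1.0 - ramp_up(x, lo, hi)

def replica_value(access_freq, hours_since_last_access):
    """Higher value for frequently and recently accessed replicas."""
    freq_high = ramp_up(access_freq, 0, 50)                 # 'frequently accessed'
    recent_high = ramp_down(hours_since_last_access, 0, 48)  # 'recently accessed'
    return (freq_high + recent_high) / 2.0                   # toy aggregation rule

def choose_victim(replicas):
    """Pick the replica with the lowest preserving value for replacement."""
    return min(replicas, key=lambda r: replica_value(r["freq"], r["age_h"]))

if __name__ == "__main__":
    reps = [{"name": "f1", "freq": 80, "age_h": 2},
            {"name": "f2", "freq": 5,  "age_h": 40},
            {"name": "f3", "freq": 30, "age_h": 10}]
    print(choose_victim(reps)["name"])   # -> 'f2' (rarely and long-ago accessed)
```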
A Proposal to Organize and Promote Replications
by Wilson, Alistair J.; Niederle, Muriel; Coffman, Lucas C.
in Applied economics, Bibliographic citations, Citation analysis
2017
We make a two-pronged proposal to (i) strengthen the incentives for replication work and (ii) better organize and draw attention to the replications that are conducted. First we propose that top journals publish short “replication reports.” These reports could summarize novel work replicating an existing high-impact paper, or they could highlight a replication result embedded in a wider-scope published paper. Second, we suggest incentivizing replications with the currency of our profession: citations. Enforcing a norm of citing replication work alongside the original would provide incentives for replications to both authors and journals.
Journal Article