1,662 result(s) for "Concurrency control"
IoT transaction processing through cooperative concurrency control on fog–cloud computing environment
In cloud–fog environments, the upstream communication channel from clients to the cloud server need not be used for every transaction if conventional concurrency control protocols are adapted. This paper introduces a new variant of the optimistic concurrency control protocol. By deploying an augmented partial validation protocol, read-only IoT transactions can be processed locally at the fog node; only update transactions are sent to the cloud for final validation. Moreover, update transactions undergo partial validation at the fog node, which makes them more likely to commit at the cloud. This protocol reduces communication and computation at the cloud as much as possible while supporting the scalability of the transactional services needed by applications running in such environments. Through numerical studies, the partial validation procedure was assessed under three concurrency control protocols, namely AOCCRBSC, AOCCRB, and STUBcast. A set of intensive experiments compared the three protocols with and without partial validation at the fog node. The results show a reduction in miss rate, restart rate, and communication delay for all three, with the proposed mechanism reducing communication delay significantly. The proposed mechanism thus enables low-latency fog computing services for delay-sensitive IoT applications.
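The division of labor described above (read-only transactions validated entirely at the fog node, update transactions partially pre-validated before being forwarded) can be sketched as follows. This is an illustrative sketch, not the paper's actual augmented partial validation protocol; the class and method names are invented.

```python
# Illustrative sketch of fog-side partial validation for IoT transactions.
# Names (FogNode, Cloud, Txn) are invented; the real protocol is more involved.

class Txn:
    def __init__(self, reads, writes=None):
        self.reads = reads            # {key: version observed at read time}
        self.writes = writes or {}    # {key: new value}; empty => read-only

class Cloud:
    def __init__(self):
        self.store = {}               # key -> (version, value)

    def final_validate(self, txn):
        # Final validation: every read must still be current.
        for key, ver in txn.reads.items():
            if self.store.get(key, (0, None))[0] != ver:
                return False
        for key, val in txn.writes.items():
            ver = self.store.get(key, (0, None))[0]
            self.store[key] = (ver + 1, val)
        return True

class FogNode:
    def __init__(self, cloud):
        self.cloud = cloud
        self.cache = {}               # locally cached key -> version

    def process(self, txn):
        # Partial validation against the fog node's cached versions.
        for key, ver in txn.reads.items():
            if self.cache.get(key, ver) != ver:
                return "abort"        # stale read caught locally: no cloud round-trip
        if not txn.writes:
            return "commit-local"     # read-only: commits at the fog node
        # Update transaction: forward to the cloud for final validation.
        return "commit" if self.cloud.final_validate(txn) else "abort"
```

Read-only transactions never leave the fog node, and update transactions that fail partial validation are aborted before consuming any upstream bandwidth, which is where the reported reductions in communication delay come from.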
Gria: an efficient deterministic concurrency control protocol
Deterministic databases reduce coordination costs in replication. This property has fostered significant interest in the design of efficient deterministic concurrency control protocols. However, Aria, the state-of-the-art deterministic concurrency control protocol, has three issues. First, it is impractical to configure a suitable batch size when the read-write set is unknown. Second, Aria running in low-concurrency scenarios, e.g., a single-threaded scenario, suffers from the same conflicts as in high-concurrency scenarios. Third, its single-version schema brings write-after-write conflicts. To address these issues, we propose Gria, an efficient deterministic concurrency control protocol with the following properties. First, the batch size of Gria is auto-scaling. Second, Gria's conflict probability in low-concurrency scenarios is lower than in high-concurrency scenarios. Third, Gria has no write-after-write conflicts, thanks to its multi-version structure. To further reduce conflicts, we propose two optimizations: a reordering mechanism and a rechecking strategy. Evaluation results on two popular benchmarks show that Gria outperforms Aria by 13x.
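The batch-oriented execution that Aria-style protocols rely on can be sketched in a few lines (this is a simplified Aria-style reservation scheme, not Gria itself; conflict rules here are deliberately coarse). Every replica runs the same deterministic code over the same batch, so no coordination is needed to agree on the outcome: transactions reserve their writes in batch order, and a transaction is deferred to the next batch if any key it touches was reserved by an earlier transaction.

```python
# Simplified Aria-style deterministic batch execution (not Gria itself).
# Each batch entry is (reads, writes), both sets of keys.

def run_batch(batch, store):
    reservations = {}                 # key -> batch position of first writer
    # Phase 1: reserve writes in deterministic (batch) order.
    for prio, (reads, writes) in enumerate(batch):
        for key in writes:
            reservations.setdefault(key, prio)
    committed, deferred = [], []
    # Phase 2: commit iff no key was reserved by an earlier transaction.
    for prio, (reads, writes) in enumerate(batch):
        conflicted = any(reservations.get(k, prio) < prio for k in reads | writes)
        if conflicted:
            deferred.append(prio)     # rerun in the next batch
        else:
            committed.append(prio)
            for key in writes:
                store[key] = prio     # apply writes deterministically
    return committed, deferred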
NeuChain+: A Sharding Permissioned Blockchain System with Ordering-Free Consensus
Permissioned blockchains are widely used in scenarios such as digital assets, supply chains, government services, and Web 3.0, but their development is hindered by low throughput and scalability. Blockchain sharding addresses these issues by dividing the ledger into disjoint shards that can be processed concurrently. However, since cross-shard transactions require the collaboration of multiple shards, blockchain sharding needs a commit protocol to ensure the atomicity of executing these transactions, significantly impacting system performance. To this end, by exploiting the characteristics of deterministic ordering, we propose a cross-shard transaction processing protocol called cross-reserve, which eliminates this costly cross-shard coordination while providing the same consistency and atomicity guarantee. Based on the ordering-free execute–validate (EV) architecture, we implemented a blockchain prototype called NeuChain+, which further reduces the cross-shard transaction processing overhead using the pipelined read sets transmission. Experimental results show that NeuChain+ is scalable and outperforms state-of-the-art blockchain systems with 1.7–75.3× throughput under the SmallBank workload.
Hybrid concurrency control protocol for data sharing among heterogeneous blockchains
With the development of information technology and cloud computing, data sharing has become an important part of scientific research. In traditional data sharing, data is stored on a third-party storage platform, which causes the owner to lose control of it. As a result, third parties may intentionally leak or tamper with the data, and the private information it contains can cause further harm. Furthermore, data is frequently maintained on multiple storage platforms, which makes it difficult to enlist multiple parties in data sharing while maintaining consistency. In this work, we propose a new architecture that applies blockchains to data sharing and achieves efficient and reliable data sharing among heterogeneous blockchains. We design a new data-sharing transaction mechanism on top of this architecture to protect both the raw data and the processing pipeline. We also design and implement a hybrid concurrency control protocol to overcome the large performance differences among blockchains in our system and to improve the success rate of data sharing transactions. Taking Ethereum and Hyperledger Fabric as examples, we conducted cross-blockchain data sharing experiments. The results show that our system achieves data sharing across heterogeneous blockchains with reasonable performance and high scalability.
TB-Collect: Efficient Garbage Collection for Non-Volatile Memory Online Transaction Processing Engines
Existing databases supporting Online Transaction Processing (OLTP) workloads based on non-volatile memory (NVM) almost all use Multi-Version Concurrency Control (MVCC) protocol to ensure data consistency. MVCC allows multiple transactions to execute concurrently without lock conflicts, reducing the wait time between read and write operations, and thereby significantly increasing the throughput of NVM OLTP engines. However, it requires garbage collection (GC) to clean up the obsolete tuple versions to prevent storage overflow, which consumes additional system resources. Furthermore, existing GC approaches in NVM OLTP engines are inefficient because they are based on methods designed for dynamic random access memory (DRAM) OLTP engines, without considering the significant differences in read/write bandwidth and cache line size between NVM and DRAM. These approaches either involve excessive random NVM access (traversing tuple versions) or lead to too many additional NVM write operations, both of which degrade the performance and durability of NVM. In this paper, we propose TB-Collect, a high-performance GC approach specifically designed for NVM OLTP engines. On the one hand, TB-Collect separates tuple headers and contents, storing data in an append-only manner, which greatly reduces NVM writes. On the other hand, TB-Collect performs GC at the block level, eliminating the need to traverse tuple versions and improving the utilization of reclaimed space. We have implemented TB-Collect on DBx1000 and MySQL. Experimental results show that TB-Collect achieves 1.15 to 1.58 times the throughput of existing methods when running TPCC and YCSB workloads.
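The block-level reclamation idea behind TB-Collect can be sketched as follows (an illustrative model, not the paper's implementation; class names, the capacity, and the live-ratio threshold are invented). Versions are appended to fixed-size blocks; once the share of live versions in a block falls below a threshold, the whole block is reclaimed after relocating its few surviving versions, with no per-tuple version-chain traversal.

```python
# Sketch of block-level GC in the spirit of TB-Collect (names/thresholds invented).

class Block:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.slots = []               # appended (key, value) versions
        self.dead = 0                 # versions superseded by newer writes

    def live_ratio(self):
        return 1.0 - self.dead / len(self.slots) if self.slots else 1.0

class Engine:
    def __init__(self, gc_threshold=0.5):
        self.blocks = [Block()]
        self.latest = {}              # key -> (block, slot index) of newest version
        self.gc_threshold = gc_threshold

    def write(self, key, value):
        # Append-only write; mark the previous version dead in its block.
        if key in self.latest:
            self.latest[key][0].dead += 1
        block = self.blocks[-1]
        if len(block.slots) == block.capacity:
            block = Block()
            self.blocks.append(block)
        block.slots.append((key, value))
        self.latest[key] = (block, len(block.slots) - 1)

    def collect(self):
        # Block-level GC: reclaim whole blocks whose live ratio is too low,
        # relocating any still-live versions into the current tail block.
        tail = self.blocks[-1]
        victims = [b for b in self.blocks[:-1] if b.live_ratio() < self.gc_threshold]
        self.blocks = [b for b in self.blocks[:-1] if b not in victims] + [tail]
        for block in victims:
            for i, (key, value) in enumerate(block.slots):
                if self.latest.get(key) == (block, i):
                    self.write(key, value)   # relocate the live version
        return len(victims)                  # blocks reclaimed
```

Reclaiming at block granularity trades a small amount of relocation work for sequential, coarse-grained frees, which matches NVM's preference for fewer, larger writes over scattered random ones.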
A Lightweight DAG-RAFT Hybrid Consensus Framework for Blockchain-Enabled Digital Enterprise Management in Embedded Systems
This manuscript addresses a core challenge in embedded digital enterprise management: reconciling operational efficiency with system security, given the heavy overhead of blockchain on resource-constrained devices. A lightweight Directed Acyclic Graph (DAG)–Reliable, Replicated, Redundant, and Fault-Tolerant (RAFT) hybrid is presented, in which a DAG layer asynchronously orders concurrent transactions through topological relations and predecessor hashes, feeding batched updates to an enhanced RAFT with adaptive heartbeats and dynamic leader election. Version–vector conflict detection and idempotent writes are used to improve log replication and state synchronization. On a 200-node testbed, peak memory and CPU utilization reach 25.9 MB ± 1.8 MB and 20.8% ± 1.6%, respectively; under high load, the system attains 896.5 effective transactions per second (ETPS) with an end-to-end confirmation latency of 203.5 ± 11.2 ms. These results indicate a practical balance of high concurrency, low resource consumption, strong consistency, and robust security for embedded digital enterprise scenarios.
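The version–vector conflict detection mentioned above can be sketched concretely (illustrative only; how the framework assigns and merges vectors may differ). Each node keeps a counter per replica; an update is safe to apply when one vector dominates the other, and a genuine concurrent conflict exists when neither does.

```python
# Sketch of version-vector comparison for conflict detection during
# log replication and state synchronization (illustrative).

def compare(a, b):
    """Return 'before', 'after', 'equal', or 'concurrent' for vectors a, b."""
    nodes = set(a) | set(b)
    a_le_b = all(a.get(n, 0) <= b.get(n, 0) for n in nodes)
    b_le_a = all(b.get(n, 0) <= a.get(n, 0) for n in nodes)
    if a_le_b and b_le_a:
        return "equal"
    if a_le_b:
        return "before"      # a happened-before b: apply b's update
    if b_le_a:
        return "after"
    return "concurrent"      # neither dominates: a real conflict to resolve
```

Combined with idempotent writes, a replica can safely re-apply any update whose vector is not "concurrent" with its local state, which is what makes the batched replication step retry-friendly.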
RCBench: an RDMA-enabled transaction framework for analyzing concurrency control algorithms
Distributed transaction processing over TCP/IP networks suffers from weak transaction scalability: performance drops significantly as the number of data nodes involved per transaction increases. Although quite a few works over high-performance RDMA-capable networks have been proposed, they mainly focus on accelerating distributed transaction processing rather than solving the weak transaction scalability problem. In this paper, we propose RCBench, an RDMA-enabled transaction framework that serves as a unified evaluation tool for assessing the transaction scalability of various concurrency control algorithms. The usability and advancement of RCBench come primarily from its proposed concurrency control primitives, which facilitate convenient implementation of RDMA-enabled concurrency control algorithms. Various optimization principles are proposed to ensure that concurrency control algorithms in RCBench fully benefit from the advantages of RDMA-capable networks. We conduct extensive experiments to evaluate the scalability of mainstream concurrency control algorithms. The results show that by exploiting the capabilities of RDMA, concurrency control algorithms in RCBench can obtain a 42x performance improvement, and transaction scalability can be achieved in RCBench.
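To give a flavor of what such a concurrency control primitive looks like, the sketch below simulates lock acquisition via a one-sided compare-and-swap on remote memory. This is a pure-Python simulation with invented names; RCBench's actual primitives wrap real RDMA verbs (one-sided atomics on registered memory) that bypass the remote CPU entirely.

```python
# Pure-Python simulation of an RDMA-style one-sided lock primitive:
# the client "CASes" a lock word in the server's memory without involving
# the server's CPU. Names are invented for illustration.

class RemoteMemory:
    def __init__(self, size):
        self.words = [0] * size       # lock words: 0 = free, else owner id

    def cas(self, addr, expected, new):
        # One-sided atomic compare-and-swap; returns the old value.
        old = self.words[addr]
        if old == expected:
            self.words[addr] = new
        return old

def try_lock(mem, addr, owner_id):
    return mem.cas(addr, 0, owner_id) == 0

def unlock(mem, addr, owner_id):
    mem.cas(addr, owner_id, 0)
```

Because lock acquisition needs no remote CPU involvement, adding more data nodes per transaction adds network round-trips but no server-side queuing, which is the property that helps transaction scalability.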
C5: cloned concurrency control that always keeps up
Asynchronously replicated primary-backup databases are commonly deployed to improve availability and offload read-only transactions. To both apply replicated writes from the primary and serve read-only transactions, the backups implement a cloned concurrency control protocol. The protocol ensures read-only transactions always return a snapshot of state that previously existed on the primary. This compels the backup to exactly copy the commit order resulting from the primary’s concurrency control. Existing cloned concurrency control protocols guarantee this by limiting the backup’s parallelism; as a result, the primary’s concurrency control executes some workloads with more parallelism than these protocols. In this paper, we prove that this parallelism gap leads to unbounded replication lag, where writes can take arbitrarily long to replicate to the backup, and which has led to catastrophic failures in production systems. We then design C5, the first cloned concurrency control protocol to provide bounded replication lag, and implement two versions of it. Our evaluation in MyRocks, a widely deployed database, demonstrates that C5 provides bounded replication lag. Our evaluation in Cicada, a recent in-memory database, demonstrates that C5 keeps up with even the fastest of primaries.
An optimized deterministic concurrency control approach for geo-distributed transaction processing on permissioned blockchains
Concurrency control is crucial for ensuring consistency and isolation in distributed transaction processing. Traditional concurrency control algorithms, such as locking-based protocols, usually suffer from performance degradation due to heavy transaction coordination overheads. To overcome this problem, deterministic concurrency control approaches are widely adopted, since they avoid coordination overhead by eliminating uncertainty: every node receives identical transaction batches, orders them according to specific rules, and executes them concurrently in a determined correct sequence. However, some transactions might have to be aborted during concurrent execution, wasting expensive network bandwidth and computing resources. We find that this problem significantly lowers system performance, especially in geographically distributed settings where network communication is a bottleneck. To exploit deterministic concurrency control efficiently in geo-distributed scenarios, this paper studies GB-DCC, an optimized deterministic concurrency control approach for permissioned blockchains, a new type of distributed transaction processing system. Three general optimization strategies are proposed: deterministic pre-execution, mini-batch partitioning, and deterministic re-execution. Experiments show that under the YCSB-A benchmark workload, these strategies reduce the distributed system’s bandwidth consumption by 17.8% and noticeably improve performance.
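One of the named strategies, mini-batch partitioning, can be illustrated with a greedy sketch (illustrative only; GB-DCC's actual partitioning rules may differ). Each transaction is placed in the earliest mini-batch where it conflicts with nothing already assigned there, so every mini-batch can run its members concurrently without deterministic aborts, and mini-batches execute one after another.

```python
# Greedy mini-batch partitioning sketch (not GB-DCC's exact algorithm).
# Each transaction is modeled as the set of keys it reads or writes.

def partition(batch):
    mini_batches = []                 # list of (txn list, touched-key set)
    for txn in batch:
        for txns, touched in mini_batches:
            if not (txn & touched):   # no key overlap: safe to run together
                txns.append(txn)
                touched |= txn
                break
        else:                         # conflicts everywhere: open a new mini-batch
            mini_batches.append(([txn], set(txn)))
    return [txns for txns, _ in mini_batches]
```

Since every node applies the same greedy rule to the same batch, all replicas derive identical mini-batches without exchanging messages, which is what keeps the strategy deterministic.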
Adaptive conflict resolution for IoT transactions: A reinforcement learning-based hybrid validation protocol
This paper introduces a novel Reinforcement Learning-Based Hybrid Validation Protocol (RL-CC) that revolutionizes conflict resolution for time-sensitive IoT transactions through adaptive edge-cloud coordination. Efficient transaction management in sensor-based systems is crucial for maintaining data integrity and ensuring timely execution within the constraints of temporal validity. Our key innovation lies in dynamically learning optimal scheduling policies that minimize transaction aborts while maximizing throughput under varying workload conditions. The protocol consists of two validation phases: an edge validation phase, where transactions undergo preliminary conflict detection and prioritization based on their temporal constraints, and a cloud validation phase, where a final conflict resolution mechanism ensures transactional correctness on a global scale. The RL-based mechanism continuously adapts decision-making by learning from system states, prioritizing transactions, and dynamically resolving conflicts using a reward function that accounts for key performance parameters, including the number of conflicting transactions, cost of aborting transactions, temporal validity constraints, and system resource utilization. Experimental results demonstrate that our RL-CC protocol achieves a 90% reduction in transaction abort rates (5% vs. 45% for 2PL), 3x higher throughput (300 TPS vs. 100 TPS), and 70% lower latency compared to traditional concurrency control methods. The proposed RL-CC protocol significantly reduces transaction abort rates, enhances concurrency management, and improves the efficiency of sensor data processing by ensuring that transactions are executed within their temporal validity window. The results suggest that the RL-based approach offers a scalable and adaptive solution for sensor-based applications requiring high-concurrency transaction processing, such as Internet of Things (IoT) networks, real-time monitoring systems, and cyber-physical infrastructures.
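A reward function over the four parameters listed above can be sketched as a weighted combination (the weights and normalization here are invented for illustration; the paper combines the same signals, but its exact formulation may differ).

```python
# Illustrative reward function for an RL-based transaction scheduler, in
# the spirit of RL-CC. Weights and scaling are invented assumptions.

def reward(n_conflicts, abort_cost, slack, utilization,
           w_conflict=1.0, w_abort=2.0, w_slack=1.5, w_util=0.5):
    """Higher is better.
    n_conflicts  -- conflicting transactions detected this step
    abort_cost   -- cumulative cost of transactions aborted this step
    slack        -- fraction (0..1) of the temporal validity window remaining
    utilization  -- fraction (0..1) of system resources in use
    """
    return (w_slack * slack            # reward committing while data is fresh
            + w_util * utilization     # reward keeping resources busy
            - w_conflict * n_conflicts # penalize contention
            - w_abort * abort_cost)    # penalize wasted (aborted) work
```

An agent trained against such a signal prefers schedules that commit transactions early in their validity window and avoids ones that trigger cascading aborts, which is the behavior the reported abort-rate and latency improvements reflect.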