14,975 result(s) for "Transaction processing"
A survey on hybrid transactional and analytical processing
To provide applications with the ability to analyze fresh data and to eliminate the time-consuming ETL workflow, hybrid transactional and analytical processing (HTAP) systems have been developed to serve online transaction processing and online analytical processing workloads in a single system. In recent years, HTAP systems have attracted considerable interest from both academia and industry, and several new architectures and technologies have been proposed. This paper provides a comprehensive overview of these HTAP systems. We review recently published papers and technical reports in this field and broadly classify existing HTAP systems into two categories based on their data formats: monolithic and hybrid HTAP. We further classify hybrid HTAP into four sub-categories based on their storage architecture: row-oriented, column-oriented, separated, and hybrid. Based on this taxonomy, we outline each stream’s design challenges and performance issues (e.g., the contradictory format demands of monolithic HTAP). We then discuss potential solutions and their trade-offs by reviewing noteworthy research findings. Finally, we summarize emerging HTAP applications, benchmarks, future trends, and open problems.
Blockchain's Impact on Securing Online Transactions
The security of online transactions has become a critical concern for both businesses and consumers. With the increasing volume of transactions occurring online, the need for robust security measures has never been more pressing. Blockchain technology, initially developed to support cryptocurrencies like Bitcoin, has emerged as a powerful tool for enhancing the security of online transactions.
Proposed framework for enhancing integrity technique using distributed query operation
Database growth and storage problems are a major concern for both large and small enterprises nowadays, and they significantly influence the efficiency of database applications. Database archiving is one of the available management solutions. Many issues have been identified as a result of using archive databases, including the elimination of inactive data from online transaction processing (OLTP) systems, performance, and integrity management. The aim of this paper is to propose a framework for storing OLTP and archived data in a distributed database environment as part of an integrated system. Maintaining integrity between an OLTP database and an archiving database is crucial, but even more critical is the performance of the required queries. The main aspect of the proposed framework is that it not only ensures data integrity for primary and unique keys between the OLTP and archive databases, but also improves query performance by introducing parallel processing and query execution plans to maintain that integrity.
What makes Ethereum blockchain transactions be processed fast or slow? An empirical study
The Ethereum platform allows developers to implement and deploy applications called ÐApps onto the blockchain for public use through the use of smart contracts. To execute code within a smart contract, a paid transaction must be issued towards one of the functions exposed in the interface of a contract. However, such a transaction is only processed once one of the miners in the peer-to-peer network selects it, adds it to a block, and appends that block to the blockchain. This creates a delay between transaction submission and code execution. It is crucial for ÐApp developers to be able to precisely estimate when transactions will be processed, since this allows them to define and provide a certain Quality of Service (QoS) level (e.g., 95% of transactions processed within 1 minute). However, the impact that different factors have on these times has not yet been studied. Processing time estimation services are used by ÐApp developers to achieve a predefined QoS, yet these services offer minimal insight into what factors impact processing times. Considering the vast amount of data that surrounds the Ethereum blockchain, changes in processing times are hard for ÐApp developers to predict, making it difficult to maintain said QoS. In our study, we build random forest models to understand the factors that are associated with transaction processing times. We engineer several features that capture blockchain-internal factors, as well as the gas-pricing behaviors of transaction issuers. By interpreting our models, we conclude that features surrounding gas-pricing behaviors are very strongly associated with transaction processing times. Based on our empirical results, we provide ÐApp developers with concrete insights that can help them provide and maintain high levels of QoS.
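The modeling step described above can be sketched with a small random forest on synthetic data. The feature names and the assumed relationship between relative gas price and processing time below are illustrative inventions, not the paper's actual dataset or feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features: gas price relative to the recent network average,
# pending-pool size, and block gas utilization.
gas_price_ratio = rng.uniform(0.5, 2.0, n)
pending_txs = rng.integers(1_000, 50_000, n).astype(float)
block_util = rng.uniform(0.3, 1.0, n)
X = np.column_stack([gas_price_ratio, pending_txs, block_util])
# Synthetic target: paying a higher relative gas price speeds processing,
# a congested pending pool slows it, and block_util carries no signal.
y = 60.0 / gas_price_ratio + 0.001 * pending_txs + rng.normal(0, 2, n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Feature importances indicate which factors dominate processing time.
for name, imp in zip(["gas_price_ratio", "pending_txs", "block_util"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Interpreting such a fitted model (here via feature importances) is the kind of analysis that links gas-pricing behavior to processing times in the study.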
Self-Adapting CPU Scheduling for Mixed Database Workloads via Hierarchical Deep Reinforcement Learning
Modern database systems require autonomous CPU scheduling frameworks that dynamically optimize resource allocation across heterogeneous workloads while maintaining strict performance guarantees. We present a novel hierarchical deep reinforcement learning framework augmented with graph neural networks to address CPU scheduling challenges in mixed database environments comprising Online Transaction Processing (OLTP), Online Analytical Processing (OLAP), vector processing, and background maintenance workloads. Our approach introduces three key innovations: first, a symmetric two-tier control architecture where a meta-controller allocates CPU budgets across workload categories using policy gradient methods while specialized sub-controllers optimize process-level resource allocation through continuous action spaces; second, graph neural network-based dependency modeling that captures complex inter-process relationships and communication patterns while preserving inherent symmetries in database architectures; and third, meta-learning integration with curiosity-driven exploration enabling rapid adaptation to previously unseen workload patterns without extensive retraining. The framework incorporates a multi-objective reward function balancing Service Level Objective (SLO) adherence, resource efficiency, symmetric fairness metrics, and system stability. Experimental evaluation through high-fidelity digital twin simulation and production deployment demonstrates substantial performance improvements: 43.5% reduction in p99 latency violations for OLTP workloads and 27.6% improvement in overall CPU utilization, with successful scaling to 10,000 concurrent processes maintaining sub-3% scheduling overhead. This work represents a significant advancement toward truly autonomous database resource management, establishing a foundation for next-generation self-optimizing database systems with implications extending to broader orchestration challenges in cloud-native architectures.
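As a loose illustration of the meta-controller tier only, the sketch below drives a softmax allocation of a CPU budget toward a hypothetical SLO-derived target by plain gradient ascent. The target shares, learning rate, and reward are invented, and the paper's GNN-based sub-controllers, meta-learning, and RL machinery are omitted entirely:

```python
import numpy as np

classes = ["OLTP", "OLAP", "vector", "maintenance"]
theta = np.zeros(len(classes))  # per-class preference scores

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical allocation that satisfies the SLOs in this toy setting.
target = np.array([0.50, 0.30, 0.15, 0.05])

lr = 5.0
for _ in range(2000):
    alloc = softmax(theta)
    # Reward: negative squared distance to the target allocation.
    # Its gradient w.r.t. theta uses the softmax Jacobian diag(p) - p p^T.
    J = np.diag(alloc) - np.outer(alloc, alloc)
    theta += lr * (-2.0 * J @ (alloc - target))

print(dict(zip(classes, softmax(theta).round(3))))
```

In the paper the budget update is learned from rewards observed on the running system; here the "reward" is a fixed function purely so the control loop has something to converge to.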
Decision support system using data warehouse for retail system
CMK is a retail company that sells and distributes miniature vehicle items to small stores. CMK has several branches in various countries that are used for operations and warehouse storage. The company faces problems in analysing its sales to decide on a useful strategy, since it only uses an Online Transaction Processing (OLTP) database. This study implements an Online Analytical Processing (OLAP) database and a data warehouse to provide useful information for the company, using the nine-step method designed by Kimball & Ross. The data are presented in a dashboard to make it easier for CMK to analyze them. As a result, much useful information can be provided in a short and efficient time.
TB-Collect: Efficient Garbage Collection for Non-Volatile Memory Online Transaction Processing Engines
Existing databases supporting Online Transaction Processing (OLTP) workloads based on non-volatile memory (NVM) almost all use Multi-Version Concurrency Control (MVCC) protocol to ensure data consistency. MVCC allows multiple transactions to execute concurrently without lock conflicts, reducing the wait time between read and write operations, and thereby significantly increasing the throughput of NVM OLTP engines. However, it requires garbage collection (GC) to clean up the obsolete tuple versions to prevent storage overflow, which consumes additional system resources. Furthermore, existing GC approaches in NVM OLTP engines are inefficient because they are based on methods designed for dynamic random access memory (DRAM) OLTP engines, without considering the significant differences in read/write bandwidth and cache line size between NVM and DRAM. These approaches either involve excessive random NVM access (traversing tuple versions) or lead to too many additional NVM write operations, both of which degrade the performance and durability of NVM. In this paper, we propose TB-Collect, a high-performance GC approach specifically designed for NVM OLTP engines. On the one hand, TB-Collect separates tuple headers and contents, storing data in an append-only manner, which greatly reduces NVM writes. On the other hand, TB-Collect performs GC at the block level, eliminating the need to traverse tuple versions and improving the utilization of reclaimed space. We have implemented TB-Collect on DBx1000 and MySQL. Experimental results show that TB-Collect achieves 1.15 to 1.58 times the throughput of existing methods when running TPCC and YCSB workloads.
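A minimal sketch of the block-level idea, with invented structures: versions are appended to fixed-size blocks, and collection inspects whole blocks against the oldest active-transaction timestamp instead of traversing per-tuple version chains:

```python
from dataclasses import dataclass, field

BLOCK_CAPACITY = 4

@dataclass
class Block:
    versions: list = field(default_factory=list)  # (key, end_ts) pairs

class AppendOnlyStore:
    def __init__(self):
        self.blocks = [Block()]

    def append(self, key, end_ts):
        # Append-only writes: a full block is sealed and a new one started.
        if len(self.blocks[-1].versions) == BLOCK_CAPACITY:
            self.blocks.append(Block())
        self.blocks[-1].versions.append((key, end_ts))

    def collect(self, min_active_ts):
        # Reclaim any block whose every version died before the oldest
        # active transaction; the check is per block, not per version chain.
        live = [b for b in self.blocks
                if any(end_ts is None or end_ts >= min_active_ts
                       for _, end_ts in b.versions)]
        reclaimed = len(self.blocks) - len(live)
        self.blocks = live or [Block()]
        return reclaimed

store = AppendOnlyStore()
for ts in range(8):
    store.append(f"k{ts % 2}", end_ts=ts)   # every version already superseded
reclaimed = store.collect(min_active_ts=100)
print(reclaimed)  # both full blocks reclaimed at once
```

Reclaiming whole blocks in one step is what lets this style of GC skip the random NVM reads that per-version traversal would incur.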
Towards intelligent database systems using clusters of SQL transactions
Transactions are the bread-and-butter of the database management system (DBMS) industry. When you check your bank balance, pay a bill, or move money from a savings to a chequing account, transactions are involved. That transactions are self-similar (whether you pay a utility company or a credit card, it is still a ‘pay bill’ transaction) has been noted before. Somewhat surprisingly, that property remains largely unexploited, barring some notable exceptions. The research reported in this paper begins to build ‘intelligence’ into database systems by offering built-in transaction classification and clustering. The utility of such an approach is demonstrated by showing how it simplifies DBMS monitoring and troubleshooting. The well-known DBSCAN algorithm clusters online transaction processing (OLTP) transactions; this paper’s contribution is in demonstrating a robust server-side feature extraction approach, rather than the previously suggested and error-prone log-mining approach. It is shown how DBSCAN with an angular cosine distance function finds better clusters than the previously tried combinations, and simplifies DBSCAN parameter tuning (a known nontrivial task). DBMS troubleshooting efficacy is demonstrated by identifying the root causes of several real-life performance problems: problematic transaction rollbacks, performance drifts, system-wide issues, CPU and memory bottlenecks, and so on. It is also shown that the cluster count remains unchanged irrespective of system load, a desirable but often overlooked property. The transaction clustering solution has been implemented inside the popular MySQL DBMS, although most modern relational database systems can benefit from the ideas described herein.
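The clustering combination named above can be sketched as follows; the per-transaction feature vectors are synthetic stand-ins for the paper's server-side features:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two synthetic transaction types with different feature *directions*
# (hypothetical features: reads, writes, commits per transaction).
pay_bill = rng.normal([10.0, 2.0, 1.0], 0.3, size=(50, 3))
balance_check = rng.normal([1.0, 0.1, 1.0], 0.05, size=(50, 3))
X = np.vstack([pay_bill, balance_check])

# Cosine distance clusters by the shape of the feature vector, so scaled
# variants of the same transaction type fall into the same cluster.
labels = DBSCAN(eps=0.02, metric="cosine", min_samples=5).fit_predict(X)
print(sorted(set(labels)))  # two clusters expected, one per type
```

Because cosine distance ignores vector magnitude, a transaction executed under light or heavy load (fewer or more operations in the same proportions) still lands in its type's cluster, which is one reason the cluster count can stay stable as load varies.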
FIR: Achieving High Throughput and Fast Recovery in a Non-Volatile Memory Online Transaction Processing Engine
Existing databases supporting Online Transaction Processing (OLTP) workloads based on non-volatile memory (NVM) have not fully leveraged hardware characteristics, resulting in an imbalance between throughput and recovery performance. In this paper, we identify why existing designs fail to achieve both: placing indexes on NVM results in numerous random writes and write amplification for index updates, leading to a decrease in system performance, while placing indexes on dynamic random access memory (DRAM) results in much time spent rebuilding indexes during recovery. To address this issue, we propose FIR, an NVM OLTP engine with fast rebuilding of DRAM indexes, achieving instant system recovery while maintaining high throughput. First, we design an index checkpoint strategy; during recovery, the indexes are quickly rebuilt by a bottom-up algorithm using the index checkpoints. Then, to achieve instant recovery of the entire engine after rebuilding the indexes, we optimize the existing log-free design by leveraging time-ordered storage, which significantly reduces the number of NVM writes. We also implement garbage collection based on data redistribution, enhancing system availability. The experimental results demonstrate that FIR achieves 98% of the performance of a state-of-the-art OLTP engine when running TPCC and YCSB, and its recovery speed is 43.6×–54.5× faster, achieving near-instantaneous recovery.
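The bottom-up rebuild mentioned above can be illustrated as follows; the node layout, fanout, and checkpoint format are invented, and a real B+-tree would store separator keys and child pointers rather than nested lists:

```python
FANOUT = 4

def build_bottom_up(sorted_keys):
    """Rebuild a B+-tree-like index level by level from a sorted checkpoint,
    instead of re-inserting keys one at a time from the top."""
    # Leaf level: pack sorted keys into nodes of at most FANOUT entries.
    level = [sorted_keys[i:i + FANOUT]
             for i in range(0, len(sorted_keys), FANOUT)]
    tree_levels = [level]
    # Each upper level groups child nodes (a real tree would keep the
    # first key of each child as a separator alongside the pointer).
    while len(level) > 1:
        level = [level[i:i + FANOUT] for i in range(0, len(level), FANOUT)]
        tree_levels.append(level)
    return tree_levels  # tree_levels[-1][0] is the root

levels = build_bottom_up(list(range(64)))
print(len(levels))  # 64 keys at fanout 4: leaves, one inner level, root
```

Because the checkpoint is already sorted, each level is built with a single sequential pass, which is what makes the rebuild fast relative to top-down insertion.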
Credit Card Fraud Detection Using Hidden Markov Model
Due to rapid advances in electronic commerce technology, the use of credit cards has dramatically increased. As credit cards become the most popular mode of payment for both online and regular purchases, cases of fraud associated with them are also rising. In this paper, we model the sequence of operations in credit card transaction processing using a hidden Markov model (HMM) and show how it can be used for the detection of fraud. An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered fraudulent. At the same time, we try to ensure that genuine transactions are not rejected. We present detailed experimental results to show the effectiveness of our approach and compare it with other techniques available in the literature.
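A toy version of this acceptance test, with invented (not learned) HMM parameters: the forward algorithm scores an incoming amount sequence against a model of the cardholder's usual behavior, and low-likelihood sequences are flagged:

```python
import numpy as np

# Hidden states: 0 = routine spending, 1 = occasional larger purchases.
A = np.array([[0.9, 0.1],      # state transition probabilities
              [0.3, 0.7]])
# Observations: 0 = low, 1 = medium, 2 = high transaction amount.
B = np.array([[0.70, 0.28, 0.02],
              [0.20, 0.70, 0.10]])
pi = np.array([0.8, 0.2])      # initial state distribution

def sequence_likelihood(obs):
    """Forward algorithm: P(obs) under the HMM."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

normal = [0, 0, 1, 0]        # mostly low amounts, one medium
suspicious = [2, 2, 2, 2]    # sudden run of high amounts

threshold = 1e-3             # illustrative acceptance threshold
for name, seq in [("normal", normal), ("suspicious", suspicious)]:
    p = sequence_likelihood(seq)
    print(name, f"{p:.6f}", "accept" if p > threshold else "flag")
```

In the paper the transition, emission, and initial probabilities are trained on the cardholder's own history, so a sequence that departs from that history scores a low forward probability and is treated as potentially fraudulent.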