3,047 result(s) for "data storage optimization"
Blockchain-Powered Healthcare Systems: Enhancing Scalability and Security with Hybrid Deep Learning
The rapid advancements in technology have paved the way for innovative solutions in the healthcare domain, aiming to improve scalability and security while enhancing patient care. This abstract introduces a cutting-edge approach, leveraging blockchain technology and hybrid deep learning techniques to revolutionize healthcare systems. Blockchain technology provides a decentralized and transparent framework, enabling secure data storage, sharing, and access control. By integrating blockchain into healthcare systems, data integrity, privacy, and interoperability can be ensured while eliminating the reliance on centralized authorities. In conjunction with blockchain, hybrid deep learning techniques offer powerful capabilities for data analysis and decision making in healthcare. Combining the strengths of deep learning algorithms with traditional machine learning approaches, hybrid deep learning enables accurate and efficient processing of complex healthcare data, including medical records, images, and sensor data. This research proposes a permissions-based blockchain framework for scalable and secure healthcare systems, integrating hybrid deep learning models. The framework ensures that only authorized entities can access and modify sensitive health information, preserving patient privacy while facilitating seamless data sharing and collaboration among healthcare providers. Additionally, the hybrid deep learning models enable real-time analysis of large-scale healthcare data, facilitating timely diagnosis, treatment recommendations, and disease prediction. The integration of blockchain and hybrid deep learning presents numerous benefits, including enhanced scalability, improved security, interoperability, and informed decision making in healthcare systems. However, challenges such as computational complexity, regulatory compliance, and ethical considerations need to be addressed for successful implementation. 
By harnessing the potential of blockchain and hybrid deep learning, healthcare systems can overcome traditional limitations, promoting efficient and secure data management, personalized patient care, and advancements in medical research. The proposed framework lays the foundation for a future healthcare ecosystem that prioritizes scalability, security, and improved patient outcomes.
Dynamic link utilization empowered by reinforcement learning for adaptive storage allocation in MANET
In modern wireless networks, mobile nodes often struggle to hold a sufficient number of data packets due to limited storage capacity within each cluster, which adversely impacts network performance by compromising data quality during transmissions. The ensuing delays, caused by data packets awaiting storage allocation, result in reduced throughput and increased end-to-end latency. To effectively address these issues, we present a Dynamic Link Utilization with Reinforcement Learning (DLU-RL) method, designed to optimize storage allocation for communication data packets and significantly enhance network performance. Instead of static allocation, DLU-RL employs dynamic strategies guided by reinforcement learning algorithms. This method not only tackles storage constraints but also proactively adapts to varying network conditions and traffic patterns. In our approach, we first perform a comprehensive analysis of storage capacities across all nodes, establishing a baseline for dynamic resource allocation. The DLU-RL framework then swiftly assigns storage space based on real-time demand and priority, optimizing storage utilization on the fly. Implementing DLU-RL achieves substantial enhancements in throughput and concurrent minimization of end-to-end delays. This research not only contributes efficient storage allocation techniques but also pioneers the integration of reinforcement learning for wireless communication network performance optimization. The proposed framework signifies a paradigm shift in storage management, offering adaptability, efficiency, and real-time optimization to tackle the evolving challenges of wireless communication.
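The abstract above does not give DLU-RL's exact state, action, or reward definitions, but the core idea of learning a storage-grant policy can be illustrated with plain tabular Q-learning on a toy buffer model. Everything below (occupancy levels, grant sizes, the throughput-minus-delay reward) is an illustrative assumption, not the paper's model:

```python
import random

random.seed(1)

# Toy MDP standing in for the DLU-RL idea: a node decides how many buffer
# slots to grant an incoming flow. States are occupancy levels, actions are
# grant sizes, and the reward trades throughput against queueing delay.
LEVELS, ACTIONS = 5, 3            # occupancy 0..4, grant 0..2 slots

def step(state, action):
    occupancy = min(LEVELS - 1, state + action)
    arrivals = random.choice([0, 1])            # background traffic
    next_state = max(0, min(LEVELS - 1, occupancy + arrivals - 1))
    reward = action - 0.5 * next_state          # throughput minus delay penalty
    return next_state, reward

Q = [[0.0] * ACTIONS for _ in range(LEVELS)]
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(20_000):
    if random.random() < eps:
        a = random.randrange(ACTIONS)                            # explore
    else:
        a = max(range(ACTIONS), key=lambda x: Q[state][x])       # exploit
    nxt, r = step(state, a)
    # Standard Q-learning temporal-difference update
    Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
    state = nxt

policy = [max(range(ACTIONS), key=lambda a: Q[s][a]) for s in range(LEVELS)]
```

The learned `policy` maps each occupancy level to a grant size; a real MANET deployment would condition the state on link quality and traffic priority as the abstract describes.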
Effective Strategies for Automatic Analysis of Acoustic Signals in Long-Term Monitoring
Hydrophones used in Passive Acoustic Monitoring generate vast amounts of data, with the storage requirements for raw signals dependent on the sampling frequency, which limits the range of frequencies that can be recorded. Since the installation of these observatories is costly, it is crucial to maximize the utility of high-sampling-rate recordings to expand the range of survey types. However, storing these large datasets for long-term trend analysis presents significant challenges. This paper proposes an approach that reduces the data storage requirements by up to 85% while preserving critical information about Power Spectral Density and Sound Pressure Level. The strategy involves generating these key metrics from spectrograms, enabling both short-term (micro) and long-term (macro) studies. A proposal for efficient data processing is presented, structured in three steps: the first focuses on generating key metrics to replace space-consuming raw signals, the second addresses the treatment of these metrics for long-term studies, and the third outlines the creation of event detectors from the processed metrics. A comprehensive overview of the essential features for analyzing acoustic signals is provided, along with considerations for the future design of marine observatories. The necessary calculations and processes are detailed, demonstrating the potential of these methods to address the current data storage and processing limitations in long-term acoustic monitoring.
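The first step the paper describes, replacing space-consuming raw signals with Power Spectral Density and Sound Pressure Level summaries, can be sketched as follows. The window length and the unit-reference pressure are illustrative choices, not the authors' exact parameters:

```python
import numpy as np
from scipy import signal

def spectrogram_metrics(samples, fs, nperseg=4096):
    """Reduce a raw hydrophone signal to PSD and broadband SPL summaries.

    Keeping these metrics instead of the raw waveform is the kind of
    storage reduction described above; nperseg and p_ref are
    illustrative, not the paper's values.
    """
    freqs, psd = signal.welch(samples, fs=fs, nperseg=nperseg)  # Welch PSD
    p_ref = 1.0                    # assumes samples are already in uPa
    rms = np.sqrt(np.mean(samples ** 2))
    spl_db = 20 * np.log10(rms / p_ref)
    return freqs, psd, spl_db

# Example: a 10-second synthetic 1 kHz tone sampled at 64 kHz
fs = 64_000
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t)
freqs, psd, spl = spectrogram_metrics(x, fs)
```

For this 640,000-sample signal, the stored metrics are a few thousand floats, which is the order of reduction that makes long-term trend archives feasible.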
Data Storage Optimization Model Based on Improved Simulated Annealing Algorithm
To address the longitudinal and horizontal penetration problem between multi-level data centers in the smart grid information transmission network, this paper proposes a data storage optimization model for smart grids based on the Hadoop architecture and an improved Simulated Annealing algorithm. Drawing on the characteristics of distributed storage in cloud computing, the smart grid data are treated as task-oriented data sets, and the smart grid information platform is flattened into a collection of multiple distributed data centers. Smart grid data collected over time were analyzed to derive the dependencies between task sets and data sets. Based on these dependencies and the actual data transmission of the power grid, a mathematical model was established and the optimal transmission correspondence between each data set and the data centers was calculated. The improved Simulated Annealing algorithm solves the longitudinal and horizontal penetration problem between multi-level data centers; when generating a new solution, the Grey Wolf algorithm provides direction toward the optimal solution. This paper integrated the existing business data and computational storage resources in the smart grid to establish a mathematical model of the affiliation between data centers and data sets. The optimal distribution of the data sets was calculated, and the optimally distributed data sets were stored on distributed physical disks. Numerical examples were used to analyze the efficiency and stability of several algorithms, verifying the improved algorithm's advantages, and its effectiveness was confirmed by simulation.
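The core optimization loop, annealing over placements of data sets onto data centers, can be sketched with a synthetic cost matrix. The paper derives its costs from actual grid traffic and steers candidate solutions with a Grey Wolf step; this sketch uses uniform random reassignment instead:

```python
import math
import random

random.seed(0)
# Synthetic cost model: cost[d][c] is the transfer cost of placing data
# set d in data center c, aggregated over the tasks that depend on d.
N_SETS, N_CENTERS = 12, 4
cost = [[random.uniform(1, 10) for _ in range(N_CENTERS)] for _ in range(N_SETS)]

def anneal(t0=10.0, t_min=1e-3, alpha=0.95, moves_per_t=50):
    """Plain simulated annealing over data-set placements."""
    assign = [random.randrange(N_CENTERS) for _ in range(N_SETS)]
    cur = sum(cost[d][c] for d, c in enumerate(assign))
    best, best_cost = list(assign), cur
    t = t0
    while t > t_min:
        for _ in range(moves_per_t):
            d = random.randrange(N_SETS)
            new_c = random.randrange(N_CENTERS)
            delta = cost[d][new_c] - cost[d][assign[d]]
            # Accept improvements always; worse moves with Boltzmann probability
            if delta < 0 or random.random() < math.exp(-delta / t):
                assign[d] = new_c
                cur += delta
                if cur < best_cost:
                    best, best_cost = list(assign), cur
        t *= alpha                 # geometric cooling schedule
    return best, best_cost

placement, total = anneal()
optimum = sum(min(row) for row in cost)   # separable costs: exact optimum
```

With a separable cost matrix like this one the exact optimum is the sum of row minima, which makes it easy to check that the annealer converges; the paper's real objective couples data sets through shared tasks and network bandwidth, so no such closed form exists there.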
Enhancing Efficiency in Transportation Data Storage for Electric Vehicles: The Synergy of Graph and Time-Series Databases
This article introduces a novel hybrid database architecture that combines graph and time-series databases to enhance the storage and management of transportation data, particularly for electric vehicles (EVs). This model addresses a critical challenge in modern mobility: handling large-scale, high-velocity, and highly interconnected datasets while maintaining query efficiency and scalability. By comparing a naive graph-only approach with our hybrid solution, we demonstrate a significant reduction in query response times for large data contexts: up to 64% faster in the XL scenario. The scientific contribution of this research lies in its practical implementation of a dual-layer storage framework that aligns with FAIR data principles and real-time mobility needs. Moreover, the hybrid model supports complex analytics, such as EV battery health monitoring, dynamic route optimization, and charging behavior analysis. These capabilities offer a multiplier effect, enabling broader applications across urban mobility systems, fleet management platforms, and energy-aware transport planning. By explicitly considering the interconnected nature of transport and energy data, this work contributes to both carbon emission reduction and smart city efficiency on a global scale.
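The dual-layer idea can be modeled in a few lines: a graph layer answers "which entities are related", and a time-series layer answers "what happened to each entity, when". The entity names, schema, and numbers below are illustrative stand-ins, not the paper's implementation:

```python
from bisect import bisect_left, bisect_right

# Graph layer: entities and their relations (EV -> charging stations).
graph = {
    "ev:1": {"charges_at": ["station:A", "station:B"]},
    "station:A": {}, "station:B": {},
}
# Time-series layer: per-entity measurements sorted by timestamp
# (here: (timestamp, kWh delivered)).
series = {
    "station:A": [(1, 7.0), (5, 3.5), (9, 6.0)],
    "station:B": [(2, 4.0), (8, 5.5)],
}

def energy_for_ev(ev, t0, t1):
    """Traverse the graph to find related stations, then range-scan each
    station's series: the graph decides *which* streams to read, the
    sorted series makes each *time-window* read logarithmic."""
    total = 0.0
    for station in graph[ev]["charges_at"]:
        ts = series[station]
        lo = bisect_left(ts, (t0,))                    # first point >= t0
        hi = bisect_right(ts, (t1, float("inf")))      # last point <= t1
        total += sum(v for _, v in ts[lo:hi])
    return total
```

In a production system the two layers would be separate engines (a graph database and a time-series database) joined by entity keys; the split keeps relationship traversals out of the high-volume measurement store, which is where the reported query-time gains come from.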
Lossless Compression with Trie-Based Shared Dictionary for Omics Data in Edge–Cloud Frameworks
The growing complexity and volume of genomic and omics data present critical challenges for storage, transfer, and analysis in edge–cloud platforms. Existing compression techniques often involve trade-offs between efficiency and speed, requiring innovative approaches that ensure scalability and cost-effectiveness. This paper introduces a lossless compression method that integrates Trie-based shared dictionaries within an edge–cloud architecture, and presents the software-centric research process behind the design and evaluation of the proposed method. By enabling localized preprocessing at the edge, our approach reduces data redundancy before cloud transmission, thereby optimizing both storage and network efficiency. A global shared dictionary is constructed using N-gram analysis to identify and prioritize repeated sequences across multiple files. A lightweight index derived from this dictionary is then pushed to edge nodes, where Trie-based sequence replacement is applied to eliminate redundancy locally. The preprocessed data are subsequently transmitted to the cloud, where advanced compression algorithms, such as Zstd, GZIP, Snappy, and LZ4, further compress them. Evaluation on real patient omics datasets from B-cell Acute Lymphoblastic Leukemia (B-ALL) and Chronic Lymphocytic Leukemia (CLL) demonstrates that edge preprocessing significantly improves compression ratios, reduces upload times, and enhances scalability in hybrid cloud frameworks.
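The shared-dictionary preprocessing step can be sketched at toy scale: count N-grams across files, map the most frequent ones to single-byte tokens, and substitute them at the edge before cloud-side compression. This sketch uses a `Counter` and fixed-length 4-grams for brevity where the paper uses a Trie over longer sequences; it is an illustration of the idea, not the authors' code:

```python
from collections import Counter

def build_dictionary(files, n=4, top_k=8):
    """Map the top_k most frequent length-n substrings to one-byte tokens.

    Token bytes 0x01..0x08 are assumed absent from the input alphabet
    (true for A/C/G/T sequence text).
    """
    counts = Counter()
    for text in files:
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    tokens = [chr(0x01 + i) for i in range(top_k)]
    return {gram: tok for (gram, _), tok in zip(counts.most_common(top_k), tokens)}

def encode(text, dictionary):
    # Edge-side redundancy elimination: each frequent n-gram -> 1 token
    for gram, tok in dictionary.items():
        text = text.replace(gram, tok)
    return text

def decode(text, dictionary):
    # Lossless inverse: tokens never occur in the original alphabet
    for gram, tok in dictionary.items():
        text = text.replace(tok, gram)
    return text

reads = ["ACGTACGTAGGA", "TTACGTACGTCC", "ACGTTTTTACGT"]
shared = build_dictionary(reads)          # global dictionary, built once
packed = [encode(r, shared) for r in reads]
```

After this substitution the `packed` streams still compress further with Zstd or GZIP, which is the cloud-side stage the paper evaluates.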
Research on optimization of community mass data storage based on HBASE
In HBase, data in a table is automatically sorted by Rowkey, so when organizing massive community data a timestamp is added to the storage structure to speed up queries; however, HBase region splitting then causes load imbalance across the cluster. To address this problem, this paper presents a design based on pre-partitioning and hashing: the cluster is divided into several regions in advance according to the data characteristics, and the data is then distributed evenly across the partitions through hash mapping of the Rowkey. Storing data with equal probability in each region not only solves the problems of single-node overload and wasted resources on other nodes, but also avoids concentrating query pressure on a single node. Practice shows that the pre-partitioning and hash storage mechanisms can effectively mitigate the HBase load imbalance caused by storing massive community data.
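The hash-mapped Rowkey idea amounts to prefixing each key with a hash-derived bucket so that lexicographically sorted rows spread across the pre-created regions. The field names and bucket count below are hypothetical stand-ins for the community data schema:

```python
import hashlib
from collections import Counter

N_REGIONS = 8   # cluster pre-split into regions 0..7; illustrative count

def salted_rowkey(community_id, timestamp):
    """Prefix the row key with a hash bucket so writes spread evenly.

    HBase sorts rows lexicographically; a two-digit bucket prefix sends
    each row to the pre-created region whose key range covers it, while
    the timestamp suffix keeps rows time-ordered within an entity.
    """
    raw = f"{community_id}:{timestamp}"
    bucket = int(hashlib.md5(raw.encode()).hexdigest(), 16) % N_REGIONS
    return f"{bucket:02d}|{raw}"

# Distribution check over synthetic community rows
keys = [salted_rowkey(f"cmty{i % 50}", 1_700_000_000 + i) for i in range(4000)]
load = Counter(k[:2] for k in keys)    # rows per region prefix
```

The trade-off is that range scans over time must now fan out across all buckets, which is why the bucket count should match the number of pre-split regions rather than grow arbitrarily.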
A Review of Hybrid Renewable Energy Systems: Architectures, Battery Systems, and Optimization Techniques
This paper aims to perform a literature review and statistical analysis based on data extracted from 38 articles published between 2018 and 2023 that address hybrid renewable energy systems (HRESs). The main objective of this review has been to create a bibliographic database that organizes the content of the articles into categories such as system architecture, energy storage systems, auxiliary generation components used, and software employed, in addition to showing the algorithms and the economic and reliability criteria used for the optimization of these systems. In total, 38 articles have been analyzed, compared, and classified to provide an overview of the current status of simulation and optimization projects for hybrid renewable energy systems, clearly highlighting the relevant trends and conclusions. A list of review articles covering the aspects required for understanding HRESs has also been provided.
A Joint Resource Allocation, Security with Efficient Task Scheduling in Cloud Computing Using Hybrid Machine Learning Techniques
The rapid growth of cloud computing environments, with clients ranging from personal users to large corporations, has made it challenging for cloud organizations to handle the massive volume of data and the various resources in the cloud. Inefficient management of resources can degrade the performance of cloud computing. Therefore, resources must be evenly allocated to different stakeholders without compromising the organization's profit or users' satisfaction. A customer's request cannot be withheld indefinitely simply because the underlying resources are not currently free. In this paper, a combined resource allocation and security scheme with efficient task scheduling in cloud computing using hybrid machine learning (RATS-HM) techniques is proposed to overcome these problems. The proposed RATS-HM technique is as follows: First, an improved cat swarm optimization algorithm-based short scheduler for task scheduling (ICS-TS) minimizes the make-span time and maximizes throughput. Second, a group optimization-based deep neural network (GO-DNN) performs efficient resource allocation under different design constraints, including bandwidth and resource load. Third, a lightweight authentication scheme, NSUPREME, is proposed for data encryption to secure data storage. Finally, the proposed RATS-HM technique is simulated under different simulation setups, and the results are compared with state-of-the-art techniques to prove its effectiveness. The results regarding resource utilization, energy consumption, response time, etc., show that the proposed technique is superior to existing ones.