6,290 result(s) for "Graph computing"
A Doctor Recommendation Based on Graph Computing and LDA Topic Model
Doctor recommendation technology can help patients filter out a large number of irrelevant doctors and quickly and accurately find doctors who meet their actual needs, giving patients access to helpful personalized online healthcare services. To address the problems with existing recommendation methods, this paper proposes a hybrid doctor recommendation model based on an online healthcare platform, which uses the word2vec model, the latent Dirichlet allocation (LDA) topic model, and other methods to find the doctors who best suit patients' needs from the information obtained in consultations between doctors and patients. The model then treats these doctors as nodes to construct a doctor tag co-occurrence network and recommends the most important doctors in the network via an eigenvector centrality calculation on the graph. This method identifies the important nodes in the entire effective doctor network to support the recommendation from a new graph computing perspective. An experiment conducted on the Chinese healthcare website Chunyuyisheng.com shows that the proposed method achieves good recommendation performance.
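The eigenvector-centrality step the abstract describes can be sketched in plain Python. The toy co-occurrence graph, the doctor names, and the power-iteration routine below are illustrative assumptions, not the paper's implementation:

```python
def eigenvector_centrality(adj, iters=200, tol=1e-10):
    """Power iteration on an undirected graph given as {node: set(neighbours)}."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        nxt = {v: sum(x[u] for u in adj[v]) for v in adj}
        norm = sum(val * val for val in nxt.values()) ** 0.5 or 1.0
        nxt = {v: val / norm for v, val in nxt.items()}
        if sum(abs(nxt[v] - x[v]) for v in adj) < tol:
            return nxt
        x = nxt
    return x

# Hypothetical doctor tag co-occurrence network: doctors whose consultation
# tags co-occur are linked.
graph = {
    "dr_a": {"dr_b", "dr_c", "dr_d"},
    "dr_b": {"dr_a", "dr_c"},
    "dr_c": {"dr_a", "dr_b", "dr_d"},
    "dr_d": {"dr_a", "dr_c", "dr_e"},
    "dr_e": {"dr_d"},
}
scores = eigenvector_centrality(graph)
# Recommend the most central doctors first.
recommended = sorted(scores, key=scores.get, reverse=True)
```

Nodes embedded in tightly connected neighbourhoods score highest, which is the "important doctors in the network" notion the abstract refers to.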
Ingress: an automated incremental graph processing system
Graph data keep growing over time in real life. The ever-growing amount of dynamic graph data demands efficient techniques for incremental graph computation. However, incremental graph algorithms are challenging to develop. Existing approaches usually require users to manually design nontrivial incremental operators, or to choose different memoization strategies for certain specific types of computation, limiting their usability and generality. In light of these challenges, we propose Ingress, an automated system for incremental graph processing. Ingress is able to deduce the incremental counterpart of a batch vertex-centric algorithm, without requiring redesigned logic or data structures from users. Underlying Ingress is an automated incrementalization framework equipped with four different memoization policies, to support all kinds of vertex-centric computations with optimized memory utilization. We identify sufficient conditions for the applicability of these policies, and Ingress chooses the best-fit policy for a given algorithm automatically by verifying these conditions. In addition to its ease of use and generality, Ingress outperforms state-of-the-art incremental graph systems by 12.14× on average (up to 49.23×) in efficiency.
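As a toy illustration of the incremental idea (not Ingress's actual framework), the sketch below reuses a previous single-source shortest-path result after an edge insertion, relaxing only outward from the new edge's endpoint instead of recomputing from scratch:

```python
import heapq

def sssp(adj, src):
    """Batch Dijkstra. adj: {u: [(v, w), ...]} with non-negative weights."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def sssp_insert_edge(adj, dist, u, v, w):
    """Incremental counterpart for one edge insertion: if the new edge (u, v, w)
    improves dist[v], propagate the improvement outward from v only."""
    adj[u].append((v, w))
    if dist[u] + w >= dist[v]:
        return dist                      # old result is still optimal
    dist[v] = dist[u] + w
    pq = [(dist[v], v)]
    while pq:
        d, x = heapq.heappop(pq)
        if d > dist[x]:
            continue
        for y, wy in adj[x]:
            if d + wy < dist[y]:
                dist[y] = d + wy
                heapq.heappush(pq, (dist[y], y))
    return dist
```

Deriving such incremental counterparts (and the memoized state they need) automatically from the batch algorithm is exactly what Ingress claims to do.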
PECC: parallel expansion based on clustering coefficient for efficient graph partitioning
In the pursuit of graph processing performance, graph partitioning, as a crucial preprocessing step, has received wide attention. Based on an in-depth analysis of the Neighbor Expansion (NE) graph partitioning algorithm, we propose Parallel Expansion based on Clustering Coefficient (PECC). First, to address the partition disturbance caused by internal structural changes during vertex neighborhood expansion in the traditional NE algorithm, we formally redefine the vertex states used during partitioning and introduce the clustering coefficient. PECC then uses the clustering coefficient as a metric of the closeness between vertices and potential partitions. Based on this metric, we propose a novel parallel partitioning strategy for distributed environments, consisting of two core steps: an expansion process and an allocation process. Through these two steps, PECC effectively improves program efficiency and significantly reduces partitioning time. In addition, to ensure data consistency during parallel expansion, we adopt a distributed locking engine to handle concurrency management. Our evaluations on large real-world graphs show that in many cases PECC achieves a balance between partitioning quality and computational efficiency. Finally, we show that PECC integrated into GraphX outperforms the built-in native algorithms.
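A minimal sketch of the clustering-coefficient idea. The affinity score below (neighbours already inside a candidate partition, plus the edges that close triangles with them) is an illustrative simplification, not the paper's exact metric:

```python
def clustering_coefficient(adj, v):
    """Standard local clustering coefficient: fraction of v's neighbour pairs
    that are themselves connected. adj: {node: set(neighbours)}."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def partition_affinity(adj, v, part):
    """Closeness of v to a partition: neighbours of v already in `part`,
    plus edges among those neighbours (triangles v would close)."""
    inside = [u for u in adj[v] if u in part]
    links = sum(1 for i, a in enumerate(inside)
                for b in inside[i + 1:] if b in adj[a])
    return len(inside) + links

def assign(adj, v, partitions):
    """Greedy expansion step: send v to the partition it clusters with most."""
    return max(range(len(partitions)),
               key=lambda i: partition_affinity(adj, v, partitions[i]))
```

Preferring the partition that closes the most triangles keeps tightly clustered vertices together, which is what reduces the replication that NE-style expansion otherwise suffers from.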
Exact values for three domination-like problems in circular and infinite grid graphs of small height
In this paper we study three domination-like problems, namely identifying codes, locating-dominating codes, and locating-total-dominating codes. We are interested in finding the minimum cardinality of such codes in circular and infinite grid graphs of given height. We provide alternate proofs for already-known results, as well as new results. These were obtained by a computer search based on a generic framework, which we developed earlier, for the search of a minimum labeling satisfying a pseudo-d-local property in rotagraphs.
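As a self-contained illustration of one of the three problems (not the paper's rotagraph framework), the brute-force check below verifies the defining property of an identifying code, namely that every vertex's closed neighbourhood must intersect the code in a nonempty set distinct from every other vertex's, and finds the minimum size on a small cycle:

```python
from itertools import combinations

def is_identifying_code(adj, code):
    """code identifies G iff N[v] ∩ code is nonempty and pairwise distinct."""
    code = frozenset(code)
    sigs = set()
    for v in adj:
        sig = (frozenset(adj[v]) | {v}) & code   # closed neighbourhood ∩ code
        if not sig or sig in sigs:
            return False
        sigs.add(sig)
    return True

def min_identifying_code(adj):
    """Exhaustive search over vertex subsets by increasing size."""
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for code in combinations(verts, k):
            if is_identifying_code(adj, code):
                return set(code)
    return None

# The cycle C4, i.e. a circular grid of height 1.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
```

Exhaustive search is only feasible on tiny graphs; the paper's framework avoids it by exploiting the periodic (rotagraph) structure of circular and infinite grids.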
Grapher: A Reconfigurable Graph Computing Accelerator with Optimized Processing Elements
In recent years, various graph computing architectures have been proposed to process graph data that represent complex dependencies between different objects in the world. The processing element (PE) designs in traditional graph computing accelerators are often optimized for specific graph algorithms or tasks, which limits their flexibility across different graph algorithms, or the parallel configurations supported by their PE arrays are inefficient. To achieve both flexibility and efficiency, this paper proposes Grapher, a reconfigurable graph computing accelerator based on an optimized PE array that efficiently supports multiple graph algorithms, enhances parallel computation, and improves adaptability and system performance through dynamic hardware resource configuration. To verify the performance of Grapher, we selected six datasets from the Stanford Network Analysis Project (SNAP) database for testing. Compared with the typical existing graph frameworks Ligra, Gemini, and GraphBIG, the processing time for the six datasets using the BFS, CC, and PR algorithms was reduced by up to 39.31%, 35.43%, and 27.67%, respectively. Energy efficiency is also improved by 1.8× compared to Hitgraph and 4.7× compared to ThunderGP.
Engineering Bi-Connected Component Overlay for Maximum-Flow Parallel Acceleration in Large Sparse Graph
The network maximum-flow problem is a fundamental problem in graph theory, and one of its research directions is maximum-flow acceleration in large-scale graphs. Existing acceleration strategies include graph contraction and parallel computation, but there is still room for improvement: (1) the two acceleration strategies are not fully integrated, limiting their acceleration effect; (2) there is insufficient support for computing multiple maximum flows in one graph, leading to a lot of redundant computation; (3) existing preprocessing methods need to consider node degrees and capacity constraints, resulting in high computational complexity. To address the above problems, we identify the bi-connected components in a given graph and build an overlay, which helps split the maximum-flow problem into several subproblems that are then solved in parallel. The algorithm uses only the connectivity of the graph and has low complexity. Analyses and experiments on benchmark graphs indicate that the method can significantly shorten the computation time in large sparse graphs.
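The decomposition hinges on cut vertices: any flow between different bi-connected components must pass through them, so each component can be solved independently. A minimal sketch of finding those cut vertices (standard Tarjan low-point DFS, not the paper's implementation):

```python
def articulation_points(adj):
    """Cut vertices via Tarjan's low-point DFS (recursive; fine for small graphs).
    adj: {v: set(neighbours)}, an undirected simple graph."""
    disc, low, cut = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cut.add(u)                 # u separates v's subtree
        if parent is None and children > 1:
            cut.add(u)                         # DFS root with 2+ subtrees

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return cut

# Two triangles sharing vertex "c": any flow from the left triangle to the
# right one must pass through "c", so the max-flow problem splits there.
g = {"a": {"b", "c"}, "b": {"a", "c"},
     "c": {"a", "b", "d", "e"},
     "d": {"c", "e"}, "e": {"c", "d"}}
```

Because the decomposition uses only connectivity, it runs in linear time, which is the "low complexity" the abstract claims over degree- and capacity-aware preprocessing.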
A Local Approximation Approach for Processing Time-Evolving Graphs
To efficiently process time-evolving graphs where new vertices and edges are inserted over time, an incremental computing model, which processes the newly-constructed graph based on the results of the computation on the outdated graph, is widely adopted in distributed time-evolving graph computing systems. In this paper, we first experimentally study how the results of the graph computation on the local graph structure can approximate the results of the graph computation on the complete graph structure in distributed environments. Then, we develop an optimization approach to reduce the response time in bulk synchronous parallel (BSP)-based incremental computing systems by processing time-evolving graphs on the local graph structure instead of on the complete graph structure. We have evaluated our optimization approach using the graph algorithms single-source shortest path (SSSP) and PageRank on the Amazon Elastic Compute Cloud (EC2), a central part of Amazon.com's cloud-computing platform, with different scales of graph datasets. The experimental results demonstrate that the local approximation approach can reduce the response time for the SSSP algorithm by 22% and reduce the response time for the PageRank algorithm by 7% on average compared to the existing incremental computing framework of GraphTau.
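The incremental model the abstract builds on can be illustrated by warm-starting PageRank: rerun power iteration on the updated graph, but initialize from the outdated ranks rather than uniformly, so fewer iterations are needed to reconverge. This is a toy sketch of the general idea, not GraphTau or the paper's system:

```python
def pagerank(adj, d=0.85, tol=1e-12, init=None, max_iter=500):
    """Power iteration for PageRank. adj: {u: [out-neighbours]}.
    init: optional previous rank vector to warm-start from."""
    n = len(adj)
    pr = {v: (init.get(v, 1.0 / n) if init else 1.0 / n) for v in adj}
    s = sum(pr.values())
    pr = {v: x / s for v, x in pr.items()}       # renormalise (graph may have grown)
    for it in range(1, max_iter + 1):
        dangle = sum(pr[u] for u in adj if not adj[u])
        nxt = {v: (1 - d) / n + d * dangle / n for v in adj}
        for u in adj:
            if adj[u]:
                share = d * pr[u] / len(adj[u])
                for v in adj[u]:
                    nxt[v] += share
        delta = sum(abs(nxt[v] - pr[v]) for v in adj)
        pr = nxt
        if delta < tol:
            break
    return pr, it

# Outdated graph, then the same graph after one edge insertion.
old = {"a": ["b"], "b": ["c"], "c": ["a"]}
new = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
old_pr, _ = pagerank(old)
warm_pr, warm_iters = pagerank(new, init=old_pr)   # reuse outdated result
cold_pr, cold_iters = pagerank(new)                # recompute from scratch
```

The paper's local-approximation idea goes a step further: it restricts the recomputation to the local graph structure around the change, trading a bounded amount of accuracy for response time.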
Graph computing based security constrained unit commitment in hydro-thermal power systems incorporating pumped hydro storage
This paper proposes a graph computing based mixed integer programming (MIP) framework for solving the security constrained unit commitment (SCUC) problem in hydro-thermal power systems incorporating pumped hydro storage (PHS). The proposed graph computing-based MIP framework considers the economic operations of thermal units, cascade hydropower stations and PHS stations, as well as their technical impacts on network security. First, the hydro-thermal power system data and unit information are stored in a graph structure with nodes and edges, which enables nodal and hierarchical parallel computing for the unit commitment (UC) solution calculation and network security analysis. A MIP model is then formulated to solve the SCUC problem with the mathematical models of thermal units, cascade hydropower stations and PHS stations. In addition, two optimization approaches, convex hull reformulation (CHR) and special ordered set (SOS) methods, are introduced to speed up the MIP calculation procedure. To ensure system stability under the derived UC solution, a parallelized graph power flow (PGPF) algorithm is proposed for the hydro-thermal power system network security analysis. Finally, case studies of the IEEE 118-bus system and a practical 2749-bus hydro-thermal power system are introduced to demonstrate the feasibility and validity of the proposed graph computing-based MIP framework.
OneGraph: a cross-architecture framework for large-scale graph computing on GPUs based on oneAPI
The explosive growth of graph datasets has increased the computing power and storage resources required for graph computing. To handle large-scale graph processing, heterogeneous platforms have become necessary to provide sufficient computing power and storage, the most popular scheme being the CPU-GPU architecture. However, the steep learning curve and complex concurrency control of heterogeneous platforms pose a challenge for developers. Additionally, GPUs from different vendors have varying software stacks, making cross-platform porting and verification challenging. Recently, Intel proposed oneAPI, a unified programming model for managing multiple heterogeneous devices at the same time. It offers a friendlier programming model for C++ developers and a convenient concurrency-control scheme, allowing devices from different vendors to be managed simultaneously. Hence there is an opportunity to use oneAPI to design a general cross-architecture framework for large-scale graph computing. In this paper, we propose OneGraph, a large-scale graph computing framework for multiple types of accelerators built with Intel oneAPI. Our approach significantly reduces data transfer between GPU and CPU and masks the latency with asynchronous transfer, which significantly improves performance. We conducted rigorous performance tests on the framework using four classical graph algorithms. The experimental results show that our approach achieves an average speedup of 3.3x over state-of-the-art partitioning-based approaches. Moreover, thanks to the cross-architecture model of Intel oneAPI, the framework can be deployed on different GPU platforms without code modification, and our evaluation shows that OneGraph loses less than 1% performance compared to dedicated GPU programming models in large-scale graph computing.
ZenLDA: Large-scale topic model training on distributed data-parallel platform
Recently, topic models such as Latent Dirichlet Allocation (LDA) have been widely used in large-scale web mining. Many large-scale LDA training systems have been developed, usually preferring a customized top-to-bottom design with sophisticated synchronization support. We propose an LDA training system named ZenLDA, which follows a generalized design for distributed data-parallel platforms. The novelty of ZenLDA consists of three main aspects: (1) it converts the commonly used serial Collapsed Gibbs Sampling (CGS) inference algorithm into a Monte-Carlo Collapsed Bayesian (MCCB) estimation method, which is embarrassingly parallel; (2) it decomposes the LDA inference formula into parts that can be sampled more efficiently, reducing computational complexity; (3) it proposes a distributed LDA training framework that represents the corpus as a directed graph with the parameters annotated as corresponding vertices, and implements ZenLDA and other well-known inference methods on Spark. Experimental results indicate that MCCB converges with accuracy similar to that of CGS while running much faster. On top of MCCB, the ZenLDA formula decomposition achieved the fastest speed among the well-known inference methods. ZenLDA also showed good scalability when dealing with large-scale topic models on the data-parallel platform. Overall, ZenLDA achieves comparable and even better computing performance than state-of-the-art dedicated systems.
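For reference, the serial collapsed Gibbs sampler that ZenLDA starts from can be sketched in a few lines. This is a textbook CGS implementation, not ZenLDA's MCCB variant or its Spark graph representation:

```python
import random

def lda_cgs(docs, n_topics, vocab, alpha=0.1, beta=0.01, iters=100, seed=0):
    """Textbook serial collapsed Gibbs sampling for LDA.
    docs: list of documents, each a list of word ids in [0, vocab)."""
    rng = random.Random(seed)
    ndk = [[0] * n_topics for _ in docs]            # doc-topic counts
    nkw = [[0] * vocab for _ in range(n_topics)]    # topic-word counts
    nk = [0] * n_topics                             # topic totals
    z = []                                          # topic assignment per token
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                          # remove current assignment
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # collapsed full conditional p(z_i = t | z_-i, words)
                wts = [(ndk[d][t] + alpha) * (nkw[t][w] + beta)
                       / (nk[t] + vocab * beta) for t in range(n_topics)]
                r = rng.random() * sum(wts)
                k = n_topics - 1
                for t, wt in enumerate(wts):
                    r -= wt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k                          # add new assignment
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw
```

The inner loop is inherently serial because each token update reads counts written by the previous one; ZenLDA's contribution is replacing this sequential dependency with an embarrassingly parallel estimation step.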