Efficient Large Graph Partitioning Scheme Using Incremental Processing in GPU

As processing large-scale graphs on a single device is infeasible without partitioning, graph partitioning algorithms are essential for many algorithms and distributed computing tasks that use graph data. However, graph partitioning is an NP-complete problem with high computational complexity. To address this complexity, previous studies have proposed processing graphs in parallel on GPUs. Nonetheless, because GPUs have far less memory than CPUs, such approaches are susceptible to out-of-memory (OOM) issues.

This research proposes a GPU-accelerated graph partitioning technique that employs dynamic memory management and incremental processing. The proposed method processes large graphs incrementally and reduces the overall graph size through streaming clustering on the CPU; the reduced graph is small enough to be processed on the GPU. The method combines an initial partitioning based on the label propagation algorithm with the high-degree replicated first (HDRF) algorithm to leverage the GPU's high parallel processing capability and manage the computational load of graph partitioning.
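For intuition, the high-degree replicated first (HDRF) rule mentioned above is a known streaming vertex-cut heuristic: each edge is assigned to the partition that maximizes a score combining a replication term (preferring partitions that already hold a replica of an endpoint, with the lower-degree endpoint weighted more heavily) and a load-balance term. The sketch below is a minimal, illustrative single-threaded Python version of that scoring rule only; it is not the paper's CPU/GPU pipeline, and the parameter names (`lam`, `eps`) are assumptions.

```python
from collections import defaultdict

def hdrf_partition(edge_stream, k, lam=1.0, eps=1e-6):
    """Assign each streamed edge to one of k partitions using an
    HDRF-style score: replicate the higher-degree endpoint first
    and penalize imbalanced partitions (illustrative sketch)."""
    degree = defaultdict(int)      # partial vertex degrees observed so far
    replicas = defaultdict(set)    # vertex -> partitions holding a replica
    load = [0] * k                 # edges assigned to each partition
    assignment = []

    for u, v in edge_stream:
        degree[u] += 1
        degree[v] += 1
        theta_u = degree[u] / (degree[u] + degree[v])
        theta_v = 1.0 - theta_u
        max_load, min_load = max(load), min(load)

        best_p, best_score = 0, float("-inf")
        for p in range(k):
            # Replication term: favor partitions that already hold a replica,
            # giving more weight to the lower-degree endpoint.
            c_rep = 0.0
            if p in replicas[u]:
                c_rep += 1.0 + (1.0 - theta_u)
            if p in replicas[v]:
                c_rep += 1.0 + (1.0 - theta_v)
            # Balance term: favor lightly loaded partitions.
            c_bal = lam * (max_load - load[p]) / (eps + max_load - min_load)
            score = c_rep + c_bal
            if score > best_score:
                best_p, best_score = p, score

        replicas[u].add(best_p)
        replicas[v].add(best_p)
        load[best_p] += 1
        assignment.append((u, v, best_p))

    return assignment, replicas
```

In the proposed method this kind of edge assignment runs massively in parallel on the GPU after CPU-side streaming clustering has coarsened the graph, rather than one edge at a time as in this sketch.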

Experiments on various large-scale real-world graph datasets demonstrate the efficiency, scalability, and superior partitioning quality of the proposed method. Specifically, the method achieves execution speeds up to 9 times faster than CPU-based streaming techniques on large graphs and improves the replication factor by over 20% compared to existing methods. Furthermore, it stably processes large-scale graphs that previous GPU-based methods such as GPU-P could not handle owing to memory limitations.
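For reference, the replication factor cited here is the standard quality metric for vertex-cut partitioning, assuming the usual definition: the average number of partitions holding a copy of each vertex, where a value of 1 means no vertex is replicated.

```latex
\mathrm{RF} = \frac{1}{|V|} \sum_{v \in V} |A(v)|
```

Here \(A(v)\) denotes the set of partitions containing a replica of vertex \(v\); lower values indicate less replication and hence lower communication cost in distributed computation.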
