High Quality Decomposition
This organization hosts open-source frameworks that compute high-quality graph decompositions fast.
We develop scalable algorithms for graph decomposition: partitioning, clustering, cuts, process mapping, and more.
What We Work On
Our research covers a broad range of graph decomposition problems:
Graph Partitioning -- divide a graph into k balanced blocks while minimizing the number of edges cut between them. The problem is NP-hard and central to parallel computing, VLSI design, scientific simulations, sparse matrix factorization, and route planning. Our tools range from high-quality multilevel methods with evolutionary search, to shared-memory and distributed-memory parallel partitioners, to streaming algorithms for graphs that do not fit in memory.
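The two quantities a k-way partitioner trades off can be stated in a few lines. The sketch below is illustrative only (it is not the API of any of our tools); the edge list, block assignment, and the 3% imbalance tolerance are assumed values for the example.

```python
# Illustrative sketch of the k-way partitioning objective and constraint;
# names and the eps value are assumptions, not any partitioner's API.
import math
from collections import Counter

def edge_cut(edges, part):
    """Number of edges whose endpoints land in different blocks."""
    return sum(1 for u, v in edges if part[u] != part[v])

def is_balanced(part, k, eps=0.03):
    """Usual balance constraint: no block exceeds (1 + eps) * ceil(n / k)."""
    sizes = Counter(part.values())
    n = len(part)
    return max(sizes.values()) <= (1 + eps) * math.ceil(n / k)

# A 4-cycle split into two blocks of two vertices each:
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
```

Here the partition cuts two of the four edges and satisfies the balance constraint for k = 2; a partitioner searches for an assignment that minimizes the cut subject to that constraint.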
Graph Cuts -- find minimum and maximum edge cuts in undirected graphs. Applications include network reliability, connectivity analysis, and image segmentation. We provide shared-memory parallel algorithms for exact and inexact minimum cuts, cactus representations of all minimum cuts, and multiterminal cuts.
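To make the objective concrete, here is a deliberately naive brute-force global minimum cut (exponential time, illustration only; the parallel algorithms in our tools are far more sophisticated). The graph is an assumed toy example.

```python
# Brute-force global minimum edge cut: try every bipartition of the
# vertices and count crossing edges. Exponential -- illustration only.
from itertools import combinations

def global_min_cut(n, edges):
    best = None
    for r in range(1, n // 2 + 1):
        for side in combinations(range(n), r):
            s = set(side)
            cut = sum(1 for u, v in edges if (u in s) != (v in s))
            if best is None or cut < best:
                best = cut
    return best

# Two triangles joined by a single bridge edge (2, 3):
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
```

The minimum cut of this graph is 1: removing the bridge disconnects the two triangles, and no single vertex can be separated more cheaply.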
Graph Clustering -- identify densely connected communities without requiring a predefined number of clusters. Our tools optimize modularity and support motif-based local clustering, correlation clustering on signed graphs, and streaming approaches for large-scale inputs.
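Modularity compares the fraction of edges inside each community against what a random graph with the same degrees would place there. A minimal sketch of the standard formula, on an assumed toy graph:

```python
# Modularity Q = sum over communities c of (e_c / m - (d_c / 2m)^2),
# where e_c = intra-community edges, d_c = total degree of c, m = |edges|.
def modularity(edges, comm):
    m = len(edges)
    deg, intra, dsum = {}, {}, {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        if comm[u] == comm[v]:
            intra[comm[u]] = intra.get(comm[u], 0) + 1
    for u, d in deg.items():
        dsum[comm[u]] = dsum.get(comm[u], 0) + d
    return sum(intra.get(c, 0) / m - (dsum[c] / (2 * m)) ** 2 for c in dsum)

# Two triangles joined by a bridge, clustered into the two triangles:
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
comm = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
```

For this clustering Q = 2 * (3/7 - (7/14)^2) = 5/14 ≈ 0.357, which is positive because each triangle holds more of its edges internally than a random rewiring would.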
Process Mapping -- assign communicating processes to hardware topologies so that communication cost is minimized. Closely related to graph partitioning, but additionally accounting for hierarchical network distances.
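The objective can be sketched as total communication volume weighted by topology distance. The topology, distances, and communication volumes below are assumed values for illustration, not output of any mapping tool.

```python
# Process-mapping cost: sum over communicating pairs of
# (communication volume) * (distance between their assigned cores).
def mapping_cost(comm, mapping, dist):
    """comm: (u, v, volume) triples; mapping: process -> core index;
    dist: core-to-core distance matrix."""
    return sum(w * dist[mapping[u]][mapping[v]] for u, v, w in comm)

# Toy hierarchy: two nodes, two cores each; distance 1 within a node,
# 3 across nodes (assumed values).
dist = [[0, 1, 3, 3],
        [1, 0, 3, 3],
        [3, 3, 0, 1],
        [3, 3, 1, 0]]
comm = [(0, 1, 10), (2, 3, 10), (1, 2, 1)]  # heavy pairs: (0,1), (2,3)

good = {0: 0, 1: 1, 2: 2, 3: 3}  # heavy pairs share a node
bad  = {0: 0, 1: 2, 2: 1, 3: 3}  # heavy pairs split across nodes
```

Placing each heavy pair on the same node gives cost 10·1 + 10·1 + 1·3 = 23, while splitting them across nodes costs 63; a mapping algorithm exploits exactly this hierarchy of distances.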
Hypergraph Partitioning -- partition hypergraphs where edges can connect more than two vertices. We provide FREIGHT, a fast streaming hypergraph partitioner. For high-quality hypergraph partitioning, see our sister organization KaHyPar.
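A hyperedge (net) is cut when its pins span more than one block; the cut-net count below is one common objective (partitioners such as KaHyPar also optimize the finer connectivity metric). The nets and block assignment are an assumed toy instance.

```python
# Cut-net objective for hypergraph partitioning: a net counts as cut
# if its pins touch more than one block.
def cut_nets(nets, part):
    return sum(1 for net in nets if len({part[v] for v in net}) > 1)

# Three nets over six vertices, split into two blocks:
nets = [{0, 1, 2}, {2, 3}, {3, 4, 5}]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
```

Only the two-pin net {2, 3} crosses the block boundary here, so the cut is 1; ordinary graph edges are just the special case of nets with exactly two pins.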