DOLFIN-X C++ interface
dolfinx::graph::Partitioning Class Reference

Tools for distributed graphs.

#include <Partitioning.h>

Static Public Member Functions

static std::pair< std::vector< std::int32_t >, std::vector< std::int64_t > > reorder_global_indices (MPI_Comm comm, const std::vector< std::int64_t > &global_indices, const std::vector< bool > &shared_indices)
 
static std::pair< graph::AdjacencyList< std::int32_t >, std::vector< std::int64_t > > create_local_adjacency_list (const graph::AdjacencyList< std::int64_t > &list)
 Compute a local AdjacencyList with contiguous indices from an AdjacencyList that may have non-contiguous data.
 
static std::tuple< graph::AdjacencyList< std::int32_t >, common::IndexMap > create_distributed_adjacency_list (MPI_Comm comm, const graph::AdjacencyList< std::int32_t > &list_local, const std::vector< std::int64_t > &local_to_global_links, const std::vector< bool > &shared_links)
 Build a distributed AdjacencyList with re-numbered links from an AdjacencyList that may have non-contiguous data. The distribution of the AdjacencyList nodes is unchanged.
 
static std::tuple< graph::AdjacencyList< std::int64_t >, std::vector< int >, std::vector< std::int64_t >, std::vector< int > > distribute (MPI_Comm comm, const graph::AdjacencyList< std::int64_t > &list, const graph::AdjacencyList< std::int32_t > &destinations)
 Distribute adjacency list nodes to destination ranks. The global index of each node is assumed to be the local index plus the offset for this rank.
 
static std::vector< std::int64_t > compute_ghost_indices (MPI_Comm comm, const std::vector< std::int64_t > &global_indices, const std::vector< int > &ghost_owners)
 Compute ghost indices in a global IndexMap space, from a list of arbitrary global indices, where the ghosts are at the end of the list, and their owning processes are known.
 
template<typename T >
static Eigen::Array< T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor > distribute_data (MPI_Comm comm, const std::vector< std::int64_t > &indices, const Eigen::Ref< const Eigen::Array< T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor >> &x)
 Distribute data to process ranks where it is required.
 
static std::vector< std::int64_t > compute_local_to_global_links (const graph::AdjacencyList< std::int64_t > &global, const graph::AdjacencyList< std::int32_t > &local)
 Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.
 
static std::vector< std::int32_t > compute_local_to_local (const std::vector< std::int64_t > &local0_to_global, const std::vector< std::int64_t > &local1_to_global)
 Compute a local0-to-local1 map from two local-to-global maps with common global indices.
 

Detailed Description

Tools for distributed graphs.

TODO: Add a function that sends data (Eigen arrays) to the 'owner'

Member Function Documentation

◆ compute_ghost_indices()

std::vector< std::int64_t > Partitioning::compute_ghost_indices ( MPI_Comm  comm,
const std::vector< std::int64_t > &  global_indices,
const std::vector< int > &  ghost_owners 
)
static

Compute ghost indices in a global IndexMap space, from a list of arbitrary global indices, where the ghosts are at the end of the list, and their owning processes are known.

Parameters
[in]  comm            MPI communicator
[in]  global_indices  List of arbitrary global indices, ghosts at end
[in]  ghost_owners    List of owning processes of the ghost indices
Returns
Indexing of ghosts in a global space starting from 0 on process 0
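
A minimal usage sketch with hypothetical index data; the include path and an initialized MPI environment are assumptions, not part of the documented API:

  #include <cstdint>
  #include <vector>
  #include <mpi.h>
  #include <dolfinx/graph/Partitioning.h>

  int main(int argc, char* argv[])
  {
    MPI_Init(&argc, &argv);

    // Hypothetical layout: three owned indices followed by two ghosts,
    // whose owning ranks are known
    std::vector<std::int64_t> global_indices = {100, 105, 110, 7, 12};
    std::vector<int> ghost_owners = {0, 0}; // owners of the two trailing ghosts

    // New indices for the ghosts in the re-numbered global space
    std::vector<std::int64_t> ghost_global
        = dolfinx::graph::Partitioning::compute_ghost_indices(
            MPI_COMM_WORLD, global_indices, ghost_owners);

    MPI_Finalize();
    return 0;
  }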

◆ compute_local_to_global_links()

std::vector< std::int64_t > Partitioning::compute_local_to_global_links ( const graph::AdjacencyList< std::int64_t > &  global,
const graph::AdjacencyList< std::int32_t > &  local 
)
static

Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.

Parameters
[in]  global  Adjacency list with global link indices
[in]  local   Adjacency list with local, contiguous link indices
Returns
Map from local index to global index, which if applied to the local adjacency list indices would yield the global adjacency list
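
A small sketch with hypothetical data, assuming AdjacencyList can be constructed from flat data and offsets arrays (constructor details may vary between versions):

  #include <cstdint>
  #include <vector>
  #include <dolfinx/graph/AdjacencyList.h>
  #include <dolfinx/graph/Partitioning.h>

  int main()
  {
    // Global links {{40, 41}, {41, 42}} and the matching local numbering
    std::vector<std::int64_t> gdata = {40, 41, 41, 42};
    std::vector<std::int32_t> offsets = {0, 2, 4};
    dolfinx::graph::AdjacencyList<std::int64_t> global(gdata, offsets);

    std::vector<std::int32_t> ldata = {0, 1, 1, 2};
    dolfinx::graph::AdjacencyList<std::int32_t> local(ldata, offsets);

    // map[i] is the global index of local link i; here {40, 41, 42}
    std::vector<std::int64_t> map
        = dolfinx::graph::Partitioning::compute_local_to_global_links(global, local);
    return 0;
  }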

◆ compute_local_to_local()

std::vector< std::int32_t > Partitioning::compute_local_to_local ( const std::vector< std::int64_t > &  local0_to_global,
const std::vector< std::int64_t > &  local1_to_global 
)
static

Compute a local0-to-local1 map from two local-to-global maps with common global indices.

Parameters
[in]  local0_to_global  Map from local0 indices to global indices
[in]  local1_to_global  Map from local1 indices to global indices
Returns
Map from local0 indices to local1 indices
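
A short sketch with hypothetical maps; the expected result is noted in a comment:

  #include <cstdint>
  #include <vector>
  #include <dolfinx/graph/Partitioning.h>

  int main()
  {
    // Two local numberings of the same three global indices
    std::vector<std::int64_t> local0_to_global = {40, 41, 42};
    std::vector<std::int64_t> local1_to_global = {42, 40, 41};

    // local0_to_local1[i] is the local1 index of local0 index i: {1, 2, 0}
    std::vector<std::int32_t> local0_to_local1
        = dolfinx::graph::Partitioning::compute_local_to_local(
            local0_to_global, local1_to_global);
    return 0;
  }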

◆ create_distributed_adjacency_list()

std::tuple< graph::AdjacencyList< std::int32_t >, common::IndexMap > Partitioning::create_distributed_adjacency_list ( MPI_Comm  comm,
const graph::AdjacencyList< std::int32_t > &  list_local,
const std::vector< std::int64_t > &  local_to_global_links,
const std::vector< bool > &  shared_links 
)
static

Build a distributed AdjacencyList with re-numbered links from an AdjacencyList that may have non-contiguous data. The distribution of the AdjacencyList nodes is unchanged.

Parameters
[in]  comm                   MPI communicator
[in]  list_local             Local adjacency list, with contiguous link indices
[in]  local_to_global_links  Local-to-global map for links in the local adjacency list
[in]  shared_links           Vector that is true for links that may be shared with other processes
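
A minimal sketch with hypothetical data, assuming the (data, offsets) AdjacencyList constructor and an MPI build of dolfinx:

  #include <cstdint>
  #include <vector>
  #include <mpi.h>
  #include <dolfinx/graph/AdjacencyList.h>
  #include <dolfinx/graph/Partitioning.h>

  int main(int argc, char* argv[])
  {
    MPI_Init(&argc, &argv);

    // Local list with contiguous link indices (hypothetical values)
    std::vector<std::int32_t> data = {0, 1, 1, 2};
    std::vector<std::int32_t> offsets = {0, 2, 4};
    dolfinx::graph::AdjacencyList<std::int32_t> list_local(data, offsets);

    // Global identity of each local link, and which links may be shared
    std::vector<std::int64_t> local_to_global_links = {40, 41, 42};
    std::vector<bool> shared_links = {false, true, true};

    auto [list_dist, index_map]
        = dolfinx::graph::Partitioning::create_distributed_adjacency_list(
            MPI_COMM_WORLD, list_local, local_to_global_links, shared_links);

    MPI_Finalize();
    return 0;
  }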

◆ create_local_adjacency_list()

std::pair< graph::AdjacencyList< std::int32_t >, std::vector< std::int64_t > > Partitioning::create_local_adjacency_list ( const graph::AdjacencyList< std::int64_t > &  list)
static

Compute a local AdjacencyList with contiguous indices from an AdjacencyList that may have non-contiguous data.

Parameters
[in]  list  Adjacency list with links that might not have contiguous numbering
Returns
Adjacency list with contiguous link ordering [0, 1, ..., n), and a map from the local indices in the returned adjacency list to the global indices in list
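
A short sketch with hypothetical, non-contiguous link indices; the exact local numbering chosen by the implementation is not specified here:

  #include <cstdint>
  #include <vector>
  #include <dolfinx/graph/AdjacencyList.h>
  #include <dolfinx/graph/Partitioning.h>

  int main()
  {
    // Links carry arbitrary (non-contiguous) global indices
    std::vector<std::int64_t> data = {40, 97, 97, 12};
    std::vector<std::int32_t> offsets = {0, 2, 4};
    dolfinx::graph::AdjacencyList<std::int64_t> list(data, offsets);

    // list_local re-numbers links contiguously from 0; local_to_global
    // recovers the original indices (e.g. {40, 97, 12})
    auto [list_local, local_to_global]
        = dolfinx::graph::Partitioning::create_local_adjacency_list(list);
    return 0;
  }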

◆ distribute()

std::tuple< graph::AdjacencyList< std::int64_t >, std::vector< int >, std::vector< std::int64_t >, std::vector< int > > Partitioning::distribute ( MPI_Comm  comm,
const graph::AdjacencyList< std::int64_t > &  list,
const graph::AdjacencyList< std::int32_t > &  destinations 
)
static

Distribute adjacency list nodes to destination ranks. The global index of each node is assumed to be the local index plus the offset for this rank.

Parameters
[in]  comm          MPI communicator
[in]  list          The adjacency list to distribute
[in]  destinations  Destination ranks for the ith node in the adjacency list
Returns
Adjacency list for this process, array of source ranks for each node in the adjacency list, and the original global index for each node.
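
A minimal sketch with hypothetical data. The fourth tuple element is not described above, so it is bound to a placeholder name:

  #include <cstdint>
  #include <vector>
  #include <mpi.h>
  #include <dolfinx/graph/AdjacencyList.h>
  #include <dolfinx/graph/Partitioning.h>

  int main(int argc, char* argv[])
  {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Two nodes on this rank (hypothetical); node i has global index
    // i + offset for this rank
    std::vector<std::int64_t> data = {0, 1, 1, 2};
    std::vector<std::int32_t> offsets = {0, 2, 4};
    dolfinx::graph::AdjacencyList<std::int64_t> list(data, offsets);

    // One destination rank per node; both nodes stay on this rank here
    std::vector<std::int32_t> dests = {rank, rank};
    std::vector<std::int32_t> doffsets = {0, 1, 2};
    dolfinx::graph::AdjacencyList<std::int32_t> destinations(dests, doffsets);

    // The fourth element is not documented above; named as a placeholder
    auto [dist_list, src_ranks, original_idx, extra]
        = dolfinx::graph::Partitioning::distribute(MPI_COMM_WORLD, list, destinations);

    MPI_Finalize();
    return 0;
  }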

◆ distribute_data()

template<typename T >
Eigen::Array< T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor > dolfinx::graph::Partitioning::distribute_data ( MPI_Comm  comm,
const std::vector< std::int64_t > &  indices,
const Eigen::Ref< const Eigen::Array< T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor >> &  x 
)
static

Distribute data to process ranks where it is required.

Parameters
[in]  comm     The MPI communicator
[in]  indices  Global indices of the data required by this process
[in]  x        Data on this process which may be distributed (by row). The global index for the [0, ..., n) local rows is assumed to be the local index plus the offset for this rank
Returns
The data for each index in indices
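
A minimal sketch with hypothetical row data, assuming Eigen is available and the template parameter is given explicitly:

  #include <cstdint>
  #include <vector>
  #include <mpi.h>
  #include <Eigen/Dense>
  #include <dolfinx/graph/Partitioning.h>

  int main(int argc, char* argv[])
  {
    MPI_Init(&argc, &argv);

    // Two local rows of data; global row index = local row + rank offset
    Eigen::Array<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> x(2, 3);
    x << 0.0, 0.1, 0.2,
         1.0, 1.1, 1.2;

    // Global rows required on this rank (hypothetical)
    std::vector<std::int64_t> indices = {0, 1};

    // One row of x_new per entry of indices
    Eigen::Array<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> x_new
        = dolfinx::graph::Partitioning::distribute_data<double>(
            MPI_COMM_WORLD, indices, x);

    MPI_Finalize();
    return 0;
  }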

◆ reorder_global_indices()

std::pair< std::vector< std::int32_t >, std::vector< std::int64_t > > Partitioning::reorder_global_indices ( MPI_Comm  comm,
const std::vector< std::int64_t > &  global_indices,
const std::vector< bool > &  shared_indices 
)
static
Todo:
Return the list of neighbour processes which is computed internally

Compute new, contiguous global indices from a collection of global, possibly globally non-contiguous, indices and assign process ownership to the new global indices such that the global index of owned indices increases with increasing MPI rank.

Parameters
[in]  comm            The communicator across which the indices are distributed
[in]  global_indices  Global indices on this process. Some global indices may also be on other processes
[in]  shared_indices  Vector that is true for indices that may also be on other processes. Size is the same as global_indices.
Returns
{Local (old, from local_to_global) -> local (new) indices, global indices for ghosts of this process}. The new indices are [0, ..., N), with [0, ..., n0) being owned. The new global index for an owned index is n_global = n + offset, where offset is computed from a process scan. Indices [n0, ..., N) are owned by a remote process, and the returned ghost vector maps [n0, ..., N) to their global indices.
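
A minimal sketch with hypothetical indices and sharing flags; the actual ownership decisions depend on the communicator:

  #include <cstdint>
  #include <vector>
  #include <mpi.h>
  #include <dolfinx/graph/Partitioning.h>

  int main(int argc, char* argv[])
  {
    MPI_Init(&argc, &argv);

    // Arbitrary global indices on this rank; the last two are flagged as
    // possibly present on other ranks
    std::vector<std::int64_t> global_indices = {100, 105, 7, 12};
    std::vector<bool> shared_indices = {false, false, true, true};

    // old_to_new re-numbers local indices; ghost_globals holds the new
    // global indices of entries owned by other ranks
    auto [old_to_new, ghost_globals]
        = dolfinx::graph::Partitioning::reorder_global_indices(
            MPI_COMM_WORLD, global_indices, shared_indices);

    MPI_Finalize();
    return 0;
  }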
