DOLFIN-X
DOLFIN-X C++ interface
dolfinx::graph::Partitioning Namespace Reference

Tools for distributed graphs.

Functions

std::tuple< std::vector< std::int32_t >, std::vector< std::int64_t >, std::vector< int > > reorder_global_indices (MPI_Comm comm, const std::vector< std::int64_t > &global_indices, const std::vector< bool > &shared_indices)
 
std::pair< graph::AdjacencyList< std::int32_t >, std::vector< std::int64_t > > create_local_adjacency_list (const graph::AdjacencyList< std::int64_t > &list)
 Compute a local AdjacencyList with contiguous indices from an AdjacencyList that may have non-contiguous data.
 
std::tuple< graph::AdjacencyList< std::int32_t >, common::IndexMap > create_distributed_adjacency_list (MPI_Comm comm, const graph::AdjacencyList< std::int32_t > &list_local, const std::vector< std::int64_t > &local_to_global_links, const std::vector< bool > &shared_links)
 Build a distributed AdjacencyList with re-numbered links from an AdjacencyList that may have non-contiguous data. The distribution of the AdjacencyList nodes is unchanged.
 
std::tuple< graph::AdjacencyList< std::int64_t >, std::vector< int >, std::vector< std::int64_t >, std::vector< int > > distribute (MPI_Comm comm, const graph::AdjacencyList< std::int64_t > &list, const graph::AdjacencyList< std::int32_t > &destinations)
 Distribute adjacency list nodes to destination ranks. The global index of each node is assumed to be the local index plus the offset for this rank.
 
std::vector< std::int64_t > compute_ghost_indices (MPI_Comm comm, const std::vector< std::int64_t > &global_indices, const std::vector< int > &ghost_owners)
 Compute ghost indices in a global IndexMap space, from a list of arbitrary global indices, where the ghosts are at the end of the list, and their owning processes are known.
 
template<typename T >
Eigen::Array< T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor > distribute_data (MPI_Comm comm, const std::vector< std::int64_t > &indices, const Eigen::Ref< const Eigen::Array< T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor >> &x)
 Distribute data to the process ranks where it is required.
 
std::vector< std::int64_t > compute_local_to_global_links (const graph::AdjacencyList< std::int64_t > &global, const graph::AdjacencyList< std::int32_t > &local)
 Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.
 
std::vector< std::int32_t > compute_local_to_local (const std::vector< std::int64_t > &local0_to_global, const std::vector< std::int64_t > &local1_to_global)
 Compute a local0-to-local1 map from two local-to-global maps with common global indices.
 

Detailed Description

Tools for distributed graphs.

TODO: Add a function that sends data (Eigen arrays) to the 'owner'

Function Documentation

◆ compute_ghost_indices()

std::vector< std::int64_t > dolfinx::graph::Partitioning::compute_ghost_indices (MPI_Comm comm,
    const std::vector< std::int64_t > &global_indices,
    const std::vector< int > &ghost_owners)

Compute ghost indices in a global IndexMap space, from a list of arbitrary global indices, where the ghosts are at the end of the list, and their owning processes are known.

Parameters
    [in]  comm             MPI communicator
    [in]  global_indices   List of arbitrary global indices, ghosts at end
    [in]  ghost_owners     List of owning processes of the ghost indices
Returns
    Indexing of ghosts in a global space starting from 0 on process 0
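
Example (a minimal sketch, not taken from the library documentation): two MPI ranks each own two indices and ghost one index owned by the other rank. The header path dolfinx/graph/Partitioning.h and the concrete index values are assumptions.

    #include <cstdint>
    #include <vector>
    #include <mpi.h>
    #include <dolfinx/graph/Partitioning.h> // assumed header path

    int main(int argc, char* argv[])
    {
      MPI_Init(&argc, &argv);
      int rank = 0;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      // Arbitrary global indices, with the single ghost index at the end
      const std::vector<std::int64_t> global_indices
          = (rank == 0) ? std::vector<std::int64_t>{100, 101, 200}
                        : std::vector<std::int64_t>{200, 201, 100};

      // Owning rank of each ghost index (one ghost per rank here)
      const std::vector<int> ghost_owners = {(rank == 0) ? 1 : 0};

      // New ghost indices in the contiguous global space
      const std::vector<std::int64_t> ghosts
          = dolfinx::graph::Partitioning::compute_ghost_indices(
              MPI_COMM_WORLD, global_indices, ghost_owners);

      MPI_Finalize();
      return 0;
    }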

◆ compute_local_to_global_links()

std::vector< std::int64_t > dolfinx::graph::Partitioning::compute_local_to_global_links (const graph::AdjacencyList< std::int64_t > &global,
    const graph::AdjacencyList< std::int32_t > &local)

Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.

Parameters
    [in]  global   Adjacency list with global link indices
    [in]  local    Adjacency list with local, contiguous link indices
Returns
    Map from local index to global index, which if applied to the local adjacency list indices would yield the global adjacency list
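
Example (a serial sketch under assumptions): the (data, offsets) AdjacencyList constructor and the header paths dolfinx/graph/AdjacencyList.h and dolfinx/graph/Partitioning.h are assumed here; see the AdjacencyList class documentation for the actual constructors.

    #include <cstdint>
    #include <vector>
    #include <dolfinx/graph/AdjacencyList.h> // assumed header path
    #include <dolfinx/graph/Partitioning.h>  // assumed header path

    int main()
    {
      // Two nodes: node 0 -> {10, 12}, node 1 -> {12, 14} (global links)
      const std::vector<std::int32_t> offsets = {0, 2, 4};
      const dolfinx::graph::AdjacencyList<std::int64_t> global(
          std::vector<std::int64_t>{10, 12, 12, 14}, offsets);

      // Same shape, with contiguous local links starting from zero
      const dolfinx::graph::AdjacencyList<std::int32_t> local(
          std::vector<std::int32_t>{0, 1, 1, 2}, offsets);

      // Expected map: {10, 12, 14} (local 0 -> 10, 1 -> 12, 2 -> 14)
      const std::vector<std::int64_t> local_to_global
          = dolfinx::graph::Partitioning::compute_local_to_global_links(
              global, local);
      return 0;
    }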

◆ compute_local_to_local()

std::vector< std::int32_t > dolfinx::graph::Partitioning::compute_local_to_local (const std::vector< std::int64_t > &local0_to_global,
    const std::vector< std::int64_t > &local1_to_global)

Compute a local0-to-local1 map from two local-to-global maps with common global indices.

Parameters
    [in]  local0_to_global   Map from local0 indices to global indices
    [in]  local1_to_global   Map from local1 indices to global indices
Returns
    Map from local0 indices to local1 indices
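
Example (a small, purely local sketch; the index values are illustrative only):

    #include <cstdint>
    #include <vector>
    #include <dolfinx/graph/Partitioning.h> // assumed header path

    int main()
    {
      // Two local numberings of the same three global indices
      const std::vector<std::int64_t> local0_to_global = {7, 3, 9};
      const std::vector<std::int64_t> local1_to_global = {3, 9, 7};

      // Expected result: {2, 0, 1}, e.g. global index 7 is local0 index 0
      // and local1 index 2
      const std::vector<std::int32_t> local0_to_local1
          = dolfinx::graph::Partitioning::compute_local_to_local(
              local0_to_global, local1_to_global);
      return 0;
    }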

◆ create_distributed_adjacency_list()

std::tuple< graph::AdjacencyList< std::int32_t >, common::IndexMap > dolfinx::graph::Partitioning::create_distributed_adjacency_list (MPI_Comm comm,
    const graph::AdjacencyList< std::int32_t > &list_local,
    const std::vector< std::int64_t > &local_to_global_links,
    const std::vector< bool > &shared_links)

Build a distributed AdjacencyList with re-numbered links from an AdjacencyList that may have non-contiguous data. The distribution of the AdjacencyList nodes is unchanged.

Parameters
    [in]  comm                    MPI communicator
    [in]  list_local              Local adjacency list, with contiguous link indices
    [in]  local_to_global_links   Local-to-global map for links in the local adjacency list
    [in]  shared_links            True for possible shared links
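
Example (a sketch under assumptions): the list is first renumbered with create_local_adjacency_list and then turned into a distributed list. The (data, offsets) AdjacencyList constructor, the header paths, and the choice of conservatively marking every link as possibly shared are assumptions.

    #include <cstdint>
    #include <vector>
    #include <mpi.h>
    #include <dolfinx/common/IndexMap.h>     // assumed header path
    #include <dolfinx/graph/AdjacencyList.h> // assumed header path
    #include <dolfinx/graph/Partitioning.h>  // assumed header path

    int main(int argc, char* argv[])
    {
      MPI_Init(&argc, &argv);

      // Adjacency list with (possibly non-contiguous) global link indices
      const dolfinx::graph::AdjacencyList<std::int64_t> list(
          std::vector<std::int64_t>{10, 12, 12, 14},
          std::vector<std::int32_t>{0, 2, 4});

      // Local list with contiguous link indices and its local-to-global map
      const auto [list_local, local_to_global]
          = dolfinx::graph::Partitioning::create_local_adjacency_list(list);

      // Conservatively mark every link as possibly shared with other ranks
      const std::vector<bool> shared_links(local_to_global.size(), true);

      // Distributed list with re-numbered links and the associated IndexMap
      const auto [list_dist, index_map]
          = dolfinx::graph::Partitioning::create_distributed_adjacency_list(
              MPI_COMM_WORLD, list_local, local_to_global, shared_links);

      MPI_Finalize();
      return 0;
    }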

◆ create_local_adjacency_list()

std::pair< graph::AdjacencyList< std::int32_t >, std::vector< std::int64_t > > dolfinx::graph::Partitioning::create_local_adjacency_list (const graph::AdjacencyList< std::int64_t > &list)

Compute a local AdjacencyList with contiguous indices from an AdjacencyList that may have non-contiguous data.

Parameters
    [in]  list   Adjacency list with links that might not have contiguous numbering
Returns
    Adjacency list with contiguous ordering [0, 1, ..., n), and a map from local indices in the returned adjacency list to the global indices in list
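
Example (a serial sketch; the (data, offsets) AdjacencyList constructor and the header paths are assumptions):

    #include <cstdint>
    #include <vector>
    #include <dolfinx/graph/AdjacencyList.h> // assumed header path
    #include <dolfinx/graph/Partitioning.h>  // assumed header path

    int main()
    {
      // Links use the non-contiguous global indices {10, 12, 14}
      const dolfinx::graph::AdjacencyList<std::int64_t> list(
          std::vector<std::int64_t>{10, 12, 12, 14},
          std::vector<std::int32_t>{0, 2, 4});

      // list_local uses the contiguous indices [0, 3);
      // local_to_global is expected to be {10, 12, 14}
      const auto [list_local, local_to_global]
          = dolfinx::graph::Partitioning::create_local_adjacency_list(list);
      return 0;
    }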

◆ distribute()

std::tuple< graph::AdjacencyList< std::int64_t >, std::vector< int >, std::vector< std::int64_t >, std::vector< int > > dolfinx::graph::Partitioning::distribute (MPI_Comm comm,
    const graph::AdjacencyList< std::int64_t > &list,
    const graph::AdjacencyList< std::int32_t > &destinations)

Distribute adjacency list nodes to destination ranks. The global index of each node is assumed to be the local index plus the offset for this rank.

Parameters
    [in]  comm           MPI communicator
    [in]  list           The adjacency list to distribute
    [in]  destinations   Destination ranks for the ith node in the adjacency list
Returns
    Adjacency list for this process, array of source ranks for each node in the adjacency list, and the original global index for each node.
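
Example (a sketch for two MPI ranks, each sending one of its two nodes to the other rank; the (data, offsets) AdjacencyList constructor, the header paths, and the destination choices are assumptions):

    #include <cstdint>
    #include <tuple>
    #include <vector>
    #include <mpi.h>
    #include <dolfinx/graph/AdjacencyList.h> // assumed header path
    #include <dolfinx/graph/Partitioning.h>  // assumed header path

    int main(int argc, char* argv[])
    {
      MPI_Init(&argc, &argv);

      // Two nodes per rank; global node index = local index + rank offset
      const dolfinx::graph::AdjacencyList<std::int64_t> list(
          std::vector<std::int64_t>{10, 12, 12, 14},
          std::vector<std::int32_t>{0, 2, 4});

      // One destination per node: node 0 -> rank 0, node 1 -> rank 1
      // (run with two MPI ranks for this choice of destinations)
      const dolfinx::graph::AdjacencyList<std::int32_t> destinations(
          std::vector<std::int32_t>{0, 1}, std::vector<std::int32_t>{0, 1, 2});

      const auto result = dolfinx::graph::Partitioning::distribute(
          MPI_COMM_WORLD, list, destinations);
      const auto& list_recv = std::get<0>(result);   // nodes received by this rank
      const auto& src_ranks = std::get<1>(result);   // source rank of each node
      const auto& orig_global = std::get<2>(result); // original global node index

      MPI_Finalize();
      return 0;
    }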

◆ distribute_data()

template<typename T >
Eigen::Array< T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor > dolfinx::graph::Partitioning::distribute_data (MPI_Comm comm,
    const std::vector< std::int64_t > &indices,
    const Eigen::Ref< const Eigen::Array< T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor >> &x)

Distribute data to the process ranks where it is required.

Parameters
    [in]  comm      The MPI communicator
    [in]  indices   Global indices of the data required by this process
    [in]  x         Data on this process which may be distributed (by row). The global index for the [0, ..., n) local rows is assumed to be the local index plus the offset for this rank
Returns
    The data for each index in indices
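
Example (a sketch for two MPI ranks, each holding two rows and requesting one local and one remote row; the two-rank layout and the header path are assumptions):

    #include <cstdint>
    #include <vector>
    #include <mpi.h>
    #include <Eigen/Dense>
    #include <dolfinx/graph/Partitioning.h> // assumed header path

    int main(int argc, char* argv[])
    {
      MPI_Init(&argc, &argv);
      int rank = 0;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      // Two local rows of data; the global row index is the local row index
      // plus the offset for this rank (2 * rank with two rows per rank)
      Eigen::Array<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> x(2, 3);
      x.setConstant(static_cast<double>(rank));

      // Global rows required on this rank (row 3 lives on rank 1)
      const std::vector<std::int64_t> indices = {0, 3};

      // One row of x_recv per requested global index
      const auto x_recv = dolfinx::graph::Partitioning::distribute_data<double>(
          MPI_COMM_WORLD, indices, x);

      MPI_Finalize();
      return 0;
    }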

◆ reorder_global_indices()

std::tuple< std::vector< std::int32_t >, std::vector< std::int64_t >, std::vector< int > > dolfinx::graph::Partitioning::reorder_global_indices (MPI_Comm comm,
    const std::vector< std::int64_t > &global_indices,
    const std::vector< bool > &shared_indices)
Todo:
Return the list of neighbor processes which is computed internally

Compute new, contiguous global indices from a collection of global, possibly globally non-contiguous, indices and assign process ownership to the new global indices such that the global index of owned indices increases with increasing MPI rank.

Parameters
    [in]  comm             The communicator across which the indices are distributed
    [in]  global_indices   Global indices on this process. Some global indices may also be on other processes
    [in]  shared_indices   Vector that is true for indices that may also be on other processes. Size is the same as global_indices.
Returns
    {Local (old, from local_to_global) -> local (new) indices, global indices for ghosts of this process}. The new indices are [0, ..., N), with [0, ..., n0) being owned. The new global index for an owned index is n_global = n + offset, where offset is computed from a process scan. Indices [n0, ..., N) are owned by a remote process, and the returned ghost vector maps [n0, ..., N) to global indices.
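
Example (a sketch for two MPI ranks that share the global index 200; the index values, the two-rank layout, and the header path are assumptions):

    #include <cstdint>
    #include <tuple>
    #include <vector>
    #include <mpi.h>
    #include <dolfinx/graph/Partitioning.h> // assumed header path

    int main(int argc, char* argv[])
    {
      MPI_Init(&argc, &argv);
      int rank = 0;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      // Non-contiguous global indices; 200 appears on both ranks
      const std::vector<std::int64_t> global_indices
          = (rank == 0) ? std::vector<std::int64_t>{100, 105, 200}
                        : std::vector<std::int64_t>{200, 300, 305};

      // True for indices that may also exist on the other rank
      const std::vector<bool> shared_indices
          = (rank == 0) ? std::vector<bool>{false, false, true}
                        : std::vector<bool>{true, false, false};

      const auto reordered
          = dolfinx::graph::Partitioning::reorder_global_indices(
              MPI_COMM_WORLD, global_indices, shared_indices);
      const auto& local_new = std::get<0>(reordered); // old local -> new local
      const auto& ghosts = std::get<1>(reordered);    // new global ghost indices

      MPI_Finalize();
      return 0;
    }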