DOLFIN-X C++ interface

Tools for distributed graphs.

Functions
std::tuple<std::vector<std::int32_t>, std::vector<std::int64_t>, std::vector<int>>
    reorder_global_indices(MPI_Comm comm, const std::vector<std::int64_t>& global_indices, const std::vector<bool>& shared_indices)

std::pair<graph::AdjacencyList<std::int32_t>, std::vector<std::int64_t>>
    create_local_adjacency_list(const graph::AdjacencyList<std::int64_t>& list)
        Compute a local AdjacencyList, with contiguous link indices, from an AdjacencyList that may have non-contiguous data.

std::tuple<graph::AdjacencyList<std::int32_t>, common::IndexMap>
    create_distributed_adjacency_list(MPI_Comm comm, const graph::AdjacencyList<std::int32_t>& list_local, const std::vector<std::int64_t>& local_to_global_links, const std::vector<bool>& shared_links)
        Build a distributed AdjacencyList, with re-numbered links, from an AdjacencyList that may have non-contiguous data. The distribution of the AdjacencyList nodes is unchanged.

std::tuple<graph::AdjacencyList<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>>
    distribute(MPI_Comm comm, const graph::AdjacencyList<std::int64_t>& list, const graph::AdjacencyList<std::int32_t>& destinations)
        Distribute adjacency list nodes to destination ranks. The global index of each node is assumed to be the local index plus the offset for this rank.

std::vector<std::int64_t>
    compute_ghost_indices(MPI_Comm comm, const std::vector<std::int64_t>& global_indices, const std::vector<int>& ghost_owners)
        Compute ghost indices in a global IndexMap space from a list of arbitrary global indices, where the ghosts are at the end of the list and their owning processes are known.

template<typename T>
Eigen::Array<T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>
    distribute_data(MPI_Comm comm, const std::vector<std::int64_t>& indices, const Eigen::Ref<const Eigen::Array<T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>>& x)
        Distribute data to the process ranks where it is required.

std::vector<std::int64_t>
    compute_local_to_global_links(const graph::AdjacencyList<std::int64_t>& global, const graph::AdjacencyList<std::int32_t>& local)
        Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.

std::vector<std::int32_t>
    compute_local_to_local(const std::vector<std::int64_t>& local0_to_global, const std::vector<std::int64_t>& local1_to_global)
        Compute a local0-to-local1 map from two local-to-global maps with common global indices.
Tools for distributed graphs.
TODO: Add a function that sends data (Eigen arrays) to the 'owner'
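For orientation, the sketch below strings the main routines together: distribute the nodes of a globally indexed adjacency list, re-number the links locally, and build the final list together with an IndexMap. This is a minimal sketch, not library code: the umbrella header `dolfinx.h`, the names bound to the values returned by `distribute`, and the all-true `shared_links` choice are assumptions.

```cpp
#include <cstdint>
#include <vector>

#include <dolfinx.h>
#include <mpi.h>

using namespace dolfinx;

// Hypothetical end-to-end pipeline over the routines documented below
void partition_pipeline(MPI_Comm comm,
                        const graph::AdjacencyList<std::int64_t>& list,
                        const graph::AdjacencyList<std::int32_t>& destinations)
{
  // 1. Send each node of 'list' to the rank(s) listed in 'destinations'.
  //    The names of the four returned values are illustrative only.
  auto [dist_list, src, original_index, ghost_owners]
      = graph::Partitioning::distribute(comm, list, destinations);

  // 2. Re-number the (possibly non-contiguous) global link indices so that
  //    links are local and contiguous, keeping the local-to-global map
  auto [local_list, local_to_global]
      = graph::Partitioning::create_local_adjacency_list(dist_list);

  // 3. Build the distributed adjacency list and an IndexMap for the links.
  //    Marking every link as possibly shared is a conservative assumption.
  std::vector<bool> shared_links(local_to_global.size(), true);
  auto [final_list, index_map]
      = graph::Partitioning::create_distributed_adjacency_list(
          comm, local_list, local_to_global, shared_links);

  // 'final_list' and 'index_map' are left unused in this sketch
}
```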
std::vector<std::int64_t> dolfinx::graph::Partitioning::compute_ghost_indices(MPI_Comm comm, const std::vector<std::int64_t>& global_indices, const std::vector<int>& ghost_owners)

Compute ghost indices in a global IndexMap space from a list of arbitrary global indices, where the ghosts are at the end of the list and their owning processes are known.

Parameters:
- [in] comm: MPI communicator
- [in] global_indices: List of arbitrary global indices, ghosts at end
- [in] ghost_owners: List of owning processes of the ghost indices
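A minimal call sketch, assuming the umbrella header `dolfinx.h` and that MPI is initialised elsewhere; the wrapper name `renumber_ghosts` is hypothetical. It simply restates the layout the function expects: owned indices first, then one ghost per entry of `ghost_owners`.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

#include <dolfinx.h>
#include <mpi.h>

std::vector<std::int64_t>
renumber_ghosts(MPI_Comm comm, const std::vector<std::int64_t>& global_indices,
                const std::vector<int>& ghost_owners)
{
  // One owner per ghost; the ghosts occupy the tail of 'global_indices'
  assert(ghost_owners.size() <= global_indices.size());

  // Returns the ghost indices expressed in the new, contiguous global space
  return dolfinx::graph::Partitioning::compute_ghost_indices(
      comm, global_indices, ghost_owners);
}
```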
std::vector<std::int64_t> dolfinx::graph::Partitioning::compute_local_to_global_links(const graph::AdjacencyList<std::int64_t>& global, const graph::AdjacencyList<std::int32_t>& local)

Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.

Parameters:
- [in] global: Adjacency list with global link indices
- [in] local: Adjacency list with local, contiguous link indices
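Because the two lists have the same shape, entry k of both flattened link arrays refers to the same link. The std-only sketch below illustrates that semantics on flat vectors; it is not the library routine, which operates on AdjacencyList objects.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Reference illustration on flattened link arrays: position k of both arrays
// describes the same link, so map[local_links[k]] = global_links[k].
std::vector<std::int64_t>
local_to_global_reference(const std::vector<std::int64_t>& global_links,
                          const std::vector<std::int32_t>& local_links)
{
  // Number of distinct local link indices (they are contiguous from zero)
  std::int32_t num_local = 0;
  for (std::int32_t l : local_links)
    num_local = std::max<std::int32_t>(num_local, l + 1);

  std::vector<std::int64_t> map(static_cast<std::size_t>(num_local), -1);
  for (std::size_t k = 0; k < local_links.size(); ++k)
    map[local_links[k]] = global_links[k];
  return map;
}
```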
std::vector<std::int32_t> dolfinx::graph::Partitioning::compute_local_to_local(const std::vector<std::int64_t>& local0_to_global, const std::vector<std::int64_t>& local1_to_global)

Compute a local0-to-local1 map from two local-to-global maps with common global indices.

Parameters:
- [in] local0_to_global: Map from local0 indices to global indices
- [in] local1_to_global: Map from local1 indices to global indices
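The mapping is fixed by matching the shared global indices: local0 index i maps to the local1 index j for which local1_to_global[j] == local0_to_global[i]. A reference illustration with standard containers (not the library implementation, which may differ in ordering and error handling):

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Build the local0 -> local1 map by matching shared global indices
std::vector<std::int32_t>
local0_to_local1_reference(const std::vector<std::int64_t>& local0_to_global,
                           const std::vector<std::int64_t>& local1_to_global)
{
  // Invert the second map: global index -> local1 index
  std::unordered_map<std::int64_t, std::int32_t> global_to_local1;
  for (std::size_t i = 0; i < local1_to_global.size(); ++i)
    global_to_local1[local1_to_global[i]] = static_cast<std::int32_t>(i);

  // Compose: local0 -> global -> local1 (every local0 global index is
  // assumed to also appear in local1_to_global)
  std::vector<std::int32_t> map;
  map.reserve(local0_to_global.size());
  for (std::int64_t global : local0_to_global)
    map.push_back(global_to_local1.at(global));
  return map;
}
```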
std::tuple<graph::AdjacencyList<std::int32_t>, common::IndexMap> dolfinx::graph::Partitioning::create_distributed_adjacency_list(MPI_Comm comm, const graph::AdjacencyList<std::int32_t>& list_local, const std::vector<std::int64_t>& local_to_global_links, const std::vector<bool>& shared_links)

Build a distributed AdjacencyList, with re-numbered links, from an AdjacencyList that may have non-contiguous data. The distribution of the AdjacencyList nodes is unchanged.

Parameters:
- [in] comm: MPI communicator
- [in] list_local: Local adjacency list, with contiguous link indices
- [in] local_to_global_links: Local-to-global map for links in the local adjacency list
- [in] shared_links: True for links that may be shared with another process
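A hypothetical wrapper showing how the inputs fit together; the header path, the wrapper name, and the conservative all-true `shared_links` choice are assumptions of this sketch.

```cpp
#include <cstdint>
#include <tuple>
#include <vector>

#include <dolfinx.h>
#include <mpi.h>

// Inputs: a locally numbered adjacency list (e.g. from
// create_local_adjacency_list) and its link local-to-global map
std::tuple<dolfinx::graph::AdjacencyList<std::int32_t>, dolfinx::common::IndexMap>
build_distributed_list(MPI_Comm comm,
                       const dolfinx::graph::AdjacencyList<std::int32_t>& list_local,
                       const std::vector<std::int64_t>& local_to_global_links)
{
  // Conservative assumption for this sketch: every link may be shared
  std::vector<bool> shared_links(local_to_global_links.size(), true);
  return dolfinx::graph::Partitioning::create_distributed_adjacency_list(
      comm, list_local, local_to_global_links, shared_links);
}
```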
std::pair<graph::AdjacencyList<std::int32_t>, std::vector<std::int64_t>> dolfinx::graph::Partitioning::create_local_adjacency_list(const graph::AdjacencyList<std::int64_t>& list)

Compute a local AdjacencyList, with contiguous link indices, from an AdjacencyList that may have non-contiguous data.

Parameters:
- [in] list: Adjacency list with links that might not have contiguous numbering

Returns: The local adjacency list with contiguous link indices, and the local-to-global map for the links in list.
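A conceptual, std-only illustration of the renumbering applied to a flat array of global link values. The ordering chosen by the library routine is not specified on this page, so first-appearance order is used here purely for illustration.

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

// Renumber non-contiguous global link values into contiguous local indices,
// returning the local links and the local-to-global map (mirroring the pair
// returned by create_local_adjacency_list)
std::pair<std::vector<std::int32_t>, std::vector<std::int64_t>>
renumber_links(const std::vector<std::int64_t>& global_links)
{
  std::unordered_map<std::int64_t, std::int32_t> global_to_local;
  std::vector<std::int32_t> local_links;
  std::vector<std::int64_t> local_to_global;
  local_links.reserve(global_links.size());

  for (std::int64_t g : global_links)
  {
    // First occurrence of g gets the next free local index
    auto [it, inserted] = global_to_local.insert(
        {g, static_cast<std::int32_t>(local_to_global.size())});
    if (inserted)
      local_to_global.push_back(g);
    local_links.push_back(it->second);
  }
  return {std::move(local_links), std::move(local_to_global)};
}
```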
std::tuple<graph::AdjacencyList<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>> dolfinx::graph::Partitioning::distribute(MPI_Comm comm, const graph::AdjacencyList<std::int64_t>& list, const graph::AdjacencyList<std::int32_t>& destinations)

Distribute adjacency list nodes to destination ranks. The global index of each node is assumed to be the local index plus the offset for this rank.

Parameters:
- [in] comm: MPI communicator
- [in] list: The adjacency list to distribute
- [in] destinations: Destination ranks for the ith node in the adjacency list
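The sketch below spells out the indexing convention: node i of the local list is taken to have global index offset + i, where offset is the number of nodes held by lower-ranked processes. The wrapper name, the use of MPI_Exscan, and the names bound to the returned values are illustrative, and it is assumed that AdjacencyList exposes num_nodes().

```cpp
#include <cstdint>

#include <dolfinx.h>
#include <mpi.h>

void distribute_nodes(MPI_Comm comm,
                      const dolfinx::graph::AdjacencyList<std::int64_t>& list,
                      const dolfinx::graph::AdjacencyList<std::int32_t>& destinations)
{
  // The offset implied for this rank: exclusive prefix sum of local sizes,
  // so local node i is treated as having global index 'offset + i'
  std::int64_t num_local_nodes = list.num_nodes();
  std::int64_t offset = 0;
  int rank = 0;
  MPI_Comm_rank(comm, &rank);
  MPI_Exscan(&num_local_nodes, &offset, 1, MPI_INT64_T, MPI_SUM, comm);
  if (rank == 0)
    offset = 0; // MPI_Exscan leaves the result undefined on rank 0

  // Send node i to the rank(s) listed for it in 'destinations'. The names of
  // the returned values are illustrative; see the header for their meaning.
  auto [dist_list, src, global_index, ghost_owners]
      = dolfinx::graph::Partitioning::distribute(comm, list, destinations);
}
```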
template<typename T>
Eigen::Array<T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> dolfinx::graph::Partitioning::distribute_data(MPI_Comm comm, const std::vector<std::int64_t>& indices, const Eigen::Ref<const Eigen::Array<T, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>>& x)

Distribute data to the process ranks where it is required.

Parameters:
- [in] comm: The MPI communicator
- [in] indices: Global indices of the data required by this process
- [in] x: Data on this process which may be distributed (by row). The global index for the [0, ..., n) local rows is assumed to be the local index plus the offset for this rank

Returns: The rows of data corresponding to the requested global indices.
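A usage sketch for a double-valued array; the wrapper name and header path are assumptions. Each rank holds a block of rows of a globally row-distributed array, and rows are requested by their global index.

```cpp
#include <cstdint>
#include <vector>

#include <Eigen/Dense>
#include <dolfinx.h>
#include <mpi.h>

// Fetch the rows with the given global indices, wherever they are stored
Eigen::Array<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>
fetch_rows(MPI_Comm comm, const std::vector<std::int64_t>& required_indices,
           const Eigen::Array<double, Eigen::Dynamic, Eigen::Dynamic,
                              Eigen::RowMajor>& x_local)
{
  // Returns one row of data per entry of 'required_indices'
  return dolfinx::graph::Partitioning::distribute_data<double>(
      comm, required_indices, x_local);
}
```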
std::tuple<std::vector<std::int32_t>, std::vector<std::int64_t>, std::vector<int>> dolfinx::graph::Partitioning::reorder_global_indices(MPI_Comm comm, const std::vector<std::int64_t>& global_indices, const std::vector<bool>& shared_indices)

Compute new, contiguous global indices from a collection of global, possibly globally non-contiguous, indices, and assign process ownership to the new global indices such that the global index of owned indices increases with increasing MPI rank.

Parameters:
- [in] comm: The communicator across which the indices are distributed
- [in] global_indices: Global indices on this process. Some global indices may also be on other processes
- [in] shared_indices: Vector that is true for indices that may also be on other processes. Size is the same as global_indices
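The numbering convention described above (owned indices increase with MPI rank) amounts to giving each rank a contiguous block of new global indices starting at the total count owned by lower ranks. The sketch below illustrates only that convention with MPI primitives; it is not the library routine, and the function name is hypothetical.

```cpp
#include <cstdint>

#include <mpi.h>

// If this rank owns 'num_owned' indices, their new contiguous global indices
// are offset, offset + 1, ..., offset + num_owned - 1, where 'offset' is the
// total number of indices owned by lower-ranked processes
std::int64_t owned_index_offset(MPI_Comm comm, std::int64_t num_owned)
{
  int rank = 0;
  MPI_Comm_rank(comm, &rank);

  std::int64_t offset = 0;
  MPI_Exscan(&num_owned, &offset, 1, MPI_INT64_T, MPI_SUM, comm);
  if (rank == 0)
    offset = 0; // MPI_Exscan leaves the result undefined on rank 0
  return offset;
}
```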