OMP support in graph-tool

Quick question: which functions/methods support OMP in graph-tool?

Whenever a function supports OpenMP, this is stated in its documentation. For example, the docstring of pagerank() says:

If enabled during compilation, this algorithm runs in parallel.

If no statement of this kind exists, the function runs only serially.
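Since the docstrings are the authoritative source here, one could scan them programmatically for the parallelism notice. A minimal sketch of that heuristic (the stand-in functions below are hypothetical, not part of graph-tool):

```python
# Heuristic: graph-tool states OpenMP support explicitly in docstrings,
# so searching for the notice identifies parallel-capable functions.
PARALLEL_NOTICE = "this algorithm runs in parallel"

def supports_openmp(func):
    """Return True if the function's docstring carries the OpenMP notice."""
    return PARALLEL_NOTICE in (func.__doc__ or "").lower()

# Hypothetical stand-ins to exercise the check (not graph-tool code):
def pagerank_like():
    """If enabled during compilation, this algorithm runs in parallel."""

def serial_only():
    """This algorithm runs serially."""

print(supports_openmp(pagerank_like))  # True
print(supports_openmp(serial_only))    # False
```

In practice you would apply `supports_openmp` to the actual graph-tool functions you are interested in, e.g. `graph_tool.centrality.pagerank`.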


So, for example, nothing that deals with MCMC supports parallel execution. I know I brought this up once before, when I was asking about GPU support. You answered that the MCMC part would be quite difficult to parallelize. I was looking around and stumbled upon this:

https://martiningram.github.io/mcmc-comparison/

Do you think that something similar could be introduced in graph-tool?

This is a difficult problem. MCMC parallelization only works cleanly when the posterior factorizes. Otherwise, there is a shared global state, and trivial parallelization becomes impossible. There are possible heuristic ways to move forward, but no clear solution. This is something that is always on my mind, but I don't yet know how to solve it well.
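To illustrate the factorization point with a toy sketch (plain Python, nothing to do with graph-tool's internals): when the posterior splits into independent factors, each factor gets its own Metropolis chain with no shared state, so the chains could run on separate threads or processes with zero coordination. With a shared global state — as in blockmodel inference, where moving one node changes the likelihood of moves elsewhere — this decomposition is exactly what is unavailable.

```python
import math
import random

def metropolis(logp, x0, n_steps, step=1.0, seed=0):
    """Plain Metropolis sampler. Each call owns its RNG and state,
    so independent chains share nothing and parallelize trivially."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    samples = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        lq = logp(y)
        if lq - lp > math.log(rng.random()):  # accept/reject
            x, lp = y, lq
        samples.append(x)
    return samples

# A posterior that factorizes: p(a, b) = p(a) * p(b).
logp_a = lambda a: -0.5 * (a - 1.0) ** 2   # factor 1: N(1, 1)
logp_b = lambda b: -0.5 * (b + 2.0) ** 2   # factor 2: N(-2, 1)

# Each factor sampled by its own chain; these two calls are
# embarrassingly parallel since they touch no common state.
chain_a = metropolis(logp_a, 0.0, 20000, seed=1)
chain_b = metropolis(logp_b, 0.0, 20000, seed=2)

mean_a = sum(chain_a) / len(chain_a)  # close to 1.0
mean_b = sum(chain_b) / len(chain_b)  # close to -2.0
```

The multi-chain speedups in the benchmark linked above rely on the same independence: separate chains targeting the same posterior never interact, which is a much weaker requirement than parallelizing moves *within* one chain over a coupled state.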


Apparently, ChatGPT says that GPU support is there, but that you have to compile with CUDA enabled:

# Load the graph
g = gt.load_graph("path/to/graph.xml.gz")

# Define the nested block state
state = gt.NestedBlockState(g)

# Define the number of levels and blocks for each level
state.set_levels(2)
state.set_bs([5, 10])

# Define the model parameters
params = dict(am=gt.ARPackMaslov(), state=state)

# Run the inference on GPU
gt.mcmc_equilibrate(params, niter=5, mcmc_args=dict(niter=1000, parallel=True, n_jobs=-1, log_prob=gt.mcmc_nested_blockmodel_log_prob,
                                                   optimize_args=dict(n_part_init=100, n_iter_init=100, n_iter_optimize=1000,
                                                                      parallel=True, n_jobs=-1,
                                                                      force_n_iter=True,
                                                                      device='gpu')))

# Get the inferred block membership
b = state.get_levels()[1].get_blocks()

It’s all hallucinatory garbage. Never trust these stochastic lie generators.

Ahahah, I don’t!
It was fun, though