I am trying to run "get_edges_probs" for a large number of edges on a reasonably sized network. This would take too long to run on my local machine, so I am considering sending it off to a cluster. Having spoken to the admin, it seems I would need to split the job into several smaller jobs, each of which I can then run on a different node.

My question is thus: if I split my list of edges into, say, 50 separate lists and then set up 50 jobs, each of which processes one of these lists, can I compare the results of these jobs to each other by simply recombining the resulting set of log-likelihoods and then calculating my likelihood ratios as outlined in the cookbook? Or does the stochastic nature of the algorithm mean that I can't necessarily compare results from different runs to each other in this manner?
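For what it's worth, the bookkeeping side of the plan (splitting the edge list into contiguous chunks and recombining the per-job results in the original order) can be sketched as below. This is only an illustration of the split/recombine step, not of the inference itself; `split_chunks` is a hypothetical helper, not part of any library, and the per-job log-likelihood computation is elided.

```python
def split_chunks(items, n_jobs):
    """Split items into n_jobs contiguous, roughly equal chunks."""
    k, r = divmod(len(items), n_jobs)
    chunks, start = [], 0
    for j in range(n_jobs):
        end = start + k + (1 if j < r else 0)  # first r chunks get one extra item
        chunks.append(items[start:end])
        start = end
    return chunks

# Hypothetical list of (source, target) edge pairs to query.
edge_list = [(i, i + 1) for i in range(103)]

# One chunk per cluster job.
chunks = split_chunks(edge_list, 50)

# Each job would compute log-likelihoods for its chunk; recombining
# the outputs is just concatenation in the same chunk order, which
# restores the original edge ordering.
recombined = [e for chunk in chunks for e in chunk]
assert recombined == edge_list
```

Because the chunks are contiguous and concatenated in order, each recombined value lines up with the edge it was computed for, so the recombination itself introduces no bookkeeping error; whether the values from independent runs are directly comparable is the statistical question above.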

Thank you for any help in advance.

Best wishes,

Philipp