I am trying to reproduce the results of a colleague, who used graph-tool version 2.31 to fit a 3-block stochastic block model to a neural network. After initializing a graph, g, his code looks like this:
models = [gt.minimize_blockmodel_dl(g, deg_corr=True, B_min=3, B_max=3,
                                    mcmc_multilevel_args=dict(anneal=True))
          for _ in range(10)]
best_model = models[np.argmin([model.entropy() for model in models])]
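For what it's worth, I have verified the selection logic itself in isolation, with dummy entropy values standing in for the fitted models (lower description length is better):

```python
import numpy as np

# Dummy entropy (description length) values standing in for
# [model.entropy() for model in models]; lower is better.
entropies = [13050.2, 9012.7, 12877.5]
best_index = int(np.argmin(entropies))  # selects the lowest-entropy fit
print(best_index)  # prints 1
```

So the discrepancy is not in how the best of the ten fits is chosen.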
Things seem to have changed quite a bit between that version and the current version. My code for fitting the block model looks like this:
model = gt.minimize_blockmodel_dl(g, state_args=dict(deg_corr=True),
                                  multilevel_mcmc_args=dict(B_min=3, B_max=3))
But this lacks simulated annealing, which appears to be quite important: his best entropy comes out around 9000, while mine comes out around 13000. I've tried adding simulated annealing as suggested in the documentation:
gt.mcmc_anneal(model, beta_range=(1, 10), niter=1000, mcmc_equilibrate_args=dict(force_niter=10))
But this doesn’t preserve the number of blocks, and I can’t find an argument to fix it. I thought the following code would accomplish something similar:
betas = np.linspace(1, 10, 1000)
for j in range(1000):
    gt.mcmc_equilibrate(model, mcmc_args=dict(beta=betas[j], niter=10, d=0,
                                              pmerge=0, psplit=0, pmergesplit=0))
But it takes so long to run that I haven’t been able to run it all the way through. How can I reproduce the model fit my colleague was performing in the current version of graph-tool?
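Edit: the other variant I considered was forwarding the sweep arguments through mcmc_anneal itself. This is only a guess on my part: I am assuming that d, psplit, pmerge, and pmergesplit are the multiflip sweep parameters that change the number of groups, and that they can be passed down via mcmc_equilibrate_args:

```python
# Guess: forward the sweep arguments through mcmc_equilibrate_args so that
# annealing never proposes moves that change the number of groups.
# Assumes d, psplit, pmerge, and pmergesplit are the relevant parameters.
gt.mcmc_anneal(model, beta_range=(1, 10), niter=1000,
               mcmc_equilibrate_args=dict(
                   force_niter=10,
                   mcmc_args=dict(d=0, psplit=0, pmerge=0, pmergesplit=0)))
```

But I don’t know whether this is the intended way to pin the number of blocks.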