# Simulated Annealing with fixed block count

I am trying to reproduce the results of a colleague who used graph-tool version 2.31 to fit a 3-block stochastic block model to a neural network. After initializing a graph `g`, his code looks like this:

```python
models = [gt.minimize_blockmodel_dl(g, deg_corr=True, B_min=3, B_max=3,
                                    mcmc_multilevel_args=dict(anneal=True))
          for _ in range(10)]
best_model = models[np.argmin([model.entropy() for model in models])]
```
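For what it's worth, the selection step only depends on the list of entropies, so the best-of-n pattern can be sketched without graph-tool (the `entropies` values below are made-up placeholders standing in for `model.entropy()` results):

```python
import numpy as np

# Hypothetical description lengths (entropies) from three independent fits;
# in the real code these come from model.entropy() on each fitted state.
entropies = [13050.2, 12990.7, 13011.3]

# np.argmin picks the index of the fit with the smallest description length,
# which is exactly what the selection line above does across the ten models.
best_index = int(np.argmin(entropies))
best_entropy = entropies[best_index]
```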

Things seem to have changed quite a bit between that version and the current version. My code for fitting the block model looks like this:

```python
model = gt.minimize_blockmodel_dl(g, state_args=dict(deg_corr=True),
                                  multilevel_mcmc_args=dict(B_min=3, B_max=3))
```

But this lacks simulated annealing, which appears to be quite important: his best entropy comes out around 9000, and my best comes out around 13000. I've tried adding simulated annealing as suggested in the documentation:

```python
gt.mcmc_anneal(model, beta_range=(1, 10), niter=1000,
               mcmc_equilibrate_args=dict(force_niter=10))
```

But this doesn’t preserve the number of blocks, and I can’t find arguments to fix the number of blocks. I thought this code would accomplish something similar:

```python
# Manual annealing loop: sweep beta linearly from 1 to 10, holding the
# number of groups fixed by disabling merges and splits.
betas = np.linspace(1, 10, 1000)
for j in range(1000):
    gt.mcmc_equilibrate(model, mcmc_args=dict(beta=betas[j], niter=10,
                                              d=0, pmerge=0, psplit=0))
```

But it takes so long that I haven't been able to run it all the way through. How can I implement the same model fit my colleague was performing in the current version of graph-tool?

I doubt that annealing was responsible for such a large difference in description length, but in any case, to preserve the number of groups you should do:

```python
gt.mcmc_anneal(model, beta_range=(1, 10), niter=1000,
               mcmc_equilibrate_args=dict(force_niter=10,
                                          mcmc_args=dict(pmerge=0,
                                                         psplit=0, d=0)))
```

This is similar to the loop you wrote in the end. To make it run faster, you have to decrease `niter` or `force_niter`. But simulated annealing is slow in general…
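To make the speed trade-off concrete, here is a back-of-the-envelope sketch (the assumption that total cost scales as `niter * force_niter` sweeps is mine, not a statement from the graph-tool docs):

```python
# Rough cost accounting for the mcmc_anneal call above, assuming each of the
# niter temperature steps runs one equilibration with force_niter sweeps.
niter = 1000       # temperature steps between beta=1 and beta=10
force_niter = 10   # equilibration sweeps at each temperature
total_sweeps = niter * force_niter

# Halving either knob halves the total work:
faster_sweeps = (niter // 2) * force_niter
```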

As I said, though, I don't believe the difference you are seeing is due to annealing; it's hard to say without more concrete details.

You were correct that the bulk of the difference did not come from the annealing. It turns out most of the difference was due to using weighted edges vs. unweighted edges. However, after switching to unweighted edges, I still consistently come out with a description length that is about 400 higher. The code you posted for the annealing worked, but I found it made very little difference. Unless the `anneal` argument in the old version is doing something different that's somehow much more effective, I'm now at a loss to see what's different between this code in graph-tool version 2.31:

```python
model = gt.minimize_blockmodel_dl(g, deg_corr=True, B_min=3, B_max=3,
                                  mcmc_multilevel_args=dict(anneal=True))
```

and this code in graph-tool version 2.53:

```python
model = gt.minimize_blockmodel_dl(g, state_args=dict(deg_corr=True),
                                  multilevel_mcmc_args=dict(B_min=3, B_max=3))
gt.mcmc_anneal(model, beta_range=(1, 10), niter=1000,
               mcmc_equilibrate_args=dict(force_niter=10,
                                          mcmc_args=dict(d=0, pmerge=0, psplit=0)))
```

Are the partitions you find any different? If you save the state with one version and load it with the other, what description length do you find? Is the network directed or undirected?

This debugging would have been easier if you provided us with a minimal working example that shows the problem.