Repeated partition across levels

I have found (in the example below) that minimize_nested_blockmodel_dl can return a state in which the same partition is repeated across several levels (in this case, levels 2-7 all have the same two groups).
Removing one (or more) of these levels marginally reduces the model’s negative joint log-likelihood.

>>> import graph_tool.all as gt
>>> gt.__version__
'2.77 (commit , )'
>>> import numpy as np
>>> g = gt.collection.ns["foodweb_baywet"]
>>> 
>>> y = g.ep.weight.copy()
>>> y.a = np.log(y.a)
>>> 
>>> state_ln = gt.minimize_nested_blockmodel_dl(
...      g,
...      state_args=dict(
...              recs=[y],
...              rec_types=["real-normal"]
...     )
... )
>>> 
>>> for i in range(100):
...     ret = state_ln.multiflip_mcmc_sweep(niter=10, beta=np.inf)
... 
>>> state_ln.print_summary()
l: 0, N: 128, B: 7
l: 1, N: 7, B: 5
l: 2, N: 5, B: 2
l: 3, N: 2, B: 2
l: 4, N: 2, B: 2
l: 5, N: 2, B: 2
l: 6, N: 2, B: 2
l: 7, N: 2, B: 2
>>> state_ln_new = state_ln.copy(bs=state_ln.get_bs()[:-1])
>>> state_ln_new.print_summary()
l: 0, N: 128, B: 7
l: 1, N: 7, B: 5
l: 2, N: 5, B: 2
l: 3, N: 2, B: 2
l: 4, N: 2, B: 2
l: 5, N: 2, B: 2
l: 6, N: 2, B: 2
>>> print('original state:', state_ln.entropy())
original state: 9588.688003175661
>>> print('state removing l=7:', state_ln_new.entropy())
state removing l=7: 9587.301708814493

Can you please explain exactly what you mean by “removing one (or more) of these layers” and by how much the log-likelihood changes? Do you keep the total number of layers fixed, or is this reduced as well?

As a rule, it’s important to show a minimal and complete working example that demonstrates the problem. A vague description is not very helpful.

Thank you for the reply.
Firstly, I apologize as there was some confusion in my original post between the terms “layer” and “level” (I have now edited it).

What I mean by “remove the levels” is shown in the example in my original post: I copy the original state and pass, as the bs parameter, the hierarchical partition of the original state without the top level (l=7 in the example). Maybe this is not the correct way to proceed.

The change in the log-likelihood is also shown in the example. The variation is minimal (~1.4 nats).

Oh, I had missed the second part of your example, I apologize!

The removal of the last level in your example only reduces the negative log-likelihood because the total length of the hierarchy (the number of levels) has changed; this also contributes to the likelihood.

If instead you do

state_ln_new = state_ln.copy(bs=state_ln.get_bs()[:-1] + [np.zeros(1)])

i.e. replace the last level with a partition into a single group, then the negative log-likelihood would increase.
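
A minimal sketch of this comparison, continuing the session above (the name state_ln_one is mine, chosen so that state_ln_new from the original post is kept for reference):

import numpy as np

# Keep the total number of levels fixed, but make the top level a
# trivial single-group partition instead of dropping it:
state_ln_one = state_ln.copy(bs=state_ln.get_bs()[:-1] + [np.zeros(1)])

print("original state:        ", state_ln.entropy())
print("top level dropped:     ", state_ln_new.entropy())
print("top level -> one group:", state_ln_one.entropy())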

So, it’s a correct but somewhat counterintuitive artefact of this model/data that it will fill up all the available levels, yet the fit improves if the total number of levels is reduced. The current code is based on the idea that the total number of levels is fixed, so the proper way to proceed is to constrain this a posteriori if repeated levels are found.
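
For illustration, here is a minimal sketch of such a posteriori pruning, assuming that “repeated” trailing levels are exactly the identity partitions (as many groups as nodes); the helper drop_repeated_levels is hypothetical, not part of graph-tool:

import numpy as np

def drop_repeated_levels(state):
    # Return a copy of the state without trailing levels whose partition
    # is the identity (B == N), i.e. levels that merely repeat the
    # partition below them; at least one level is always kept.
    bs = list(state.get_bs())
    while len(bs) > 1 and len(np.unique(bs[-1])) == len(bs[-1]):
        bs.pop()
    return state.copy(bs=bs)

Applied to the summary above, this would drop the identity levels (l=3 onwards), leaving l=2 (five nodes partitioned into two groups) as the top level.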
