Hello Tiago.

Here is example code that causes the same infinite looping:

import numpy as np
from graph_tool.all import *

gg = collection.data['football']
# wrap map() in list() so this also works on Python 3
dummy_type = np.array(list(map(lambda x: int('state' in x.lower()),
                               gg.vp.label.get_2d_array([0])[0])))
dummy_weights = gg.new_edge_property('double')
dummy_weights.a = (edge_endpoint_property(gg, gg.vp.value, "source").a -
                   edge_endpoint_property(gg, gg.vp.value, "target").a)
test = minimize_nested_blockmodel_dl(gg, deg_corr=False, b_min=dummy_type,
                                     B_max=50, B_min=5,
                                     state_args=dict(clabel=dummy_type,
                                                     recs=[dummy_weights],
                                                     rec_types=['real-normal']),
                                     verbose=True)
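(For reference, the dummy_type construction above simply flags which vertex labels contain "state". A standalone illustration with made-up labels, independent of graph-tool:)

```python
import numpy as np

# hypothetical labels standing in for gg.vp.label values
labels = ["FloridaState", "Miami", "OhioState", "Navy"]
dummy_type = np.array([int('state' in s.lower()) for s in labels])
print(dummy_type)  # [1 0 1 0]
```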

My investigation showed that the issue occurs only when I have two things simultaneously: a clabel constraint and real-normal covariates on the edges. If I remove either of them, everything is fine; if I change the covariate type from normal to any other, again no looping. This specific clabel forces degenerate bounds in the bisection search (min_state = max_state), and the normal covariates somehow affect the entropy calculation.

As I understand it, the problem is a potentially wrong entropy calculation here:

https://git.skewed.de/count0/graph-tool/blob/master/src/graph_tool/inference/nested_blockmodel.py#L894

As we saw from the outputs, the code tries to replace (N=2, B=1) with (N=2, B=2) and gets a lower entropy.

Here

https://git.skewed.de/count0/graph-tool/blob/master/src/graph_tool/inference/nested_blockmodel.py#L480

I found that only in the case of normal covariates is there a subtraction in the entropy, hence a potentially smaller entropy for the more sophisticated model.
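If it is useful, here is a standalone sketch (my own illustration, not graph-tool's actual code) of why such a term can push the total down: the differential entropy of a normal distribution, 0.5 * log(2*pi*e*sigma^2), becomes negative once sigma is small, so a model that fits the edge covariates more tightly can contribute a negative term to the description length.

```python
import math

def normal_differential_entropy(sigma):
    # h = 0.5 * log(2*pi*e*sigma^2); negative when sigma < 1/sqrt(2*pi*e)
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

# A broad fit contributes a positive term, a tight fit a negative one,
# which is how a more sophisticated model can end up with lower entropy.
print(normal_differential_entropy(1.0))   # about 1.42
print(normal_differential_entropy(0.01))  # about -3.19
```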

Hope this helps.

Thank you,

Valeriy.
