Dear Tiago/community,

I have been reading your 2015 article on layered networks, where you illustrate favourable model fits on layered formulation compared to a non-layered null.

But, I was wondering if you could please clarify the appropriate null model from which the comparisons in model entropy are then made?

Considering the following example, which is the example on the graph-tool website:

g = gt.collection.ns["new_guinea_tribes"]

#layered model

state = gt.minimize_nested_blockmodel_dl(
    g,
    state_args=dict(base_type=gt.LayeredBlockState,
                    state_args=dict(ec=g.ep.weight, layers=True)))

print(state.entropy())

##This gives me a model entropy of 166.92854839491955

# Presumably the null model is as follows: one simply does not pass a
# LayeredBlockState, and fits the model as if there were a single edge layer?

state_null = gt.minimize_nested_blockmodel_dl(g)

print(state_null.entropy())

##But this gives me a model entropy of 113.74058711521675

I’m a little confused by this: whilst the example on the website makes perfect sense, the model entropy seems to suggest I should favour the non-layered null?

Thank you for your clarification.

BW

James

Hi James,

I think that the main difference is that using the LayeredBlockState you do retrieve a community structure, whereas in the other case you don’t.

Also, given the size of the network, I wouldn’t read much into it, but maybe if you check the posterior distribution of partitions you will find that these are two competing explanations of the data.
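If it helps, graph-tool's model-selection documentation expresses the comparison between two fits as a posterior odds ratio, Λ = exp(−ΔΣ), where Σ is the description length returned by entropy() (in nats). Note the caveat, which I think is exactly the subtlety here: the comparison is only meaningful if both description lengths refer to the same data, and the layered fit also has to describe the layer labels. A quick sketch using the two entropies you quoted:

```python
import math

# Description lengths (nats) quoted earlier in the thread
sigma_layered = 166.92854839491955
sigma_flat = 113.74058711521675

# Posterior odds ratio P(layered)/P(flat), assuming equal prior odds
# and that both description lengths refer to the same data
delta = sigma_layered - sigma_flat
odds = math.exp(-delta)

print(f"delta = {delta:.2f} nats, odds = {odds:.2e}")
```

Taken at face value this would favour the flat model overwhelmingly, but again, only if the two Σ values were describing the same data.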

This is my interpretation (might be incorrect though).

Best,

Valerio


Hi Valerio,

Many thanks for getting back to me. I have re-read Tiago's paper (https://journals.aps.org/pre/abstract/10.1103/PhysRevE.92.042807) and now wonder whether the most appropriate null model for this example is instead one with random allocation of layers, as follows:

import numpy as np

g = gt.collection.ns["new_guinea_tribes"]

# layered model
state = gt.minimize_nested_blockmodel_dl(
    g,
    state_args=dict(base_type=gt.LayeredBlockState,
                    state_args=dict(ec=g.ep.weight, layers=True)))
print(state.entropy())

## null with no layers
state_null = gt.minimize_nested_blockmodel_dl(g)
print(state_null.entropy())

## null but with random layers
null_edge = g.new_edge_property("int")
rand = np.random.randint(len(np.unique(g.ep.weight.get_array())),
                         size=g.ep.weight.get_array().shape[0])
counter = 0
for e in g.edges():
    null_edge[e] = rand[counter]
    counter += 1

state_null_v2 = gt.minimize_nested_blockmodel_dl(
    g,
    state_args=dict(base_type=gt.LayeredBlockState,
                    state_args=dict(ec=null_edge, layers=True)))
print(state_null_v2.entropy())

This example now gives me an entropy difference favouring the layered model (state) over the null model with random layer allocation (state_null_v2).

Is that correct? Does that make sense?
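As an aside, an alternative to drawing each edge's layer uniformly at random would be a permutation null, i.e. shuffling the observed layer labels, which preserves the empirical layer frequencies; either way the per-edge loop can be vectorised. A sketch with plain numpy, where the hypothetical `weights` array stands in for `g.ep.weight.get_array()` (in graph-tool the result could then be assigned in one go via `null_edge.a = perm_null`):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for g.ep.weight.get_array(): one integer label per edge
weights = np.array([1, 1, 2, 3, 2, 1, 3, 2])

# Uniform null, as in the code above: each edge gets a layer drawn
# uniformly at random (labels are 0-based, which is fine, since the
# layers only need to be distinct)
uniform_null = rng.integers(len(np.unique(weights)), size=weights.shape[0])

# Permutation null: same labels in random order, so layer sizes are preserved
perm_null = rng.permutation(weights)

print(uniform_null, perm_null)
```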

BW

James
