Still the same with the docker image I just pulled, but it does not seem to occur for the real-exponential model. I have opened an issue on the git repository and hope you can resolve it!

Thanks for the quick reply,
Katharina Baum, PhD
Postdoctoral Fellow, Proteome and Genome Research Unit

Luxembourg Institute of Health
Department of Oncology
1A-B, rue Thomas Edison, L-1445 Strassen
Fax: +352 26970-719



From: graph-tool <> on behalf of Tiago de Paula Peixoto <>
Sent: 26 March 2018 15:14
Subject: Re: [graph-tool] get_edges_prob() changes the entropy of the state object

On 26.03.2018 12:49, Katharina Baum wrote:
> I recently started using graph-tool: nice software, excellent
> documentation, thank you!
> However, I stumbled on some (for me) unexpected behavior when using the
> get_edges_prob() function with a BlockState of a weighted network
> (graph-tool version 2.26, Python 2.7 as well as Python 3.6). When calling
> get_edges_prob() on a state, its entropy is altered, and subsequent
> calls of get_edges_prob() deliver different results.
> "Luckily", I could reproduce the observed behavior with a dataset from the
> graph-tool collection (with arguably small alterations, although the
> introduced differences are a lot bigger in my networks).
> import graph_tool as gt
> import graph_tool.collection as gtc
> import graph_tool.inference as gti
> # load a collection dataset with a real-valued edge covariate; the
> # original message does not name the dataset used
> g = gtc.data['celegansneural']
> state = gti.minimize_blockmodel_dl(g, state_args=dict(recs=[g.ep.value], rec_types=['real-normal']))
> original_entropy = state.entropy()
> edge_prob = []
> for i in range(10000):
>     edge_prob.append(state.get_edges_prob(missing=[], spurious=[(0, 2)]))
> original_entropy
> state.entropy()  # entropy differs from the original!
> edge_prob[0]     # first call of get_edges_prob() delivers a different result than the last
> edge_prob[-1]
> For me, this is really unexpected. What is happening there, and how can
> this be fixed?
> Further small experiments showed that this also happens with
> NestedBlockStates (of course), but it seems not to happen for models
> lacking edge covariates...

This does indeed seem like a bug. I suspect it has to do with the "real-normal"
model. Can you open an issue on the website with the above example, so I can
keep track of this and fix it?

Could you also test this with "real-exponential" instead of "real-normal"
and also with the current git version?
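In the meantime, a quick way to pin down whether a query call mutates its state is to compare a summary statistic before and after many repeated calls. The sketch below uses a toy MockState in place of a real BlockState; the class and function names here are illustrative, not part of the graph-tool API:

```python
# Generic side-effect check: record the state's entropy before and after
# many calls of a query, and confirm every call returns the same value.
# MockState is a stand-in for a graph_tool BlockState; a real check would
# use state.entropy() and state.get_edges_prob(...).

class MockState:
    """Toy state whose buggy query mutates its own entropy."""

    def __init__(self):
        self._entropy = 1.0

    def entropy(self):
        return self._entropy

    def buggy_query(self):
        # mimics the reported bug: each call shifts the entropy
        self._entropy += 0.1
        return self._entropy

    def clean_query(self):
        # a pure query with no side effects
        return -2.5


def is_side_effect_free(state, query, n_calls=100):
    """True if `query` leaves state.entropy() unchanged and is
    deterministic across repeated calls."""
    before = state.entropy()
    results = [query(state) for _ in range(n_calls)]
    after = state.entropy()
    return before == after and all(r == results[0] for r in results)
```

With a real state, the query argument would be something like lambda s: s.get_edges_prob(missing=[], spurious=[(0, 2)]), which should report True if the bug is fixed.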


Tiago de Paula Peixoto <>
graph-tool mailing list