Hello,
I am trying to implement get_p_posterior() and get_q_posterior() for MixedMeasuredBlockState (on the Python side). I looked into the code for the same methods in MeasuredBlockState. The variable names there do not match the notation in the paper, but I think the following correspondence applies:
N_c = \mathcal{M}_p = \sum_{i < j} n_{ij} \\
M_c = \mathcal{E}_p = \sum_{i < j} n_{ij} A_{ij} \\
X_c = \mathcal{X}_p = \sum_{i < j} x_{ij} \\
T_c = \mathcal{T}_p = \sum_{i < j} x_{ij} A_{ij}
where the subscript c denotes a variable from the code and p a variable from the paper.
In the code, MeasuredBlockState.get_q_posterior returns
X - T + self.mu, N - X - (M - T) + self.nu
so what it is returning is basically
\sum_{i < j} x_{ij} - \sum_{i < j} x_{ij} A_{ij} + \mu, \quad \sum_{i < j} n_{ij} - \sum_{i < j} x_{ij} - \left(\sum_{i < j} n_{ij} A_{ij} - \sum_{i < j} x_{ij} A_{ij}\right) + \nu
which simplifies to
\sum_{i < j} x_{ij}(1 - A_{ij}) + \mu, \quad \sum_{i < j} (n_{ij} - x_{ij})(1 - A_{ij}) + \nu
I assume that a version of this function for MixedMeasuredBlockState should simply split the sum and return a vector of tuples, in the form
(x_{ij}(1 - A_{ij}) + \mu, \; (n_{ij} - x_{ij})(1 - A_{ij}) + \nu)
where n_{ij} and x_{ij} are defined on the original graph, while A_{ij} is the marginal edge probability obtained from collect_marginal.
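In code, the version I have in mind would look something like this sketch (plain numpy over the node pairs; the function name and the idea that n_ij, x_ij, A_ij can be pulled out of the state as flat arrays are my assumptions, not existing graph-tool API):

```python
import numpy as np

def get_q_posterior_per_edge(n, x, A, mu=1.0, nu=1.0):
    """Hypothetical per-pair q posterior for MixedMeasuredBlockState.

    n, x, A are 1-D arrays over the pairs i < j: measurement counts n_ij,
    positive observations x_ij, and marginal edge probabilities A_ij
    (e.g. from collect_marginal). Returns one (alpha, beta) pair of
    Beta-posterior parameters per node pair.
    """
    alpha = x * (1 - A) + mu          # "false positive" pseudo-counts
    beta = (n - x) * (1 - A) + nu     # "true negative" pseudo-counts
    return np.column_stack([alpha, beta])
```

When A is a 0/1 indicator, summing alpha - mu and beta - nu over all pairs recovers the two aggregate values that MeasuredBlockState.get_q_posterior returns.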
In an analogous manner, I guess MixedMeasuredBlockState.get_p_posterior should return a vector of tuples in the form
(x_{ij} A_{ij} + \alpha, \; (n_{ij} - x_{ij}) A_{ij} + \beta)
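A matching sketch for the p side, under the same assumptions about how the per-pair arrays would be obtained:

```python
import numpy as np

def get_p_posterior_per_edge(n, x, A, alpha=1.0, beta=1.0):
    """Hypothetical per-pair p posterior, mirroring the q version.

    Here the pseudo-counts are weighted by A_ij rather than 1 - A_ij:
    a positive observation on a true edge counts as a "true positive",
    a missing one as a "false negative".
    """
    a = x * A + alpha             # true-positive pseudo-counts
    b = (n - x) * A + beta        # false-negative pseudo-counts
    return np.column_stack([a, b])
```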
I hope I didn't completely botch the maths.
Questions:
- Is this the correct way of proceeding?
- To plot these results, do I just take the expected value for the distribution of each individual edge?
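For concreteness, by "expected value" I mean the posterior mean of each per-edge Beta distribution, i.e. a / (a + b). With made-up parameter values:

```python
import numpy as np

# Hypothetical per-edge Beta parameters (a_ij, b_ij); the values here
# are made up purely for illustration.
params = np.array([[2.0, 6.0],
                   [1.0, 1.0],
                   [4.0, 2.0]])

# The mean of a Beta(a, b) distribution is a / (a + b)
q_mean = params[:, 0] / params.sum(axis=1)  # one value per edge
```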