> It's a sum in log-space, so it's incremental.
Can you show me in code how to use logsumexp in such a way that I don't need to store 6,000,000 * k floats in memory, where k is the number of sampling iterations?
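Just to check that I understand what "incremental" means here: is it something like the following, where each iteration's log-probabilities get folded into a single running array with logaddexp, so only one array of that size is ever kept? (Only a sketch; sample_log_probs is a stand-in for whatever produces one iteration's log-probabilities.)

    import numpy as np

    n_entries = 6_000_000   # one entry per candidate edge
    k = 1000                # number of sampling iterations

    rng = np.random.default_rng(0)

    def sample_log_probs():
        # stand-in for one sampling iteration's log-probabilities
        return np.log(rng.uniform(1e-6, 1.0, n_entries))

    # running log-sum-exp: only a single array of n_entries floats is kept
    acc = np.full(n_entries, -np.inf)
    for _ in range(k):
        acc = np.logaddexp(acc, sample_log_probs())

    log_mean = acc - np.log(k)   # log of the average probability over the k samples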
>
Most other link prediction methods only provide a ranking or scores of
the edges. In order to actually predict edges, all these methods require
you to determine how many edges should be actually placed.
In your setup you still end up with a probability in the end. The probability cutoff above which you consider something a "real missing edge" is still arbitrary, correct? With a binary classifier and whatever link prediction features you like, you also get probabilities in the end. For instance, and relating to the point of choosing that f ratio you mention, I counted the predictions above a few probability thresholds to get an estimate of the number of "actually missing" edges (just thresholding the classifier probabilities; see the small snippet after the counts). I've confirmed that these probabilities are fairly well calibrated by removing edges and re-predicting them, etc.:
39 with probability above 0.80
143 with probability above 0.70
318 with probability above 0.60
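These counts are just thresholding the classifier's predicted probabilities, along the lines of (probs being one probability per candidate non-edge; illustrative only):

    import numpy as np

    def count_above(probs, thresholds=(0.80, 0.70, 0.60)):
        # probs: classifier probabilities, one per candidate non-edge
        probs = np.asarray(probs)
        return {t: int((probs > t).sum()) for t in thresholds}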
(Funny note, the 20th most likely link prediction is davidlazer -> tiagopeixoto. This small network I'm testing with is my Twitter network.)
These are only out of the 129,971 non-edges that are at distance 2. So, roughly, it seems like ~100-400 missing edges is a reasonable estimate. The network reconstruction method with the defaults and 1000 sampling iterations gave me only 13 non-edges with non-zero probability, the highest of which has probability 0.07.
>
You should also try to consider using a different framework, where you
leave the missing edges "unobserved", instead of transforming them into
nonedges. The distinction is important, and changes the reconstruction
problem. You can achieve this with MeasuredBlockState by setting the
corresponding n value to zero.
Do you mean setting n_default=0? I just tried that, and it appears that almost all of my existing edges are missing from the marginal graph. Not sure what's going on there. I tried setting mu and nu to extreme values to control this(?), but it doesn't appear to help much. With alpha=1, beta=1, mu=1, nu=9999: 20,733 out of 20,911 of my original edges are missing. What's happening here?
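In case the problem is in how I'm setting things up, this is roughly what I'm running (adapted from the reconstruction howto; the helper names and defaults are my best reading of the docs, so correct me if I've misread anything):

    import graph_tool.all as gt

    # g is my Twitter graph; one measurement and one observation per existing edge
    n = g.new_ep("int", val=1)   # number of measurements per edge
    x = g.new_ep("int", val=1)   # number of positive observations per edge

    # n_default=0 leaves the non-edges "unobserved" rather than observed-as-absent
    state = gt.MeasuredBlockState(g, n=n, x=x, n_default=0, x_default=0,
                                  fn_params=dict(alpha=1, beta=1),
                                  fp_params=dict(mu=1, nu=9999))

    gt.mcmc_equilibrate(state, wait=100, mcmc_args=dict(niter=10))

    u = None
    def collect_marginals(s):
        global u
        u = s.collect_marginal(u)

    gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10),
                        callback=collect_marginals)

    eprob = u.ep.eprob   # posterior probability of each edge in the marginal graph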
>
A bound prior like this is not possible, but you can get something close
with the beta prior, where it is centered on a specific value, say 0.1
with an arbitrary variance.
With this I seem to, at least, be getting non-zero probabilities for more edges.
Let m be the number of non-edges found with non-zero probability, and n the number of edges in the original graph not found in the marginal graph. After 1000 sampling iterations:
alpha=10, beta=90, m=349, n=4
alpha=100, beta=900, m=2381, n=0
alpha=20, beta=80, m=632, n=17
alpha=200, beta=800, m=4031, n=10
Okay, so to summarize what's happening here: if alpha and beta are both 1, then p is uniform on [0,1], and the model appears to favor a very low p for my graph. By setting alpha and beta I can control the distribution of p. By keeping the ratio alpha/(alpha+beta) fixed while increasing alpha and beta, I decrease the variance of the prior, which forces p to stay higher (for the same mean), because otherwise it wants to go to the lower tail. Is that correct?
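(For my own sanity, these are just the standard Beta mean/variance formulas applied to the settings above, nothing graph-tool specific:)

    def beta_mean_var(alpha, beta):
        # mean and variance of a Beta(alpha, beta) distribution
        mean = alpha / (alpha + beta)
        var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
        return mean, var

    for a, b in [(10, 90), (100, 900), (20, 80), (200, 800)]:
        print(a, b, beta_mean_var(a, b))
    # (10, 90)   -> mean 0.10, var ~8.9e-4
    # (100, 900) -> mean 0.10, var ~9.0e-5
    # (20, 80)   -> mean 0.20, var ~1.6e-3
    # (200, 800) -> mean 0.20, var ~1.6e-4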
Is there any reason not to set something very aggressive like alpha=500, beta=500? I just tested this as a feature in my binary classifier with 5000 sampling iterations, and it performed much worse than the conditional prob setup (with only 100 sampling iterations). I expected this marginal prob setup to work comparably to the conditional prob setup. Thoughts?
Thanks for your help, as always