This vignette covers valued network modeling in the ergm framework, with an emphasis on count data. Examples pertinent to rank data can be found in the vignette of the ergm.rank package.
network (Butts 2008) objects have three types of attributes: network attributes, vertex attributes, and edge attributes. An edge attribute is only defined for edges that exist in the network. Thus, in a manner of speaking, to set an edge value, one first has to create an edge and then set its attribute. As with network and vertex attributes, edge attributes that have been set can be listed with list.edge.attributes. Every network has at least one edge attribute: "na", which, if set to TRUE, marks an edge as missing.
There are several ways to create valued networks for use with ergm
. Here, we will demonstrate two of the most straightforward approaches.
The first dataset that we’ll be using is the (in)famous Sampson’s monks. Dataset samplk in package ergm contains three (binary) networks: samplk1, samplk2, and samplk3, containing the monks’ top-three friendship nominations at each of the three survey time points. We are going to construct a valued network that pools these nominations.
Method 1: From a sociomatrix

In many cases, a valued sociomatrix is available (or can easily be constructed). In this case, we’ll build one from the Sampson data.
library(ergm.count) # Also loads ergm.
data(samplk)
ls()
## [1] "samplk1" "samplk2" "samplk3"
as.matrix(samplk1)[1:5,1:5]
## John Bosco Gregory Basil Peter Bonaventure
## John Bosco 0 0 1 0 1
## Gregory 1 0 0 0 0
## Basil 1 1 0 0 0
## Peter 0 0 0 0 1
## Bonaventure 0 0 0 1 0
# Create a sociomatrix totaling the nominations.
samplk.tot.m<-as.matrix(samplk1)+as.matrix(samplk2)+as.matrix(samplk3)
samplk.tot.m[1:5,1:5]
## John Bosco Gregory Basil Peter Bonaventure
## John Bosco 0 1 2 0 2
## Gregory 3 0 0 0 0
## Basil 3 1 0 0 0
## Peter 0 0 0 0 3
## Bonaventure 1 0 0 3 0
# Create a network where the number of nominations becomes an attribute of an edge.
samplk.tot <- as.network(samplk.tot.m, directed=TRUE, matrix.type="a",
ignore.eval=FALSE, names.eval="nominations" # Important!
)
# Add vertex attributes. (Note that names were already imported!)
samplk.tot %v% "group" <- samplk1 %v% "group" # Groups identified by Sampson
samplk.tot %v% "group"
## [1] "Turks" "Turks" "Outcasts" "Loyal" "Loyal" "Loyal"
## [7] "Turks" "Waverers" "Loyal" "Waverers" "Loyal" "Turks"
## [13] "Waverers" "Turks" "Turks" "Turks" "Outcasts" "Outcasts"
# We can view the attribute as a sociomatrix.
as.matrix(samplk.tot,attrname="nominations")[1:5,1:5]
## John Bosco Gregory Basil Peter Bonaventure
## John Bosco 0 1 2 0 2
## Gregory 3 0 0 0 0
## Basil 3 1 0 0 0
## Peter 0 0 0 0 3
## Bonaventure 1 0 0 3 0
# Also, note that samplk.tot now has an edge if i nominated j *at least once*.
as.matrix(samplk.tot)[1:5,1:5]
## John Bosco Gregory Basil Peter Bonaventure
## John Bosco 0 1 1 0 1
## Gregory 1 0 0 0 0
## Basil 1 1 0 0 0
## Peter 0 0 0 0 1
## Bonaventure 1 0 0 1 0
Method 2: From an edgelist

Sociomatrices are simple to work with, but not very convenient for large, sparse networks. In the latter case, edgelists are often preferred. For our present case, suppose that instead of a sociomatrix we have an edgelist with values:
samplk.tot.el <- as.matrix(samplk.tot, attrname="nominations",
matrix.type="edgelist")
samplk.tot.el[1:5,]
## [,1] [,2] [,3]
## [1,] 2 1 3
## [2,] 3 1 3
## [3,] 5 1 1
## [4,] 6 1 2
## [5,] 7 1 1
# and an initial empty network.
samplk.tot2 <- samplk1 # Copy samplk1
delete.edges(samplk.tot2, seq_along(samplk.tot2$mel)) # Empty it out
samplk.tot2 #We could also have used network.initialize(18)
## Network attributes:
## vertices = 18
## directed = TRUE
## hyper = FALSE
## loops = FALSE
## multiple = FALSE
## bipartite = FALSE
## total edges= 0
## missing edges= 0
## non-missing edges= 0
##
## Vertex attribute names:
## cloisterville group vertex.names
##
## No edge attributes
samplk.tot2[samplk.tot.el[,1:2], names.eval="nominations", add.edges=TRUE] <-
samplk.tot.el[,3]
as.matrix(samplk.tot2,attrname="nominations")[1:5,1:5]
## John Bosco Gregory Basil Peter Bonaventure
## John Bosco 0 1 2 0 2
## Gregory 3 0 0 0 0
## Basil 3 1 0 0 0
## Peter 0 0 0 0 3
## Bonaventure 1 0 0 3 0
In general, the construction net[i,j, names.eval="attrname", add.edges=TRUE] <- value can be used to modify individual edge values for attribute "attrname". This way, we can also add more than one edge attribute to a network. Note that network objects can support an almost unlimited number of vertex, edge, or network attributes, and that these attributes can contain any data type. (Not all data types are compatible with all interface methods; see ?network and related documentation for more information.)
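For instance, the same construction can attach a second attribute to the already-created edges. The attribute name "t1" below is purely illustrative (it is not part of the dataset), and this sketch assumes the objects built above are in scope:

```r
# Hypothetical second edge attribute: whether each pooled edge was already
# present at the first time point. Values are looked up in samplk1's
# sociomatrix for the tail/head pairs in the edgelist.
samplk.tot2[samplk.tot.el[, 1:2], names.eval = "t1", add.edges = FALSE] <-
  as.matrix(samplk1)[samplk.tot.el[, 1:2]]
list.edge.attributes(samplk.tot2)  # now includes "t1" alongside "nominations"
```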
The other dataset we’ll be using is the almost-as-(in)famous Zachary’s Karate Club dataset. We will be employing here a collapsed multiplex network that counts the number of social contexts in which each pair of individuals associated with the Karate Club in question interacted. A total of 8 contexts were considered, but as the contexts themselves were determined by the network process, this limit can itself be argued to be endogenous.
Over the course of the study, the club split into two factions, one led by the instructor (“Mr. Hi”) and the other led by the Club President (“John A.”). Zachary also recorded the faction alignment of every regular attendee in the club. This dataset is included in the ergm.count package, as zach.
The plot method for network objects can be used to plot a sociogram of a network. When plotting a valued network, it is often useful to color the ties depending on their value. The function gray can be used to generate a gradient of colors, with gray(0) generating black and gray(1) generating white. This can then be passed to the edge.col argument of plot.network.
Sampson’s Monks

For the monks, let’s pass value data using a matrix.
par(mar=rep(0,4))
samplk.ecol <-
matrix(gray(1 - (as.matrix(samplk.tot, attrname="nominations")/3)),
nrow=network.size(samplk.tot))
plot(samplk.tot, edge.col=samplk.ecol, usecurve=TRUE, edge.curve=0.0001,
displaylabels=TRUE, vertex.col=as.factor(samplk.tot%v%"group"))
Edge color can also be passed as a vector of colors corresponding to edges. This is more efficient, but the ordering in the vector must correspond to the network object’s internal edge ordering, so it should be used with care. Note that we can also vary line width and/or transparency in the same manner:
par(mar=rep(0,4))
valmat<-as.matrix(samplk.tot,attrname="nominations") #Pull the edge values
samplk.ecol <-
matrix(rgb(0,0,0,valmat/3),
nrow=network.size(samplk.tot))
plot(samplk.tot, edge.col=samplk.ecol, usecurve=TRUE, edge.curve=0.0001,
displaylabels=TRUE, vertex.col=as.factor(samplk.tot%v%"group"),
edge.lwd=valmat^2)
plot.network has many display options that can be used to customize one’s data display; see ?plot.network for more.
Zachary’s Karate Club

In the following plot, we color those strongly aligned with Mr. Hi red, those aligned with John A. purple, those who are neutral green, and those weakly aligned with intermediate colors.
data(zach)
zach.ecol <- gray(1 - (zach %e% "contexts")/8)
zach.vcol <- rainbow(5)[zach %v% "faction.id"+3]
par(mar=rep(0,4))
plot(zach, edge.col=zach.ecol, vertex.col=zach.vcol, displaylabels=TRUE)
ergm.count
Many of the functions in package ergm, including ergm, simulate, and summary, have been extended to handle networks with valued relations. They switch into this “valued” mode when passed the response argument, which specifies the name of the edge attribute to use as the response variable. For example, a new valued term sum evaluates the sum of the values of all of the relations: \(\sum_{{{(i,j)}\in\mathbb{Y}}}\boldsymbol{y}_{i,j}\). So,
summary(samplk.tot~sum)
## Error: ERGM term 'sum' function 'InitErgmTerm.sum' not found.
produces an error (because no such term has been implemented for binary mode), while
summary(samplk.tot~sum, response="nominations")
## sum
## 168
gives the summary statistics. We will introduce more statistics shortly. First, we need to introduce the notion of valued ERGMs.
For a more in-depth discussion of the following, see Krivitsky (2012).
Valued ERGMs differ from standard ERGMs in two related ways. First, the support of a valued ERGM (unlike its unvalued counterpart) is over a set of valued graphs; this is a substantial difference from the unvalued case, as valued graph support sets (even for fixed \(N\)) are often infinite (or even uncountable). Secondly, in defining a valued ERGM one must specify the reference measure (or distribution) with respect to which the model is defined. (In the unvalued case, there is a generic way to do this, which we employ tacitly – that is no longer the case for general valued ERGMs.) We discuss some of these issues further below.
Notationally, a valued ERGM (for discrete variables) looks like this: \[\text{Pr}_{h,\boldsymbol{g}}(\boldsymbol{Y}=\boldsymbol{y};\boldsymbol{\theta})=\frac{h(\boldsymbol{y})\exp\left({\boldsymbol{\theta}}^\top{\boldsymbol{g}(\boldsymbol{y})}\right)}{\kappa_{h,\boldsymbol{g}}(\boldsymbol{\theta})},\quad {\boldsymbol{y}\in\mathcal{Y}},\] where \(\mathcal{Y}\) is the support. The normalizing constant is defined by \[\kappa_{h,\boldsymbol{g}}(\boldsymbol{\theta})=\sum_{\boldsymbol{y}\in\mathcal{Y}}h(\boldsymbol{y})\exp\left({\boldsymbol{\theta}}^\top{\boldsymbol{g}(\boldsymbol{y})}\right).\] The similarity with ERGMs in the unvalued case is evident, notwithstanding the above caveats.
New concept: a reference distribution

With binary ERGMs, we only concern ourselves with presence and absence of ties among actors — who is connected with whom? If we want to model values as well, we need to think about who is connected with whom and how strong or intense these connections are. In particular, we need to think about how the values for the connections we measure are distributed. The reference distribution (a reference measure, for the mathematically inclined) specifies the model for the data before we add any ERGM terms, and is the first step in modeling these values. The reference distribution is specified using a one-sided formula passed as the reference argument to an ergm or simulate call. Running
help("ergm-references")
will list the choices implemented in the various packages.
Conceptually, it has two ingredients: the sample space and the baseline distribution (\(h(\boldsymbol{y})\)). An ERGM that “borrows” these from a distribution \(X\) for which we have a name is called an \(X\)-reference ERGM.
The sample space For binary ERGMs, the sample space (or support) \(\mathcal{Y}\) — the set of possible networks that can occur — is usually some subset of \(2^\mathbb{Y}\), the set of all possible ways in which relationships among the actors may occur.
For the sample space of valued ERGMs, we need to define \(\mathbb{S}\), the set of possible values each relationship may take. For example, for count data, that’s \(\mathbb{S}=\{0,1,\dotsc, s \}\) if the maximum count is \( s \) and \(\{0,1,\dotsc\}\) if there is no a priori upper bound. Having specified that, \(\mathcal{Y}\) is defined as some subset of \(\mathbb{S}^\mathbb{Y}\): the set of possible ways to assign to each relationship a value.
As with binary ERGMs, other constraints like degree distribution may be imposed on \(\mathcal{Y}\).
\(h(\boldsymbol{y})\): The baseline distribution

What difference does it make? Suppose that we have a sample space with \(\mathbb{S}=\{0,1,2,3\}\) (e.g., the number of monk–monk nominations) and a single ERGM term, the sum of the values of all relations, \(\sum_{{{(i,j)}\in\mathbb{Y}}}\boldsymbol{y}_{i,j}\): \[\text{Pr}_{h,\boldsymbol{g}}(\boldsymbol{Y}=\boldsymbol{y};\boldsymbol{\theta})\propto h(\boldsymbol{y})\exp\left(\boldsymbol{\theta} \sum_{{{(i,j)}\in\mathbb{Y}}}\boldsymbol{y}_{i,j}\right).\] There are two values for \(h(\boldsymbol{y})\) that might be familiar:
- \(h(\boldsymbol{y})\propto 1\), which yields a discrete-uniform reference, and
- \(h(\boldsymbol{y})\propto\prod_{{{(i,j)}\in\mathbb{Y}}}\binom{3}{\boldsymbol{y}_{i,j}}\), which yields a binomial reference (here with 3 trials).
What do they look like? Let’s simulate!
y <- network.initialize(2,directed=FALSE) # A network with one dyad!
## Discrete Uniform reference
# 0 coefficient: discrete uniform
sim.du3<-simulate(y~sum, coef=0, reference=~DiscUnif(0,3),
response="w",output="stats",nsim=1000)
# Negative coefficient: truncated geometric skewed to the right
sim.trgeo.m1<-simulate(y~sum, coef=-1, reference=~DiscUnif(0,3),
response="w",output="stats",nsim=1000)
# Positive coefficient: truncated geometric skewed to the left
sim.trgeo.p1<-simulate(y~sum, coef=+1, reference=~DiscUnif(0,3),
response="w",output="stats",nsim=1000)
# Plot them:
par(mfrow=c(1,3))
hist(sim.du3,breaks=diff(range(sim.du3))*4)
hist(sim.trgeo.m1,breaks=diff(range(sim.trgeo.m1))*4)
hist(sim.trgeo.p1,breaks=diff(range(sim.trgeo.p1))*4)
## Binomial reference
# 0 coefficient: Binomial(3,1/2)
sim.binom3<-simulate(y~sum, coef=0, reference=~Binomial(3),
response="w",output="stats",nsim=1000)
# -1 coefficient: Binomial(3, exp(-1)/(1+exp(-1)))
sim.binom3.m1<-simulate(y~sum, coef=-1, reference=~Binomial(3),
response="w",output="stats",nsim=1000)
# +1 coefficient: Binomial(3, exp(1)/(1+exp(1)))
sim.binom3.p1<-simulate(y~sum, coef=+1, reference=~Binomial(3),
response="w",output="stats",nsim=1000)
# Plot them:
par(mfrow=c(1,3))
hist(sim.binom3,breaks=diff(range(sim.binom3))*4)
hist(sim.binom3.m1,breaks=diff(range(sim.binom3.m1))*4)
hist(sim.binom3.p1,breaks=diff(range(sim.binom3.p1))*4)
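As a sanity check on these simulations, the theoretical dyad distribution is known in closed form: under the Binomial(3) reference with sum coefficient \(\theta\), the single dyad here is Binomial(3, logit\(^{-1}(\theta)\)) (as the comments above indicate), so its expected value can be computed directly in base R:

```r
# Expected value of the "sum" statistic for one dyad under the Binomial(3)
# reference with sum coefficient theta: 3 * inverse-logit(theta).
theta <- c(0, -1, 1)
round(3 * plogis(theta), 3)  # approximately 1.500, 0.807, 2.193
```

These values should be close to the means of sim.binom3, sim.binom3.m1, and sim.binom3.p1, respectively.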
Now, suppose that we don’t have an a priori upper bound on the counts — \(\mathbb{S}=\{0,1,\dotsc\}\) — then there are two familiar reference distributions:
- \(h(\boldsymbol{y})\propto 1\), yielding the geometric reference, and
- \(h(\boldsymbol{y})\propto\prod_{{{(i,j)}\in\mathbb{Y}}}1/\boldsymbol{y}_{i,j}!\), yielding the Poisson reference.
sim.geom<-simulate(y~sum, coef=log(2/3), reference=~Geometric,
response="w",output="stats",nsim=1000)
mean(sim.geom)
## [1] 2.047
sim.pois<-simulate(y~sum, coef=log(2), reference=~Poisson,
response="w",output="stats",nsim=1000)
mean(sim.pois)
## [1] 1.933
Similar means. But, what do they look like?
par(mfrow=c(1,2))
hist(sim.geom,breaks=diff(range(sim.geom))*4)
hist(sim.pois,breaks=diff(range(sim.pois))*4)
Where did log(2) and log(2/3) come from? Later.
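(As a quick numeric preview, using the standard facts that a geometric-reference ERGM with sum coefficient \(\theta\) yields a geometric distribution with mean \(e^{\theta}/(1-e^{\theta})\) per dyad, and a Poisson-reference one yields a Poisson with mean \(e^{\theta}\), both coefficients appear to have been chosen to give a mean of 2:)

```r
# Geometric reference, coefficient log(2/3): mean is exp(theta)/(1 - exp(theta)).
q <- exp(log(2/3))
q / (1 - q)   # = 2, consistent with mean(sim.geom) of about 2.05
# Poisson reference, coefficient log(2): mean is exp(theta).
exp(log(2))   # = 2, consistent with mean(sim.pois) of about 1.93
```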
Warning: Parameter space constraints

What happens if we simulate from a geometric-reference ERGM with all coefficients set to 0?
par(mfrow=c(1,1))
sim.geo0<-simulate(y~sum, coef=0, reference=~Geometric,
response="w",output="stats",nsim=100,
control=control.simulate(MCMC.burnin=0,MCMC.interval=1))
mean(sim.geo0)
## [1] 1628700
plot(c(sim.geo0),xlab="MCMC iteration",ylab="Value of the tie")
Why does it do that? Because \[ \text{Pr}_{h,\boldsymbol{g}}(\boldsymbol{Y}=\boldsymbol{y};\boldsymbol{\theta})=\frac{\exp\left(\boldsymbol{\theta}\sum_{{{(i,j)}\in\mathbb{Y}}}\boldsymbol{y}_{i,j}\right)}{\kappa_{h,\boldsymbol{g}}(\boldsymbol{\theta})} \] is not a valid distribution for \(\boldsymbol{\theta}\ge 0\), because \(\kappa_{h,\boldsymbol{g}}(\boldsymbol{\theta})=\infty\). Using reference=~Geometric can be dangerous for this reason. This issue only arises with ERGMs that have an infinite sample space.
GLM-style terms

Many of the familiar ERGM effects can be modeled using the very same terms in the valued case, but applied a little differently.
Any dyad-independent binary ERGM statistic can be expressed as \(\boldsymbol{g}_{\text{k}}=\sum_{{{(i,j)}\in\mathbb{Y}}}\boldsymbol{x}_{k,i,j}\boldsymbol{y}_{i,j}\) for some covariate matrix \(\boldsymbol{x}_k\). If \(\boldsymbol{y}_{i,j}\) is allowed to have values other than \(0\) and \(1\), then simply using such a term in a Poisson-reference ERGM creates the familiar log-linear effect. Similarly, in a Binomial-reference ERGM, such terms produce an effect on log-odds of a success.
The good news is that almost every dyad-independent ergm term has been reimplemented to allow this. The valued behavior is invoked by specifying the “form="sum"” argument for one of the terms inherited from binary ERGMs, though this is not required, as it’s the default. Also, note that for valued ERGMs, the “intercept” term is sum, not edges.
help("ergm-terms") has the complete list across all the loaded packages. In particular, the one in package ergm tags each term with whether it’s binary or valued.
Example: Sampson’s Monks

For example, we can fit the equivalent of a logistic regression on the probability of nomination, with every ordered pair of monks observed 3 times. We will look at differential homophily on group. That is, \(\boldsymbol{Y}\!_{i,j}{\stackrel{\mathrm{ind.}}{\sim}}\, \text{Binomial}(3,\boldsymbol{\pi}_{i,j})\) where \[ \begin{align*} \text{logit}(\boldsymbol{\pi}_{i,j}) & = \boldsymbol{\beta}_1 + \boldsymbol{\beta}_2 \mathbb{I}\left(\text{$i$ and $j$ are both in the Loyal Opposition}\right) \\ & + \boldsymbol{\beta}_3 \mathbb{I}\left(\text{$i$ and $j$ are both Outcasts}\right) + \boldsymbol{\beta}_4 \mathbb{I}\left(\text{$i$ and $j$ are both Young Turks}\right) \\ & + \boldsymbol{\beta}_5 \mathbb{I}\left(\text{$i$ and $j$ are both Waverers}\right) \end{align*} \]
samplk.tot.nm <-
ergm(samplk.tot~sum + nodematch("group",diff=TRUE,form="sum"),
response="nominations", reference=~Binomial(3))
mcmc.diagnostics(samplk.tot.nm)
Note that it looks like it’s fitting the model twice. This is because the first run uses an approximation technique called contrastive divergence to find a good starting value for the MLE fit.
summary(samplk.tot.nm)
## Call:
## ergm(formula = samplk.tot ~ sum + nodematch("group", diff = TRUE,
## form = "sum"), response = "nominations", reference = ~Binomial(3))
##
## Monte Carlo Maximum Likelihood Results:
##
## Estimate Std. Error MCMC % z value Pr(>|z|)
## sum -2.3203 0.1349 0 -17.201 <1e-04 ***
## nodematch.sum.group.Loyal 2.2540 0.2874 0 7.842 <1e-04 ***
## nodematch.sum.group.Outcasts 3.5776 0.5606 0 6.381 <1e-04 ***
## nodematch.sum.group.Turks 2.2055 0.2250 0 9.802 <1e-04 ***
## nodematch.sum.group.Waverers 1.0636 0.5783 0 1.839 0.0659 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Null Deviance: 0.0 on 306 degrees of freedom
## Residual Deviance: -560.8 on 301 degrees of freedom
##
## Note that the null model likelihood and deviance are defined to be 0.
## This means that all likelihood-based inference (LRT, Analysis of
## Deviance, AIC, BIC, etc.) is only valid between models with the same
## reference distribution and constraints.
##
## AIC: -550.8 BIC: -532.1 (Smaller is better. MC Std. Err. = 4.147)
Based on this, we can say that the odds of a monk nominating another monk not in the same group during a given time step are \(\exp\left(\boldsymbol{\beta}_1\right)=\exp\left(-2.3203\right)=0.0982\), that the odds of a Loyal Opposition monk nominating another Loyal Opposition monk are \(\exp\left(\boldsymbol{\beta}_2\right)=\exp\left(2.254\right)=9.5259\) times higher, etc.
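(These quantities are just exponentiated estimates; reproducing them from the printed coefficients is a one-liner, and with the fitted object in hand the same can be done in one step:)

```r
# Odds and odds multipliers implied by the estimates above.
exp(-2.3203)  # odds of a between-group nomination per time step, about 0.098
exp(2.2540)   # multiplier for two Loyal Opposition monks, about 9.53
# Or, directly from the fit: exp(coef(samplk.tot.nm))
```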
Example: Zachary’s Karate Club

We will use a Poisson log-linear model for the number of contexts in which each pair of individuals interacted, as a function of whether each individual is a faction leader (Mr. Hi or John A.). That is, \(\boldsymbol{Y}\!_{i,j}{\stackrel{\mathrm{ind.}}{\sim}}\text{Poisson}(\boldsymbol{\mu}_{i,j})\) where \[ \log(\boldsymbol{\mu}_{i,j})=\boldsymbol{\beta}_1 + \boldsymbol{\beta}_2 (\mathbb{I}\left(\text{$i$ is a faction leader}\right) + \mathbb{I}\left(\text{$j$ is a faction leader}\right)). \]
We will do this by constructing a dummy variable, a vertex attribute "leader":
unique(zach %v% "role")
## [1] "Instructor" "Member" "President"
# Vertex attr. "leader" is TRUE for Hi and John, FALSE for others.
zach %v% "leader" <- zach %v% "role" != "Member"
zach.lead <-
ergm(zach~sum + nodefactor("leader"),
response="contexts", reference=~Poisson)
mcmc.diagnostics(zach.lead)
NB: We could also write “nodefactor(~role!="Member")” to get the same result. This is new in ergm 3.10.
summary(zach.lead)
## Call:
## ergm(formula = zach ~ sum + nodefactor("leader"), response = "contexts",
## reference = ~Poisson)
##
## Monte Carlo Maximum Likelihood Results:
##
## Estimate Std. Error MCMC % z value Pr(>|z|)
## sum -1.22260 0.08312 0 -14.71 <1e-04 ***
## nodefactor.sum.leader.TRUE 1.43777 0.12033 0 11.95 <1e-04 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Null Deviance: 0.0 on 561 degrees of freedom
## Residual Deviance: -357.5 on 559 degrees of freedom
##
## Note that the null model likelihood and deviance are defined to be 0.
## This means that all likelihood-based inference (LRT, Analysis of
## Deviance, AIC, BIC, etc.) is only valid between models with the same
## reference distribution and constraints.
##
## AIC: -353.5 BIC: -344.8 (Smaller is better. MC Std. Err. = 2.83)
Based on this, we can say that the expected number of contexts of interaction between two non-leaders is \(\exp\left(\boldsymbol{\beta}_1\right)=\exp\left(-1.2226\right)=0.2945\), that the expected number of contexts of interaction between a leader and a non-leader is \(\exp\left(\boldsymbol{\beta}_2\right)=\exp\left(1.4378\right)=4.2113\) times higher, and that the expected number of contexts of interaction between the two leaders is \(\exp\left(2\boldsymbol{\beta}_2\right)=\exp\left(2\cdot 1.4378\right)=17.7351\) times higher than that between two non-leaders. (Because the leaders were hostile to each other, this may not be a very good prediction.)
Sparsity and zero-modification

It is often the case that in networks of counts, the network is sparse, yet if two actors do interact, their interaction count is relatively high. This amounts to zero-inflation.
We can model this using the binary-ERGM-based term nonzero (\(\boldsymbol{g}_{\text{k}}=\sum_{{{(i,j)}\in\mathbb{Y}}}\mathbb{I}\left(\boldsymbol{y}_{i,j}\ne 0\right)\)) and GLM-style terms with the argument form="nonzero": \(\boldsymbol{g}_{\text{k}}=\sum_{{{(i,j)}\in\mathbb{Y}}}\boldsymbol{x}_{k,i,j}\mathbb{I}\left(\boldsymbol{y}_{i,j}\ne 0\right)\). For example,
samplk.tot.nm.nz <-
ergm(samplk.tot~sum + nonzero + nodematch("group",diff=TRUE,form="sum"),
response="nominations", reference=~Binomial(3))
mcmc.diagnostics(samplk.tot.nm.nz)
summary(samplk.tot.nm.nz)
## Call:
## ergm(formula = samplk.tot ~ sum + nonzero + nodematch("group",
## diff = TRUE, form = "sum"), response = "nominations", reference = ~Binomial(3))
##
## Monte Carlo Maximum Likelihood Results:
##
## Estimate Std. Error MCMC % z value Pr(>|z|)
## sum -0.3133 0.1979 0 -1.583 0.113478
## nonzero -3.0015 0.3256 0 -9.218 < 1e-04 ***
## nodematch.sum.group.Loyal 1.2233 0.2194 0 5.574 < 1e-04 ***
## nodematch.sum.group.Outcasts 1.8847 0.5143 0 3.664 0.000248 ***
## nodematch.sum.group.Turks 1.1685 0.1753 0 6.667 < 1e-04 ***
## nodematch.sum.group.Waverers 0.5899 0.4480 0 1.317 0.187931
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Null Deviance: 0.0 on 306 degrees of freedom
## Residual Deviance: -652.3 on 300 degrees of freedom
##
## Note that the null model likelihood and deviance are defined to be 0.
## This means that all likelihood-based inference (LRT, Analysis of
## Deviance, AIC, BIC, etc.) is only valid between models with the same
## reference distribution and constraints.
##
## AIC: -640.3 BIC: -618 (Smaller is better. MC Std. Err. = 2.819)
fits a zero-modified binomial model. The coefficient on the number of non-zero relations, \(-3.0015\), is negative and highly significant, indicating that there is an excess of zeros in the data relative to the binomial distribution, given the rest of the model.
Other thresholds

The following terms compute the number of dyads \({(i,j)}\) whose values \(\boldsymbol{y}_{i,j}\) fulfil their respective conditions: atleast(threshold=0), atmost(threshold=0), equalto(value=0, tolerance=0), greaterthan(threshold=0), ininterval(lower=-Inf, upper=+Inf, open=c(TRUE,TRUE)), and smallerthan(threshold=0).
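For instance, a quick sketch (assuming ergm.count is loaded and samplk.tot built as above):

```r
# Count ordered pairs with at least 2 nominations, and with exactly 3.
summary(samplk.tot ~ atleast(2) + equalto(3), response = "nominations")
```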
Dispersion

Similarly, even if we use the Poisson as a starting distribution, the counts might be overdispersed or underdispersed relative to it. For now, ergm offers two ways to model this:
- Conway–Maxwell–Poisson: add the CMP term to a Poisson- or geometric-reference ERGM.
- Fractional moments: add the sum(pow=1/2) term to a Poisson-reference ERGM; together with sum, this controls the first two moments of \(\sqrt{\boldsymbol{y}_{i,j}}\).

Mutuality

The binary ergm mutuality statistic has the form \(\boldsymbol{g}_\leftrightarrow=\sum_{{{(i,j)}\in\mathbb{Y}}}\boldsymbol{y}_{i,j}\boldsymbol{y}_{j,i}\). It turns out that directly plugging counts into that statistic is a bad idea. mutual(form) is a valued ERGM term, permitting the following generalizations:
- "geometric": \(\sum_{{{(i,j)}\in\mathbb{Y}}}\sqrt{\boldsymbol{y}_{i,j}\boldsymbol{y}_{j,i}}\) — can be viewed as the uncentered covariance of variance-stabilized counts
- "min": \(\sum_{{{(i,j)}\in\mathbb{Y}}}\min(\boldsymbol{y}_{i,j},\boldsymbol{y}_{j,i})\) — easiest to interpret
- "nabsdiff": \(\sum_{{{(i,j)}\in\mathbb{Y}}}-\lvert\boldsymbol{y}_{i,j}-\boldsymbol{y}_{j,i}\rvert\)

The figure below visualizes their effects.
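To make the three forms concrete, here is a small base-R sketch that computes each statistic by hand on a toy valued sociomatrix, summing over unordered pairs:

```r
# Toy 3-actor directed count matrix.
y <- matrix(c(0, 2, 1,
              1, 0, 3,
              0, 2, 0), nrow = 3, byrow = TRUE)
ij <- which(upper.tri(y), arr.ind = TRUE)  # unordered pairs {i, j}
a <- y[ij]       # y[i, j] for i < j
b <- t(y)[ij]    # y[j, i] for i < j
c(geometric = sum(sqrt(a * b)),   # sum of sqrt(y_ij * y_ji)
  min       = sum(pmin(a, b)),    # sum of min(y_ij, y_ji)
  nabsdiff  = -sum(abs(a - b)))   # sum of -|y_ij - y_ji|
```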
Individual heterogeneity

Different actors may have different overall propensities to interact. This has been modeled using random effects (as in the \(p_2\) model) and using degeneracy-prone terms like \(k\)-star counts. ergm implements a number of statistics to model it, but the one that appears to work best so far is \[\boldsymbol{g}_{\text{actor cov.}}(\boldsymbol{y})=\sum_{i\in N}\frac{1}{n-2}\sum_{j,k\in \mathbb{Y}_{i}\land j<k}(\sqrt{\boldsymbol{y}_{i,j}}-\overline{\sqrt{\boldsymbol{y}}})(\sqrt{\boldsymbol{y}_{i,k}}-\overline{\sqrt{\boldsymbol{y}}}),\] essentially a measure of covariance between the \(\sqrt{\boldsymbol{y}_{i,j}}\)s incident on the same actor. The term nodecovar implements it.
Triadic closure

Finally, to generalize the notion of triadic closure, ergm implements the very flexible transitiveweights(twopath, combine, affect) and similar cyclicalweights statistics. The transitive weight statistic has the following general form: \[\boldsymbol{g}_{\boldsymbol{v}}(\boldsymbol{y})=\sum_{{{(i,j)}\in\mathbb{Y}}}v_{\text{affect}}\left(\boldsymbol{y}_{i,j},v_{\text{combine}}\left(v_{\text{2-path}}(\boldsymbol{y}_{i,k},\boldsymbol{y}_{k,j})_{k\in N\backslash\{i,j\}}\right)\right),\] and can be “customized” by varying the three functions:

\(v_{\text{2-path}}\): Given \(\boldsymbol{y}_{i,k}\) and \(\boldsymbol{y}_{k,j}\), what is the strength of the two-path they form?
- "min": the minimum of their values — conservative
- "geomean": their geometric mean — more able to detect effects, but more likely to cause “degeneracy”

\(v_{\text{combine}}\): Given the strengths of the two-paths \(\boldsymbol{y}_{i\to k\to j}\) for all \(k\ne i,j\), what is the combined strength of these two-paths between \(i\) and \(j\)?
- "max": the strength of the strongest path — conservative; analogous to transitiveties
- "sum": the sum of the path strengths — more able to detect effects, but more likely to cause “degeneracy”; analogous to triangles

\(v_{\text{affect}}\): Given the combined strength of the two-paths between \(i\) and \(j\), how should they affect \(\boldsymbol{Y}\!_{i,j}\)?
- "min": conservative
- "geomean": more able to detect effects, but more likely to cause “degeneracy”

These effects are analogous to mutuality.
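As a concrete (and deliberately naive) base-R illustration of the formula with the "min", "max", "min" combination used below, computed dyad by dyad on a toy matrix:

```r
# transitiveweights("min","max","min"), computed directly from its definition:
# each two-path i->k->j has strength min(y[i,k], y[k,j]); two-paths are
# combined with max(); the result is capped at y[i,j] with min().
y <- matrix(c(0, 2, 1,
              0, 0, 3,
              0, 0, 0), nrow = 3, byrow = TRUE)
n <- nrow(y); stat <- 0
for (i in 1:n) for (j in 1:n) if (i != j) {
  ks <- setdiff(1:n, c(i, j))
  paths <- pmin(y[i, ks], y[ks, j])        # v_2-path = "min"
  stat <- stat + min(y[i, j], max(paths))  # v_combine = "max", v_affect = "min"
}
stat  # only the 1->2->3 two-path supports the 1->3 tie, contributing 1
```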
Sampson’s Monks

Suppose that we want to fit a model with a zero-modified binomial baseline, mutuality, transitive (hierarchical) triads, and cyclical (antihierarchical) triads to this dataset:
samplk.tot.ergm <-
ergm(samplk.tot ~ sum + nonzero + mutual("min") +
transitiveweights("min","max","min") +
cyclicalweights("min","max","min"),
reference=~Binomial(3), response="nominations")
mcmc.diagnostics(samplk.tot.ergm)
summary(samplk.tot.ergm)
## Call:
## ergm(formula = samplk.tot ~ sum + nonzero + mutual("min") + transitiveweights("min",
## "max", "min") + cyclicalweights("min", "max", "min"), response = "nominations",
## reference = ~Binomial(3))
##
## Monte Carlo Maximum Likelihood Results:
##
## Estimate Std. Error MCMC % z value Pr(>|z|)
## sum -0.03358 0.20012 0 -0.168 0.8667
## nonzero -3.47406 0.31988 0 -10.861 <1e-04 ***
## mutual.min 1.37540 0.23956 0 5.741 <1e-04 ***
## transitiveweights.min.max.min 0.20833 0.17173 0 1.213 0.2251
## cyclicalweights.min.max.min -0.24477 0.14845 0 -1.649 0.0992 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Null Deviance: 0 on 306 degrees of freedom
## Residual Deviance: -608 on 301 degrees of freedom
##
## Note that the null model likelihood and deviance are defined to be 0.
## This means that all likelihood-based inference (LRT, Analysis of
## Deviance, AIC, BIC, etc.) is only valid between models with the same
## reference distribution and constraints.
##
## AIC: -598 BIC: -579.4 (Smaller is better. MC Std. Err. = 2.662)
What does it tell us? The negative coefficient on nonzero suggests zero-inflation; there is strong evidence of mutuality; and the positive coefficient on the transitive weights and the negative coefficient on the cyclical weights suggest hierarchy, though neither is significant.
Zachary’s Karate Club

Now, let’s try using the Poisson reference to model the Zachary Karate Club data: a zero-modified Poisson model, with potentially different levels of activity for the faction leaders, heterogeneity in actor activity overall, and an effect of the difference in faction membership. The model looks like this:
summary(zach~sum+nonzero+nodefactor("leader")+absdiffcat("faction.id")+
nodecovar(center=TRUE,transform="sqrt"), response="contexts")
## sum nonzero
## 231.00000 78.00000
## nodefactor.sum.leader.TRUE absdiff.sum.faction.id.1
## 90.00000 74.00000
## absdiff.sum.faction.id.2 absdiff.sum.faction.id.3
## 12.00000 11.00000
## absdiff.sum.faction.id.4 nodecovar
## 10.00000 16.17248
A few other notes:
- The MCMC proposal takes an argument, MCMC.prop.args=list(p0=...), that can be used to control how much “zero-inflation” there is; it has a sensible default.

Now, for the fit and the diagnostics:
zach.pois <-
ergm(zach~sum+nonzero+nodefactor("leader")+absdiffcat("faction.id")+
nodecovar(center=TRUE,transform="sqrt"),
response="contexts", reference=~Poisson, verbose=TRUE, control=snctrl(seed=0))
mcmc.diagnostics(zach.pois)
summary(zach.pois)
## Call:
## ergm(formula = zach ~ sum + nonzero + nodefactor("leader") +
## absdiffcat("faction.id") + nodecovar(center = TRUE, transform = "sqrt"),
## response = "contexts", reference = ~Poisson, control = snctrl(seed = 0),
## verbose = TRUE)
##
## Monte Carlo Maximum Likelihood Results:
##
## Estimate Std. Error MCMC % z value Pr(>|z|)
## sum 0.98510 0.08305 0 11.862 < 1e-04 ***
## nonzero -3.84957 0.25206 0 -15.273 < 1e-04 ***
## nodefactor.sum.leader.TRUE 0.20741 0.06516 0 3.183 0.001458 **
## absdiff.sum.faction.id.1 -0.09839 0.07769 0 -1.266 0.205339
## absdiff.sum.faction.id.2 -0.66447 0.19301 0 -3.443 0.000576 ***
## absdiff.sum.faction.id.3 -0.88509 0.22191 0 -3.989 < 1e-04 ***
## absdiff.sum.faction.id.4 -1.14910 0.24420 0 -4.705 < 1e-04 ***
## nodecovar 1.81566 0.22021 0 8.245 < 1e-04 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Null Deviance: 0.0 on 561 degrees of freedom
## Residual Deviance: -838.8 on 553 degrees of freedom
##
## Note that the null model likelihood and deviance are defined to be 0.
## This means that all likelihood-based inference (LRT, Analysis of
## Deviance, AIC, BIC, etc.) is only valid between models with the same
## reference distribution and constraints.
##
## AIC: -822.8 BIC: -788.1 (Smaller is better. MC Std. Err. = 1.336)
What does it tell us? The negative coefficient on nonzero
suggests zero-inflation; the faction leaders clearly have more activity than the others; and the more ideologically separated two individuals are, the less they interact. Over and above that, there is some additional heterogeneity in how active individuals are: if \(i\) has a lot of interaction with \(j\), it is likely that \(i\) also has a lot with \(j'\). Could this be a form of preferential attachment?
We can try seeing whether there is some friend-of-a-friend effect above and beyond that. This can be done by fitting a model with a transitivity term and seeing whether its coefficient is significant, or we can perform a simulation test. In the following:
- simulate
unpacks the zach.pois
ERGM fit, extracting the formula, the coefficients, and the rest of the needed information.
- nsim
says how many networks to generate.
- output="stats"
says that we only want to see the simulated statistics, not the networks themselves.
- monitor=~transitiveweights("geomean","sum","geomean")
says that, in addition to the statistics used in the fit, we want simulate
to keep track of the transitive weights statistic.
We do not need to worry about degeneracy in this case, because we are not actually using that statistic in the model, only “monitoring” it.
# Simulate from model fit:
zach.sim <-
simulate(zach.pois, monitor=~transitiveweights("geomean","sum","geomean"),
nsim = 1000, output="stats")
# What have we simulated?
colnames(zach.sim)
## [1] "sum"
## [2] "nonzero"
## [3] "nodefactor.sum.leader.TRUE"
## [4] "absdiff.sum.faction.id.1"
## [5] "absdiff.sum.faction.id.2"
## [6] "absdiff.sum.faction.id.3"
## [7] "absdiff.sum.faction.id.4"
## [8] "nodecovar"
## [9] "transitiveweights.geomean.sum.geomean"
# How high is the transitiveweights statistic in the observed network?
zach.obs <- summary(zach ~ transitiveweights("geomean","sum","geomean"),
response="contexts")
zach.obs
## transitiveweights.geomean.sum.geomean
## 288.9793
Let’s plot the density of the simulated values of the transitive weights statistic:
par(mar=c(5, 4, 4, 2) + 0.1)
# 9th col. = transitiveweights
plot(density(zach.sim[,9]))
abline(v=zach.obs)
# Where does the observed value lie in the simulated?
# This is a p-value for the Monte-Carlo test:
min(mean(zach.sim[,9]>zach.obs), mean(zach.sim[,9]<zach.obs))*2
## [1] 0.566
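The p-value computation above uses only base R. As a self-contained toy illustration of the same two-sided Monte Carlo test (with made-up numbers, not the Zachary statistics):

```r
# Toy two-sided Monte Carlo test: compare an "observed" value to a
# null distribution of simulated statistics (all numbers invented).
set.seed(1)
sim.stats <- rpois(1000, lambda = 10)  # stand-in for simulated statistics
obs.stat  <- 12                        # stand-in for the observed value
p.value   <- min(mean(sim.stats > obs.stat),
                 mean(sim.stats < obs.stat)) * 2
p.value  # a proportion in [0, 1]; small values indicate a poor fit
```

Because each tail proportion is at most 1 and the smaller one is doubled, the result always lies in [0, 1].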
Looks like individual heterogeneity and faction alignment account for the appearance of triadic effects. (Notably, the factions themselves may be endogenous if social influence is a factor. Untangling selection from influence is hard enough when dynamic network data are available; we cannot do it here.)
Missing (NA
) edges are handled automatically for valued ERGMs (as they are for regular ERGMs).
ergm
has an argument eval.loglik
, which is TRUE
by default. For valued ERGMs, evaluating the log-likelihood is quite a bit less efficient than for binary ERGMs, at least for now. So, unless you need the AICs or BICs to compare models, and especially if your networks are not small, pass eval.loglik=FALSE
.
References
Butts, Carter T. 2008. “Network: A Package for Managing Relational Data in R.” Journal of Statistical Software 24 (2): 1–36. https://doi.org/10.18637/jss.v024.i02.
Krivitsky, Pavel N. 2012. “Exponential-Family Random Graph Models for Valued Networks.” Electronic Journal of Statistics 6: 1100–1128. https://doi.org/10.1214/12-EJS696.