Load the necessary packages and set the seed for the vignette.
loadedPackages <- c("broom", "geojsonio", "ggmap", "ggplot2", "grDevices", "maptools", "raster", "rgdal", "rgeos", "sf", "sp", "sparrpowR", "spatstat.geom", "tibble")
invisible(lapply(loadedPackages, library, character.only = TRUE))
set.seed(1234) # for reproducibility
Import data from the Open Data DC website.
# Washington, D.C. boundary
gis_path1 <- "https://opendata.arcgis.com/datasets/7241f6d500b44288ad983f0942b39663_10.geojson"
dc <- geojsonio::geojson_read(gis_path1, what = "sp")

# American Community Survey 2018 Census Tracts
gis_path2 <- "https://opendata.arcgis.com/datasets/faea4d66e7134e57bf8566197f25b3a8_0.geojson"
census <- geojsonio::geojson_read(gis_path2, what = "sp")
We want to create a realistic boundary (i.e., polygon) of our study area. We are going to spatially clip our DC boundary by the census tracts in an attempt to remove major bodies of water where people do not reside.
clipwin <- maptools::unionSpatialPolygons(census, IDs = rep(1, length(census)))
dcc <- rgeos::gIntersection(dc, clipwin, byid = TRUE)

# Plot
sp::plot(dc, main = "DC Boundary")
sp::plot(census, main = "American Community Survey\n2018")
sp::plot(dcc, main = "Clipped Boundary")
Our developed method, sparrpowR, relies on the spatstat package suite to simulate data, which assumes point locations are on a planar (i.e., flat) surface. Our boundary is made up of geographical coordinates on Earth (i.e., a sphere), so we need to flatten our boundary by spatially projecting it with an appropriate spatial reference system (SRS). For the District of Columbia, we use the World Geodetic System 1984 (WGS84) Universal Transverse Mercator (UTM) Zone 18N, EPSG:32618. We then convert the boundary into an `owin` object required by the spatstat.geom package.
dcp <- sf::st_transform(sf::st_as_sf(dcc), crs = sf::st_crs(32618))
dco <- spatstat.geom::as.owin(sf::st_geometry(dcp))
In this hypothetical example, we want to estimate the local power of detecting a spatial case cluster relative to control locations. Study participants that tested positive for a disease (i.e., cases) are hypothesized to be located in a circular area around the Navy Yard, an Environmental Protection Agency (EPA) Superfund Site (see the success story).
navy <- data.frame(lon = 326414.70444451, lat = 4304571.1539442)
spf <- sp::SpatialPoints(coords = navy, proj4string = sp::CRS(SRS_string = "EPSG:32618"))

# Plot
sp::plot(dcp, main = "Location of Hypothetical\nDisease Cluster")
sp::plot(spf, col = "magenta", add = TRUE, pch = 4, cex = 2)
legend("bottom", xpd = TRUE, y.intersp = -1.5, legend = c("Navy Yard"), col = "magenta", pch = 4, cex = 1, bty = "n")
We will assume that approximately 50 cases (e.g., `n_case = 50`) are clustered around the center of the Navy Yard (e.g., `samp_case = "MVN"`), with more cases near the center and fewer cases about 1 kilometer away (e.g., `s_case = 1000`).

If we were to conduct a study, where would we be sufficiently statistically powered to detect this spatial case cluster? To answer this question, we will randomly sample approximately 950 participants (e.g., `n_control = 950`, or 5% disease prevalence) around the Navy Yard (e.g., `samp_control = "MVN"`), sampling more participants near the center and fewer participants about 2 kilometers away (e.g., `s_control = 2000`). These participants would test negative for the disease (i.e., controls). We can then resample control locations iteratively, as if we conducted the same study multiple times (e.g., `sim_total = 100`). We conclude that we are sufficiently powered to detect spatial clustering in areas where a statistically significant spatial case cluster was located in at least 80% (e.g., `p_thresh = 0.8`) of these theoretical studies. The `spatial_power()` function calculates both a one-tailed, lower-tailed hypothesis (i.e., case clustering only) and a two-tailed hypothesis (i.e., case and control clustering). Use the `cascon` argument in the `spatial_plots()` function to plot either test.
start_time <- Sys.time() # record start time
sim_power <- spatial_power(x_case = navy[[1]], y_case = navy[[2]], # center of cluster
x_control = navy[[1]], y_control = navy[[2]], # center of cluster
n_case = 50, n_control = 950, # sample size of case/control
samp_case = "MVN", samp_control = "MVN", # samplers
s_case = 1000, s_control = 2000, # approximate size of clusters
alpha = 0.05, # critical p-value
sim_total = 100, # number of iterations
win = dco, # study area
resolution = 100, # number gridded knots on x-axis
edge = "diggle", # correct for edge effects
adapt = FALSE, # fixed-bandwidth
h0 = NULL, # automatically select bandwidth for each iteration
verbose = FALSE) # no printout
end_time <- Sys.time() # record end time
time_srr <- end_time - start_time # calculate run time
The process above took about 18.9 minutes to run. Across the 100 iterations, we simulated 40 case locations and an average of 766 (SD: 11.61) control locations, for an average prevalence of 5.22% (SD: 0.08%). The average bandwidth for the statistic was 0.8 kilometers (SD: 0.01). Fewer case and control locations were simulated than specified in the inputs because some sampled points fell outside of our study window (i.e., in Maryland, Virginia, or the water features around the District of Columbia). Users can modify their inputs to achieve the desired number of cases and controls in their output.
We plot the statistical power for a one-tailed, lower-tailed hypothesis (`cascon = FALSE`) at `alpha = 0.05` using the `spatial_plots()` function.
cols <- c("deepskyblue", "springgreen", "red", "navyblue") # colors for plots
chars <- c(4,5) # symbols for point-locations
sizes <- c(0.5,0.5) # size of point-locations
p_thresh <- 0.8 # 80% of iterations with statistically significant results
## Data Visualization of Input and Power
spatial_plots(input = sim_power, # use output of above simulation
p_thresh = p_thresh, # power cut-off
cascon = FALSE, # one-tail, lower tail hypothesis test (i.e., case clustering)
plot_pts = TRUE, # display the points in the second plot
chars = chars, # case, control
sizes = sizes, # case, control
cols = cols) # colors of plot
Now, let's overlay our results on top of a basemap. Here, we use an open-source map from Stamen, which is in unprojected WGS84. We extract the rectangular box (i.e., bounding box) surrounding our polygon boundary of the District of Columbia (WGS84).
dcbb <- sf::st_bbox(sf::st_buffer(sf::st_as_sf(dc), dist = 0.015))
dcbbm <- matrix(dcbb, nrow = 2)
base_map <- ggmap::get_map(location = dcbbm, maptype = "toner", source = "stamen")
Prepare the points from the first simulation and the original boundary for plotting with the ggplot2 suite.
sim_pts <- sim_power$sim # extract points from first iteration
sim_pts <- maptools::as.SpatialPointsDataFrame.ppp(sim_pts) # convert to spatial data frame
raster::crs(sim_pts) <- sp::CRS(SRS_string = "EPSG:32618") # set initial projection
sim_pts_wgs84 <- sp::spTransform(sim_pts, CRSobj = sp::CRS(SRS_string = "EPSG:4326")) # project to basemap
sim_pts_df <- tibble::tibble(data.frame(sim_pts_wgs84)) # convert to tidy data frame

# Original boundary
dc_df <- broom::tidy(dcc) # convert to a tidy dataframe
dcc$polyID <- sapply(slot(dcc, "polygons"), function(x) slot(x, "ID")) # preserve polygon id for merge
dc_df <- merge(dc_df, dcc, by.x = "id", by.y = "polyID") # merge data
Prepare the raster from the simulation for plotting with the ggplot2 suite.
pvalprop <- tibble::tibble(x = sim_power$rx, y = sim_power$ry,
                           z = sim_power$pval_prop_cas) # extract proportion significant
lrr_narm <- na.omit(pvalprop) # remove NAs
sp::coordinates(lrr_narm) <- ~ x + y # coordinates
sp::gridded(lrr_narm) <- TRUE # gridded
pvalprop_raster <- raster::raster(lrr_narm) # convert to raster
rm(pvalprop, lrr_narm) # conserve memory
raster::crs(pvalprop_raster) <- raster::crs(dcp) # set output projection (UTM 18N)
pvalprop_raster <- raster::projectRaster(pvalprop_raster, crs = raster::crs(dc)) # unproject (WGS84)
rtp <- raster::rasterToPolygons(pvalprop_raster) # convert to polygons
rtp@data$id <- 1:nrow(rtp@data) # add id column for join
rtpFort <- broom::tidy(rtp, data = rtp@data) # convert to tibble
rtpFortMer <- merge(rtpFort, rtp@data, by.x = "id", by.y = "id") # join data
rampcols <- grDevices::colorRampPalette(colors = c(cols[1], cols[2]), space = "Lab")(length(raster::values(pvalprop_raster))) # set color ramp
Plot local power as a continuous outcome with point-locations using the ggplot2 suite.
ggmap::ggmap(base_map) + # basemap
  ggplot2::geom_polygon(data = dc_df, # original boundary
                        ggplot2::aes(x = long, y = lat, group = group),
                        fill = "transparent",
                        colour = "black") +
  ggplot2::geom_polygon(data = rtpFortMer, # output raster as polygons
                        ggplot2::aes(x = long, y = lat, group = group, fill = z),
                        size = 0,
                        alpha = 0.5) +
  ggplot2::scale_fill_gradientn(colours = rampcols) + # colors for polygons
  ggplot2::geom_point(data = sim_pts_df, # simulated point-locations
                      ggplot2::aes(x = mx, y = my, color = marks, shape = marks),
                      alpha = 0.8) +
  ggplot2::scale_color_manual(values = cols[3:4]) + # color of point-locations
  ggplot2::scale_shape_manual(values = chars) + # shape of point-locations
  ggplot2::labs(x = "", y = "", fill = "Power", color = "", shape = "") # legend labels
Plot local power as a categorical outcome with point-locations using the ggplot2 suite.
pvalprop_reclass <- raster::cut(pvalprop_raster, c(-Inf, p_thresh, Inf))
rtp <- raster::rasterToPolygons(pvalprop_reclass) # convert to polygons
rtp@data$id <- 1:nrow(rtp@data) # add id column for join
rtpFort <- broom::tidy(rtp, data = rtp@data) # convert to tibble
rtpFortMer <- merge(rtpFort, rtp@data, by.x = "id", by.y = "id") # join data

ggmap::ggmap(base_map) + # basemap
  ggplot2::geom_polygon(data = dc_df, # original boundary
                        ggplot2::aes(x = long, y = lat, group = group),
                        fill = "transparent",
                        colour = "black") +
  ggplot2::geom_polygon(data = rtpFortMer, # output raster as polygons
                        ggplot2::aes(x = long, y = lat, group = group, fill = as.factor(layer)),
                        size = 0,
                        alpha = 0.5) +
  ggplot2::scale_fill_manual(values = cols[c(1,2)],
                             labels = c("insufficient", "sufficient")) + # colors for polygons
  ggplot2::labs(x = "", y = "", fill = "Power") # legend labels
Based on 100 iterations of multivariate normal sampling of approximately 766 control participants focused around the Navy Yard, we are sufficiently powered to detect the disease cluster in the Navy Yard area.
We provide functionality to run `spatial_power()` with parallel processing to speed up computation (`parallel = TRUE`). Parallelization is accomplished with the doFuture package, the `future::multisession` plan, and the `%dorng%` operator for the foreach package to produce reproducible results. (Note: simpler windows, such as unit circles, require substantially less computational resources.)
We also provide functionality to correct for multiple testing. A hypothesis is tested at each gridded knot, and the tests are spatially correlated by nature. With the `p_correct` argument you can choose a multiple testing correction. The most conservative options, `p_correct = "Bonferroni"` and `p_correct = "Sidak"`, apply corrections that assume independent tests, which is likely not appropriate for this setting, but we include them to allow for sensitivity tests. The `p_correct = "FDR"` option applies a False Discovery Rate procedure for the critical p-value, which is not as conservative as the other two options.
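To see how these corrections differ, here is a minimal base R sketch. This is not sparrpowR's internal code: the p-values are hypothetical, and the Benjamini-Hochberg step shown is one common FDR procedure, standing in for whatever variant `spatial_power()` implements.

```r
alpha <- 0.05
pvals <- c(0.001, 0.008, 0.020, 0.041, 0.300) # hypothetical p-values at m = 5 knots
m <- length(pvals)

# Bonferroni and Sidak shrink the critical p-value assuming independent tests
alpha_bonferroni <- alpha / m           # 0.01
alpha_sidak <- 1 - (1 - alpha)^(1 / m)  # ~0.0102

# Benjamini-Hochberg FDR: largest sorted p-value with p_(k) <= (k / m) * alpha
p_sorted <- sort(pvals)
passing <- which(p_sorted <= (seq_len(m) / m) * alpha)
alpha_fdr <- if (length(passing) > 0) p_sorted[max(passing)] else alpha / m

alpha_fdr # 0.020, a less conservative cut-off than Bonferroni's 0.01
```

With thousands of spatially correlated knots, the Bonferroni and Sidak thresholds become extremely small, which is why the FDR option is often the more practical choice in this setting.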
Here, we use the same example as above, conducted in parallel with a False Discovery Rate procedure.
set.seed(1234) # reset RNG for reproducibility with previous run
start_time <- Sys.time() # record start time
sim_power <- spatial_power(x_case = navy[[1]], y_case = navy[[2]], # center of cluster
x_control = navy[[1]], y_control = navy[[2]], # center of cluster
n_case = 50, n_control = 950, # sample size of case/control
samp_case = "MVN", samp_control = "MVN", # samplers
s_case = 1000, s_control = 2000, # approximate size of clusters
alpha = 0.05, # critical p-value
sim_total = 100, # number of iterations
win = dco, # study area
resolution = 100, # number gridded knots on x-axis
edge = "diggle", # correct for edge effects
adapt = FALSE, # fixed-bandwidth
h0 = NULL, # automatically select bandwidth for each iteration
verbose = FALSE, # no printout
parallel = TRUE, # Run in parallel
n_core = 5, # Use 5 cores (depends on your system, default = 2)
p_correct = "FDR") # use a correction for multiple testing (False Discovery Rate)
end_time <- Sys.time() # record end time
time_srr <- end_time - start_time # calculate run time

cols <- c("deepskyblue", "springgreen", "red", "navyblue") # colors for plots
chars <- c(4,5) # symbols for point-locations
sizes <- c(0.5,0.5) # size of point-locations
p_thresh <- 0.8 # 80% of iterations with statistically significant results
## Data Visualization of Input and Power
spatial_plots(input = sim_power, # use output of above simulation
p_thresh = p_thresh, # power cut-off
cascon = FALSE, # one-tail, lower tail hypothesis test (i.e., case clustering)
plot_pts = FALSE, # display the points in the second plot
chars = chars, # case, control
sizes = sizes, # case, control
cols = cols) # colors of plot
The process above took about 6.8 minutes to run, which is shorter than the first example. The zone with sufficient power to detect a case cluster is slightly smaller than in the first example, too, due to the multiple testing correction.