multiScaleR User Guide

Bill Peterman

1 Background

1.1 The scale-of-effect problem

Many abiotic and ecological processes are shaped by variables that operate at multiple spatial scales. A central challenge in landscape ecology is determining the scale of effect: the spatial extent over which a landscape variable influences a biological response measured at a sampling location.

Consider a practical example. You are modeling songbird abundance across a forested region, and you suspect that the proportion of forest cover in the surrounding landscape matters. But how large should that surrounding area be? Should you measure forest cover within 100 m of each survey point? 500 m? 2 km? The answer is not known in advance. It depends on how far an individual moves across the landscape, how far away habitat features can affect local conditions, and on the strength of the biological relationship itself.

Traditional approaches test a fixed set of candidate buffer radii, fit a separate model for each, and compare them. This approach has two key limitations. First, the investigator must guess plausible scales in advance, and the true scale might fall between the tested values. Second, a fixed-width buffer treats every landscape cell within the radius equally, as if a forest patch 50 m away matters just as much as one at 499 m.

multiScaleR addresses both limitations by representing landscape influence as a distance-weighted kernel: a weighting function that assigns higher weight to nearby landscape cells and smoothly decreasing weight to cells farther away. The width of that kernel, σ (sigma, in map units), controls how rapidly influence decays with distance. Crucially, σ is estimated from the data as a parameter of the statistical model, alongside the familiar regression coefficients. This means you do not need to pre-specify the scale; the data identify it for you.
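As a minimal illustration of the idea (a sketch only, with toy values; multiScaleR's internal computation is vectorized over sparse distance matrices and its normalization may differ), a distance-weighted covariate value is just a weighted average of cell values, with weights declining with distance:

```r
## Sketch: distance-weighted forest cover at one sample point.
## Toy values -- not from the package data.
cell_forest <- c(1, 1, 0, 1, 0)           # forest (1) / non-forest (0) cells
cell_dist   <- c(50, 120, 300, 450, 800)  # distance of each cell from the point (m)

sigma <- 250                              # kernel width, in map units
w <- exp(-cell_dist^2 / (2 * sigma^2))    # Gaussian weight for each cell

## Weighted average: nearby cells dominate, distant cells contribute little
weighted_cover <- sum(w * cell_forest) / sum(w)
weighted_cover
```

Estimating σ amounts to asking which kernel width makes these weighted values best predict the response.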

1.2 What multiScaleR does

With advanced coding skills, Bayesian analytical frameworks can be used to estimate scales of effect (e.g., Stuber & Gruber 2020; Amirkhiz et al. 2023). More recently, user-friendly R packages have been developed: Siland (Carpentier & Martin 2021) and Scalescape (Lowe et al. 2022) both estimate distance-weighted landscape effects. multiScaleR extends this family of tools with several practical advances, beginning with broad model-class support.

multiScaleR is fully supported for models of class glm, glm.nb, nlme (including gls), lme4, and unmarked. It should work with any model class that has an update() method and is supported by the R package insight. Let me know if you encounter issues or would like to see support for other model classes.


2 A Conceptual Road Map

Before writing any code, it helps to see how the pieces fit together. The core workflow has four stages:

[Optional: simulate data]       [Your field data]
  sim_rast()                  Spatial Point Data
  sim_dat()                   (GPS coordinates)
  sim_dat_unmarked()               +
        |                    Raster Layers
        |                  (habitat covariates)
        |                          |
        +--------> kernel_prep() <-+
                        |
                        |   Computes distance-weighted raster
                        |   values at each sample point;
                        |   output feeds directly into model
                        v
             [Fit initial regression model]
              glm() / lme4::lmer() / gls()
              pscl::zeroinfl() / unmarked / ...
                        |
                        v
              multiScale_optim()
              (simultaneously estimates sigma for
               each raster layer and regression
               coefficients via maximum likelihood)
                        |
         +--------------+--------------+-----------+
         |              |              |            |
      summary()      plot()    plot_marginal_   aic_tab()
      (sigma CIs,    (kernel   effects()        bic_tab()
       regression    weight    (covariate
       output)       decay)    effect plots)
         |
         v
   kernel_scale.raster()
   (apply optimized kernel to full raster extent;
    optionally scale/center and clamp values)
         |
         v
   terra::predict()
   (spatially explicit predictions across the landscape)

The key function is multiScale_optim(). It takes your fitted model and the kernel_prep output, then optimizes the kernel width (σ) for each spatial covariate. Everything else flows from that fitted object: model summaries, plots, model selection, and spatial projection.

Key concepts to keep in mind: σ is estimated separately for each raster layer; kernel_prep() must use a max_D larger than any plausible scale of effect; and every downstream summary, plot, and prediction is derived from the multiScale_optim() fitted object.

3 Distance-weighted Effects

3.1 Choosing a kernel

There are many different kernel transformations that could be used to quantify the contribution of environmental variables as a function of distance. The kernel function determines the shape of that decay: how quickly nearby cells lose influence relative to the focal point.

Four kernel functions are implemented in multiScaleR. In each case, σ (sigma) is estimated during optimization and controls the spatial scale (width) of the kernel. All plots below show the weight assigned to a landscape cell as a function of its distance from a sample point, with a vertical line marking the distance that captures 90% of the total weight.

The Gaussian kernel is the default and performs well across a wide range of ecological scenarios. When in doubt, start with it.

Note: Figures in this document may render at lower quality than running code locally.

  1. Gaussian (gaus): Decay in space governed by a single parameter (sigma); weight declines gradually near the focal point before steepening

  2. Negative Exponential (exp): Decay in space governed by a single parameter (sigma); weight declines most steeply at the focal point and tails off gradually

  3. Exponential Power (expow): Decay in space governed by a scale parameter (sigma) and a shape parameter (beta); the shape parameter lets the decay flex between exponential-like and Gaussian-like profiles

  4. Fixed width buffer (fixed): Effect does not decay with distance; every cell within the buffer is weighted equally and cells beyond it receive zero weight
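For intuition, the standard functional forms these kernels typically take can be written as simple R functions and plotted (a sketch of the conventional forms; the exact parameterizations and normalizations used internally by multiScaleR may differ):

```r
## Unnormalized kernel weight at distance d (sigma interpretation varies by kernel)
gaus  <- function(d, sigma)        exp(-d^2 / (2 * sigma^2))
nexp  <- function(d, sigma)        exp(-d / sigma)
expow <- function(d, sigma, beta)  exp(-(d / sigma)^beta)
fixed <- function(d, sigma)        as.numeric(d <= sigma)

## Compare decay profiles for sigma = 250 m
d <- seq(0, 1500, by = 10)
matplot(d, cbind(gaus(d, 250), nexp(d, 250), expow(d, 250, 2), fixed(d, 250)),
        type = "l", lty = 1, xlab = "Distance (m)", ylab = "Kernel weight")
legend("topright", c("gaus", "exp", "expow (beta = 2)", "fixed"),
       col = 1:4, lty = 1)
```

Note that the exponential power form reduces to an exponential-like decay when beta = 1 and a Gaussian-like decay when beta = 2, which is why it requires the extra shape parameter.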

4 Preparing Data

Prior to using multiScaleR to identify distance-weighted scales of effect, data must be appropriately formatted. These steps will be demonstrated with simulated data provided with the package.

library(multiScaleR)

## Read in data
data("landscape_counts")
dat <- landscape_counts

data("surv_pts")
pts <- terra::vect(surv_pts)

land_rast <- terra::rast(system.file("extdata", 
                                     "landscape.tif", 
                                     package = 'multiScaleR'))

The landscape_counts data frame contains simulated counts from 100 spatial point locations, as well as a scaled and centered site-level covariate (‘site’). The landscape.tif file is read in as a three-layer SpatRaster (land1 = binary habitat; land2 = continuous habitat/environment; land3 = continuous environment). The counts were simulated from known parameter values, which we return to below when assessing how well the optimization recovers them.

4.1 Explore Data

summary(dat)
##      counts        site        
##  Min.   : 0   Min.   :-1.7129  
##  1st Qu.: 0   1st Qu.:-0.6480  
##  Median : 1   Median :-0.1697  
##  Mean   : 2   Mean   : 0.0000  
##  3rd Qu.: 3   3rd Qu.: 0.5189  
##  Max.   :10   Max.   : 2.9984
pts
##  class       : SpatVector 
##  geometry    : points 
##  dimensions  : 100, 1  (geometries, attributes)
##  extent      : 755.3902, 3552.415, 755.3902, 3552.415  (xmin, xmax, ymin, ymax)
##  coord. ref. :  
##  names       :   obs
##  type        : <int>
##  values      :     0
##                    3
##                    7
land_rast
## class       : SpatRaster 
## dimensions  : 200, 200, 3  (nrow, ncol, nlyr)
## resolution  : 20, 20  (x, y)
## extent      : 0, 4000, 0, 4000  (xmin, xmax, ymin, ymax)
## coord. ref. :  
## source      : landscape.tif 
## names       : land1, land2, land3 
## min values  :     0,     0,     0 
## max values  :     1,     1,     1
plot(land_rast)

## Plot with points
plot(land_rast$land1) 
plot(pts, add = TRUE, pch = 19)

4.2 kernel_prep

kernel_prep() is the first step of every multiScaleR analysis. It does two things:

  1. For each sample point, it extracts raster values at every cell within max_D (the maximum search radius) and computes their distances. These are stored as sparse matrices for efficiency.
  2. It computes an initial set of distance-weighted covariate values at each sample point using a starting σ (by default, max_D / 3). These initial values are used to fit the starting regression model.

Choosing max_D: Set max_D to a value you are confident exceeds the true scale of effect. If you are uncertain, err on the side of being too large; the optimization will find the right σ within that range. The tradeoff is computational: larger max_D means more cells to process, so keep it reasonable. If multiScale_optim() warns that a σ estimate is near max_D, increase max_D and re-run kernel_prep().

kernel_inputs <- kernel_prep(pts = pts,
                             raster_stack = land_rast,
                             max_D = 1700,
                             kernel = 'gaussian',
                             verbose = FALSE)
kernel_inputs
## 
## There are 100 observations at 3 spatial covariate(s): 
## land1 land2 land3
## 
## The specified kernel is:
## gaussian
## 
## Number of elements: 
## 13111
## Minimum Distance:
## 20
## Maximum Distance:
## 1700
## Unit Conversion:
## 1700

The kernel_inputs object holds the distance data and the initial kernel-weighted covariate values (in kernel_inputs$kernel_dat). These kernel-weighted values are scaled and centered (mean 0, SD 1 across sample points) so that regression coefficients are comparable across covariates.

Next, combine the kernel-weighted covariate values with the response data to build the data frame for model fitting:

df <- data.frame(dat,
                 kernel_inputs$kernel_dat)
str(df)
## 'data.frame':    100 obs. of  5 variables:
##  $ counts: int  0 2 7 3 2 0 2 1 1 1 ...
##  $ site  : num  0.992 1.992 0.3 -0.623 1.891 ...
##  $ land1 : num  0.959 0.565 -0.132 -0.73 1.238 ...
##  $ land2 : num  -0.15 0.726 0.513 0.623 0.958 ...
##  $ land3 : num  0.6937 0.9235 0.455 -0.0789 1.0492 ...

Fit the initial (starting) model. This model uses the initial kernel-weighted values and establishes the model structure (formula, family, random effects) that will be inherited by multiScale_optim(). The coefficients from this model are only starting values; optimizing with multiScale_optim() will update both the regression coefficients and σ simultaneously.

mod0 <- glm(counts ~ site + land1 + land2 + land3,
            family = poisson(),
            data = df)
summary(mod0)
## 
## Call:
## glm(formula = counts ~ site + land1 + land2 + land3, family = poisson(), 
##     data = df)
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  0.52794    0.08307   6.355 2.08e-10 ***
## site        -0.02191    0.11116  -0.197    0.844    
## land1       -0.38930    0.08881  -4.383 1.17e-05 ***
## land2        0.77890    0.14288   5.451 5.00e-08 ***
## land3        0.09565    0.13016   0.735    0.462    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 224.24  on 99  degrees of freedom
## Residual deviance: 165.40  on 95  degrees of freedom
## AIC: 372.08
## 
## Number of Fisher Scoring iterations: 5

5 Analysis

5.1 multiScale_optim

We are now ready to optimize the scales of effect with multiScale_optim(). This function searches over σ values for each raster layer (and simultaneously updates regression coefficients) to find the combination that maximizes the likelihood of the fitted model.

Parallelization: Optimization is substantially faster when you specify n_cores (e.g., n_cores = 4); the saved output shown below was generated with n_cores = 5. Run on a single core, the optimization below takes approximately 50 seconds.

opt1 <- multiScale_optim(fitted_mod = mod0,
                         kernel_inputs = kernel_inputs)

We get two helpful warning messages:

  1. The estimated σ for one raster layer is near max_D, meaning the optimization is pushing against the boundary of the search space. This suggests that either the true scale of effect is large, or that the variable has a weak or absent relationship with the response and optimization is poorly constrained. The warning suggests increasing max_D in kernel_prep().
  2. The standard error for one or more σ estimates is large relative to the point estimate: a sign of imprecision, often caused by weak signal or too small a sample size.

Let’s look at our results.

summary(opt1)
## 
## Call:
## multiScale_optim(fitted_mod = mod0, kernel_inputs = kernel_inputs, 
##     n_cores = 5)
## 
## 
## Kernel used:
## gaussian
## 
## ***** Optimized Scale of Effect -- Sigma *****
## 
##           Mean        SE     2.5%     97.5%
## land1 248.1714  34.19032 180.2951  316.0478
## land2 538.4724  50.76299 437.6951  639.2496
## land3 538.9632 309.97857  20.0000 1154.3485
## 
## 
##   ==================================== 
## 
## ***** Optimized Scale of Effect -- Distance *****
## 90% Kernel Weight
## 
##         Mean   2.5%   97.5%
## land1 408.21 296.56  519.85
## land2 885.71 719.94 1051.47
## land3 886.52  32.90 1898.73
## 
## 
##   ==================================== 
## 
##  *****     Fitted Model Summary     *****
## 
## 
## Call:
## glm(formula = counts ~ site + land1 + land2 + land3, family = poisson(), 
##     data = dat)
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  0.37648    0.09260   4.066 4.79e-05 ***
## site         0.06417    0.09891   0.649   0.5165    
## land1       -0.50803    0.08013  -6.340 2.30e-10 ***
## land2        0.75707    0.09939   7.617 2.60e-14 ***
## land3        0.18076    0.09994   1.809   0.0705 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 224.24  on 99  degrees of freedom
## Residual deviance: 106.99  on 95  degrees of freedom
## AIC: 313.66
## 
## Number of Fisher Scoring iterations: 5
## 
## 
##  WARNING!!!
##  The estimated scale of effect extends beyond the maximum distance specified. 
##  Consider increasing max_D in `kernel_prep` to ensure accurate estimation of scale.
## 
## 
##  WARNING!!!
##  The standard error of one or more `sigma` estimates is >= 50% of the estimated mean value.
##  Carefully assess whether or not this variable is meaningful in your analysis and interpret with caution.

How to read the summary() output:

The top of the output shows the estimated σ (scale parameter) for each raster layer, with standard errors and 95% confidence intervals. Below each σ estimate is the effective distance: the distance that captures a specified proportion (default: 90%) of the total kernel weight. This converts σ into an ecologically interpretable distance in map units (meters in this case). A Gaussian kernel with σ = 250 has 90% of its cumulative weight within approximately 412 m of the focal point.
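Consistent with the summary tables above, the 90% effective distance for the Gaussian kernel equals σ multiplied by the one-sided 95% standard normal quantile (≈ 1.645); the ratios of the reported distances to the reported σ values match this. A quick check:

```r
## 90% effective distance for a Gaussian kernel with sigma = 250
sigma <- 250
d90 <- sigma * qnorm(0.95)  # one-sided 95% quantile captures 90% of weight
round(d90, 1)
## [1] 411.2
```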

The bottom of the output is the standard model summary from your regression (GLM in this case), showing regression coefficients, standard errors, and p-values for each covariate, exactly as you would see from summary(glm(...)).

Confidence intervals on σ: delta method vs. profile likelihood

By default, confidence intervals on σ are computed using the delta method: a first-order Taylor approximation that treats the log-likelihood surface as locally quadratic near the optimum and derives the CI from the curvature (Hessian) of that surface. Delta-method CIs are fast to compute and work well when the likelihood surface is smooth and roughly symmetric around the estimate. They are reported alongside the standard error in the default summary() output.

However, the log-likelihood surface for σ is often asymmetric, especially when the scale parameter is near max_D, when sample sizes are small, or when the effect size is weak. In these situations the delta-method CI can be misleading: the lower and upper bounds may not accurately reflect the true uncertainty in each direction. Profile-likelihood CIs address this by tracing the likelihood surface directly: for each boundary value of σ, the optimizer finds the configuration of all other parameters that maximizes the likelihood, then identifies where that profile drops by the critical chi-squared quantile (1.92 log-likelihood units for a 95% CI). Profile CIs are therefore more reliable when the surface is irregular, at the cost of additional computation.
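The 1.92 cutoff is half the 95% critical value of a chi-squared distribution with one degree of freedom, which is where the likelihood-ratio construction of the profile CI comes from:

```r
## Log-likelihood drop defining a 95% profile CI (one parameter profiled)
qchisq(0.95, df = 1) / 2
## [1] 1.920729
```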

To request profile-likelihood CIs, pass profile = TRUE to summary():

summary(opt1, profile = TRUE)

Profile CIs are recommended whenever a σ estimate is near max_D, the sample size is small, the effect is weak, or either of the warnings described above appears.

Wide CIs on σ in either method indicate that the data do not strongly constrain the scale of effect for that variable. This may mean the variable has a weak relationship with the response, the sample size is insufficient, or the variable should be excluded from the model.

For our full model: site has no apparent effect, land1 has a negative effect, land2 has a positive effect, and land3 shows a weak positive effect. Crucially, the σ for land3 is large and imprecisely estimated (wide CI). This is a red flag: when a variable has no true relationship with the response, the optimization often fails to find a well-defined σ and will wander toward large values.

With this simulated data, we know that land3 has no effect on counts. Let’s fit a model without it.

## New model
mod0_2 <- glm(counts ~ site + land1 + land2,
              family = poisson(),
              data = df)
## Optimize
opt2 <- multiScale_optim(fitted_mod = mod0_2,
                         kernel_inputs = kernel_inputs)

No warnings! Let’s look at our results.

summary(opt2)
## 
## Call:
## multiScale_optim(fitted_mod = mod0_2, kernel_inputs = kernel_inputs, 
##     n_cores = 5)
## 
## 
## Kernel used:
## gaussian
## 
## ***** Optimized Scale of Effect -- Sigma *****
## 
##           Mean       SE     2.5%    97.5%
## land1 238.4505 33.14563 172.6569 304.2441
## land2 499.5337 47.47022 405.3061 593.7614
## 
## 
##   ==================================== 
## 
## ***** Optimized Scale of Effect -- Distance *****
## 90% Kernel Weight
## 
##         Mean   2.5%  97.5%
## land1 392.22 284.00 500.44
## land2 821.66 666.67 976.65
## 
## 
##   ==================================== 
## 
##  *****     Fitted Model Summary     *****
## 
## 
## Call:
## glm(formula = counts ~ site + land1 + land2, family = poisson(), 
##     data = dat)
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  0.39293    0.09098   4.319 1.57e-05 ***
## site         0.17476    0.08246   2.119   0.0341 *  
## land1       -0.48620    0.07922  -6.137 8.40e-10 ***
## land2        0.62567    0.07245   8.636  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 224.24  on 99  degrees of freedom
## Residual deviance: 109.57  on 96  degrees of freedom
## AIC: 314.24
## 
## Number of Fisher Scoring iterations: 5

The default summary() uses delta-method confidence intervals. Because this model is well-behaved (no boundary warnings, reasonable sample size), the delta-method CIs should be reliable here. For a more conservative check, you can compare them to profile-likelihood CIs:

summary(opt2, profile = TRUE)

The true data-generating values were:

Parameter   True value   Recovered?
land1 σ     250 m        Check CI
land2 σ     500 m        Check CI
land1 β     −0.5         Check CI
land2 β     0.7          Check CI

Overall, we do a good job recovering both the regression coefficients (β) and the scales of effect (σ) within reasonable uncertainty bounds. This is reassuring but also a good reminder: with real data, you will not have ground truth to check against. Careful model selection and biological reasoning are your guides.

Visualizing the kernel decay. The plot() method shows how the influence of each landscape variable falls off with distance. The vertical line marks the distance enclosing 90% of the cumulative kernel weight (the effective distance).

## Kernel function
plot(opt2)

## Kernel function; 99% contribution
plot(opt2, prob = 0.99)

Visualizing marginal covariate effects. plot_marginal_effects() shows the estimated effect of each covariate on the response, holding other covariates at their mean values. These are the standard partial effect plots from the regression model, applied to the optimally-scaled covariate values.

plot_marginal_effects(opt2)

Profiling the likelihood surface. profile_sigma() provides a diagnostic view of how the model fit changes as σ varies across its search range for each covariate, while all other σ values are held at their optimized values. Rather than reporting a single point estimate with an interval, it shows the full shape of the AICc (or log-likelihood) surface across the parameter space.

This is useful in several situations:

Intervals are log-spaced so that evaluation is denser at smaller σ values, where the surface tends to change more rapidly, and sparser at larger values where it is typically flatter.

prof2 <- profile_sigma(opt2, n_pts = 15, verbose = FALSE)
plot(prof2)

AICc profile across sigma for each spatial covariate. The red dashed line marks the optimized sigma.

For opt2, the profile confirms a clear, well-defined minimum in AICc near the optimized σ for both land1 and land2, consistent with the absence of boundary warnings and the narrow confidence intervals reported by summary(). Compare this to the full model (opt1), which included the uninformative land3 covariate: that profile would show a shallow or absent trough, reflecting the poor identifiability of σ when a variable has no true relationship with the response.

We can also apply the optimized kernel to the full raster surfaces. This is a key step if you want to generate spatially explicit predictions across the landscape.

kernel_scale.raster() smooths each raster layer using the estimated σ from the optimization. When scale_center = TRUE, the smoothed values are standardized using the mean and SD from the sample points, matching the scaling used during model fitting. This ensures that terra::predict() receives covariate values in the same space as the training data.

rast_opt <- kernel_scale.raster(raster_stack = land_rast,
                                multiScaleR = opt2)
## 
## Smoothing spatRaster 1 of 2: land1 at sigma = 238
## 
## Smoothing spatRaster 2 of 2: land2 at sigma = 499
plot(rast_opt)

Note: Because site is a non-spatial covariate (a site-level variable, not a raster), a constant dummy raster is created for it. This allows the spatRaster object to be passed directly to terra::predict(), which requires all model covariates to be present. See the Spatial Projection and Clamping vignette for a detailed treatment of projection, including how to handle cases where projected covariate values fall outside the range of the training data.

5.2 Kernels

Data were simulated from a Gaussian kernel, but we can optimize using other kernels and compare model performance. Starting values had to be specified when using the exponential power kernel due to convergence issues.

## Negative Exponential
## Note: these kernel_prep objects are pre-loaded in the setup chunk.
## eval=FALSE keeps the vignette from re-running these computationally
## intensive calls during knitting.
exp_inputs <- kernel_prep(pts = pts,
                          raster_stack = land_rast,
                          max_D = 1700,
                          kernel = 'exp',
                          verbose = FALSE)

## Exponential Power
expow_inputs <- kernel_prep(pts = pts,
                            raster_stack = land_rast,
                            max_D = 1700,
                            kernel = 'expow',
                            verbose = FALSE)

## Fixed width buffer
fixed_inputs <- kernel_prep(pts = pts,
                            raster_stack = land_rast,
                            max_D = 1700,
                            kernel = 'fixed',
                            verbose = FALSE)

Optimize model with alternative kernels

opt_exp <- multiScale_optim(fitted_mod = mod0_2,
                            kernel_inputs = exp_inputs)

## Starting values needed
opt_expow <- multiScale_optim(fitted_mod = mod0_2,
                              kernel_inputs = expow_inputs,
                              par = c(500/expow_inputs$unit_conv,
                                      500/expow_inputs$unit_conv,
                                      5, 10))

opt_fixed <- multiScale_optim(fitted_mod = mod0_2,
                              kernel_inputs = fixed_inputs)

If you explore each of the outputs, you’ll see that we generally arrive at similar results. There are some warnings, but we won’t worry about those right now. Next, we’ll compare the relative support of the models fit with these different kernels.

6 Model Selection

6.1 Why standard AIC is not enough

When you optimize σ, you are estimating an additional parameter beyond the regression coefficients. Standard AIC from the base model does not account for this extra complexity. multiScaleR uses AICc (corrected for small samples) and BIC tables that account for the σ parameters in the penalty term. This is important: comparing models with aic_tab() or bic_tab() gives a fair comparison across models with different numbers of spatial covariates.
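To see where the table values come from, AICc adds a small-sample correction to AIC. Using the Gaussian model's values as reported in the tables below (log-likelihood −153.12, K = 5, n = 100 sample points):

```r
## AICc = AIC + 2K(K + 1) / (n - K - 1)
LL <- -153.12  # log-likelihood of the optimized Gaussian model
K  <- 5        # parameter count as reported in the AICc table
n  <- 100      # number of sample points

aic  <- 2 * K - 2 * LL
aicc <- aic + (2 * K * (K + 1)) / (n - K - 1)
round(c(AIC = aic, AICc = aicc), 2)
##    AIC   AICc 
## 316.24 316.88
```

The correction vanishes as n grows but matters at sample sizes typical of field studies.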

With multiScaleR it is possible to create AIC(c) or BIC tables from lists of fitted models.

mod_list <- list(opt2,
                 opt_exp,
                 opt_expow,
                 opt_fixed)

## AIC table
aic_tab(mod_list)
## 
## Model selection based on AICc:
## 
##                                 K   AICc Delta_AICc AICcWt Cum.Wt      LL
## [expow]~site + land1 + land2    6 316.27       0.00   0.47   0.47 -151.68
## [gaussian]~site + land1 + land2 5 316.88       0.61   0.34   0.81 -153.12
## [fixed]~site + land1 + land2    5 318.10       1.83   0.19   1.00 -153.73
## [exp]~site + land1 + land2      5 327.39      11.12   0.00   1.00 -158.38
## BIC table
bic_tab(mod_list)
## 
## Model selection based on BIC:
## 
##                                 K    BIC Delta_BIC BICWt Cum.Wt      LL
## [gaussian]~site + land1 + land2 5 329.27      0.00  0.51   0.51 -153.12
## [fixed]~site + land1 + land2    5 330.48      1.22  0.28   0.78 -153.73
## [expow]~site + land1 + land2    6 331.00      1.73  0.21   1.00 -151.68
## [exp]~site + land1 + land2      5 339.78     10.51  0.00   1.00 -158.38

The questionably fitting exponential power model is best supported by AICc, while the Gaussian model is best supported by BIC. Given the added complexity (an extra parameter) of the exponential power kernel, its uncertain optimization, and its general equivalence to the other models, we should probably lean toward the simpler Gaussian kernel model. This example highlights the challenges of parsing different kernels when estimating scales of effect. In simulations, different kernels tend to produce a similar final model and interpretation of variable effects. This is, in part, why the Gaussian kernel is the default, and it will likely meet most researchers’ needs.

Of greater interest is using model selection to identify the best supported model in terms of parameterization (not kernel used).

## Landscape only effect
mod0_3 <- glm(counts ~ land1 + land2,
              family = poisson(),
              data = df)

## Landscape 1 only effect
mod0_4 <- glm(counts ~ land1,
              family = poisson(),
              data = df)

## Landscape 2 only effect
mod0_5 <- glm(counts ~ land2,
              family = poisson(),
              data = df)

## Landscape 3 only effect
mod0_6 <- glm(counts ~ land3,
              family = poisson(),
              data = df)

## Site only effect
## No multiScaleR optimization
mod0_7 <- glm(counts ~ site,
              family = poisson(),
              data = df)

Optimize scale for each alternative model

opt3 <- multiScale_optim(fitted_mod = mod0_3,
                         kernel_inputs = kernel_inputs)
opt4 <- multiScale_optim(fitted_mod = mod0_4,
                         kernel_inputs = kernel_inputs)
opt5 <- multiScale_optim(fitted_mod = mod0_5,
                         kernel_inputs = kernel_inputs)
opt6 <- multiScale_optim(fitted_mod = mod0_6,
                         kernel_inputs = kernel_inputs)

Put the models into a list and assess relative support.

mod_list2 <- list(opt1, opt2, opt3, opt4, opt5, opt6, mod0_7)

aic_tab(mod_list2)
## 
## Model selection based on AICc:
## 
##                                         K   AICc Delta_AICc AICcWt Cum.Wt
## [gaussian]~site + land1 + land2 + land3 6 316.56       0.00   0.46   0.46
## [gaussian]~site + land1 + land2         5 316.88       0.31   0.40   0.86
## [gaussian]~land1 + land2                4 318.96       2.40   0.14   1.00
## [gaussian]~land2                        3 349.41      32.85   0.00   1.00
## [gaussian]~land1                        3 393.73      77.17   0.00   1.00
## [gaussian]~land3                        3 418.64     102.08   0.00   1.00
## [NA]~site                               2 422.96     106.40   0.00   1.00
##                                              LL
## [gaussian]~site + land1 + land2 + land3 -151.83
## [gaussian]~site + land1 + land2         -153.12
## [gaussian]~land1 + land2                -155.27
## [gaussian]~land2                        -171.58
## [gaussian]~land1                        -193.74
## [gaussian]~land3                        -206.20
## [NA]~site                               -209.42
bic_tab(mod_list2)
## 
## Model selection based on BIC:
## 
##                                         K    BIC Delta_BIC BICWt Cum.Wt      LL
## [gaussian]~land1 + land2                4 328.96      0.00  0.46   0.46 -155.27
## [gaussian]~site + land1 + land2         5 329.27      0.31  0.40   0.86 -153.12
## [gaussian]~site + land1 + land2 + land3 6 331.29      2.33  0.14   1.00 -151.83
## [gaussian]~land2                        3 356.98     28.02  0.00   1.00 -171.58
## [gaussian]~land1                        3 401.30     72.34  0.00   1.00 -193.74
## [gaussian]~land3                        3 426.21     97.25  0.00   1.00 -206.20
## [NA]~site                               2 428.05     99.09  0.00   1.00 -209.42

Model selection tables clearly show that the inclusion of scaled landscape variables is important; however, information criteria may not have the resolution or power to differentiate among competing models. Analyses will likely require a critical, holistic assessment of information criteria, parameter effect sizes, and the precision of scale-of-effect estimates (i.e., sigma).

The challenge of identifying ‘significant’ scale relationships was also noted by Lowe et al. (2022), and their R package scalescape includes a bootstrap function to determine the significance of parameters within the regression model. Such procedures are not currently implemented in multiScaleR.

7 Optimization with unmarked

Among the model classes that multiScaleR can optimize are those fit with unmarked. We will use the simulation functions from the package to create data for this analysis. First, we simulate and visualize raster surfaces.

rs <- sim_rast(user_seed = 999, dim = 250)
plot(rs)

7.1 Poisson Count Model

We’ll use the bin1 and cont2 surfaces for our data simulation. First, we’ll simulate count data with a Poisson distribution. For the unmarked data simulation, we need to specify the count-model intercept (alpha), the regression coefficients describing the effect of each raster layer (beta), the scale of effect (sigma), the number of survey points on the landscape (n_points), the number of replicate surveys conducted (n_surv), the detection probability (det; here, the probability of detecting an individual), and the maximum distance to consider in the analysis (max_D).

rs <- terra::subset(rs, c(1,4))
s_dat <- sim_dat_unmarked(alpha = 1,
                          beta = c(0.75,-0.75),
                          kernel = 'gaussian',
                          sigma = c(75, 150),
                          n_points = 75,
                          n_surv = 5,
                          det = 0.5,
                          type = 'count',
                          raster_stack = rs,
                          max_D = 550,
                          user_seed = 555)
plot(s_dat$df$y ~ s_dat$df$bin1)

plot(s_dat$df$y ~ s_dat$df$cont2)

We can see that we have simulated a positive effect of bin1 on counts and a negative effect of cont2 on counts. We can now prepare the data for unmarked and optimize the scale of effect with multiScaleR.

library(unmarked)
kernel_inputs <- kernel_prep(pts = s_dat$pts,
                             raster_stack = rs,
                             max_D = 550,
                             kernel = 'gaus',
                             verbose = FALSE)

umf <- unmarkedFramePCount(y = s_dat$y,
                           siteCovs = kernel_inputs$kernel_dat)

## Base unmarked model
mod0_umf.p <- unmarked::pcount(~1 ~bin1 + cont2,
                               data = umf, 
                               K = 100)
opt_umf.p <- multiScale_optim(fitted_mod = mod0_umf.p,
                              kernel_inputs = kernel_inputs,
                              n_cores = 8)

Compare the optimized results with the simulated values. Note that the detection probability reported by unmarked is on the logit scale, so it must be back-transformed. Overall, both the scale parameters and the Poisson count model parameters were accurately recovered.

summary(opt_umf.p)
## 
## Call:
## multiScale_optim(fitted_mod = mod0_umf.p, kernel_inputs = kernel_inputs, 
##     n_cores = 8)
## 
## 
## Kernel used:
## gaussian
## 
## ***** Optimized Scale of Effect -- Sigma *****
## 
##            Mean        SE      2.5%     97.5%
## bin1   51.94359  9.182621  33.65088  70.23631
## cont2 185.79630 37.261669 111.56724 260.02535
## 
## 
##   ==================================== 
## 
## ***** Optimized Scale of Effect -- Distance *****
## 90% Kernel Weight
## 
##         Mean   2.5%  97.5%
## bin1   85.44  55.35 115.53
## cont2 305.61 183.51 427.70
## 
## 
##   ==================================== 
## 
##  *****     Fitted Model Summary     *****
## 
## 
## Call:
## unmarked::pcount(formula = ~1 ~ bin1 + cont2, data = inputs$data, 
##     K = 100, mixture = "P")
## 
## Abundance (log-scale):
##             Estimate     SE     z  P(>|z|)
## (Intercept)    1.003 0.0974 10.30 7.38e-25
## bin1           0.707 0.0684 10.33 5.01e-25
## cont2         -0.680 0.0695 -9.79 1.29e-22
## 
## Detection (logit-scale):
##  Estimate    SE   z P(>|z|)
##      0.17 0.142 1.2   0.231
## 
## AIC: 991.3028 
## Number of sites: 75
## 
## 
##  WARNING!!!
##  The estimated scale of effect extends beyond the maximum distance specified. 
##  Consider increasing max_D in `kernel_prep` to ensure accurate estimation of scale.
plogis(opt_umf.p$opt_mod@estimates@estimates$det@estimates[[1]])
## [1] 0.5424162

Assess the kernel and marginal effects plots.

plot(opt_umf.p)

plot_marginal_effects(opt_umf.p)

7.1.1 Project model

Finally, we may want to project our fitted model across the landscape to make spatial predictions. To do this, we need to scale and center our raster surfaces after applying the kernel transformations. The resulting SpatRaster object can then be used with the terra::predict function. Depending upon the spatial coverage of your sample points, you may find predicted values across a broader region unrealistic. In these instances, you may need to clamp the scaled rasters to a reasonable range relative to the values observed at your sample points.
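The clamping idea can be sketched in base R. This is a minimal illustration only (clamp_to_observed is a hypothetical helper); kernel_scale.raster performs this internally via its clamp argument.

```r
# Minimal sketch of clamping (hypothetical helper; kernel_scale.raster's
# clamp argument does this for you): restrict values to the range
# observed at the sample points.
clamp_to_observed <- function(x, observed) {
  pmin(pmax(x, min(observed)), max(observed))
}
clamp_to_observed(c(-5, 0.2, 7), observed = c(-1, 0, 1))
## [1] -1.0  0.2  1.0
```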

## Project model 
rast_scale.center <- kernel_scale.raster(raster_stack = rs,
                                         multiScaleR = opt_umf.p,
                                         scale_center = TRUE,
                                         clamp = TRUE,
                                         pct_mx = 0.00)
## 
## Smoothing spatRaster 1 of 2: bin1 at sigma = 51
## 
## Smoothing spatRaster 2 of 2: cont2 at sigma = 185
## Warning: Could not inspect fitted model frame; site-level covariates were not
## added.
plot(rast_scale.center)

ab.mod_pred <- terra::predict(rast_scale.center, 
                             opt_umf.p$opt_mod, 
                             type = 'state')

plot(ab.mod_pred)

7.2 Binomial Occurrence Model

Now we’ll simulate occurrence data suitable for analysis with unmarked. Preliminary experience has shown that simulation parameters can be harder to recover when using an occupancy model in unmarked with multiScaleR. We’ll use the same raster surfaces.

s_dat.occ <- sim_dat_unmarked(alpha = 0.25,
                              beta = c(-0.75,1),
                              kernel = 'gaussian',
                              sigma = c(225, 75),
                              n_points = 250,
                              n_surv = 5,
                              det = 0.5,
                              type = 'occ',
                              raster_stack = rs,
                              max_D = 800,
                              user_seed = 555)

plot(s_dat.occ$df$y ~ s_dat.occ$df$bin1)

plot(s_dat.occ$df$y ~ s_dat.occ$df$cont2)

Prepare inputs for use with multiScaleR

kernel_inputs <- kernel_prep(pts = s_dat.occ$pts,
                             raster_stack = rs,
                             max_D = 800,
                             kernel = 'gaus',
                             verbose = FALSE)

## Occupancy frame
umf <- unmarkedFrameOccu(y = s_dat.occ$y,
                         siteCovs = kernel_inputs$kernel_dat)

## Base unmarked model
(mod0_umf.occ <- unmarked::occu(~1 ~bin1 + cont2,
                                data = umf))
## 
## Call:
## unmarked::occu(formula = ~1 ~ bin1 + cont2, data = umf)
## 
## Occupancy (logit-scale):
##             Estimate    SE     z  P(>|z|)
## (Intercept)    0.546 0.159  3.44 5.76e-04
## bin1          -0.829 0.175 -4.72 2.33e-06
## cont2          0.841 0.173  4.86 1.17e-06
## 
## Detection (logit-scale):
##  Estimate     SE     z P(>|z|)
##     0.048 0.0785 0.612   0.541
## 
## AIC: 1307.227 
## Number of sites: 250
opt_umf.occ <- multiScale_optim(fitted_mod = mod0_umf.occ,
                                kernel_inputs = kernel_inputs,
                                n_cores = 8)
summary(opt_umf.occ)
## 
## Call:
## multiScale_optim(fitted_mod = mod0_umf.occ, kernel_inputs = kernel_inputs, 
##     n_cores = 8)
## 
## 
## Kernel used:
## gaussian
## 
## ***** Optimized Scale of Effect -- Sigma *****
## 
##           Mean       SE     2.5%    97.5%
## bin1  183.3404 60.39596 64.39061 302.2901
## cont2 139.1945 59.22802 22.54502 255.8440
## 
## 
##   ==================================== 
## 
## ***** Optimized Scale of Effect -- Distance *****
## 90% Kernel Weight
## 
##         Mean   2.5%  97.5%
## bin1  301.57 105.91 497.22
## cont2 228.95  37.08 420.83
## 
## 
##   ==================================== 
## 
##  *****     Fitted Model Summary     *****
## 
## 
## Call:
## unmarked::occu(formula = ~1 ~ bin1 + cont2, data = inputs$data, 
##     knownOcc = object@knownOcc)
## 
## Occupancy (logit-scale):
##             Estimate    SE     z  P(>|z|)
## (Intercept)    0.636 0.173  3.68 2.32e-04
## bin1          -0.983 0.173 -5.67 1.41e-08
## cont2          1.009 0.200  5.04 4.78e-07
## 
## Detection (logit-scale):
##  Estimate     SE     z P(>|z|)
##    0.0498 0.0783 0.637   0.524
## 
## AIC: 1293.509 
## Number of sites: 250
plogis(opt_umf.occ$opt_mod@estimates@estimates$det@estimates[[1]])
## [1] 0.5124544

We did a decent job of recovering the simulated values. Note that the sample size was increased for this simulation; previous explorations suggested that smaller sample sizes (e.g., 100 points) did not contain enough information to optimize the scales of effect (sigma). There is opportunity for further exploration of the data needs and limitations for estimating scales of effect, especially with more complex models such as those fit with unmarked.

7.2.1 Project model

## Project model 
rast_scale.center <- kernel_scale.raster(raster_stack = rs,
                                         multiScaleR = opt_umf.occ,
                                         scale_center = TRUE,
                                         clamp = TRUE)
## 
## Smoothing spatRaster 1 of 2: bin1 at sigma = 183
## 
## Smoothing spatRaster 2 of 2: cont2 at sigma = 139
## Warning: Could not inspect fitted model frame; site-level covariates were not
## added.
plot(rast_scale.center)

occ.mod_pred <- terra::predict(rast_scale.center, 
                             opt_umf.occ$opt_mod, 
                             type = 'state')

plot(occ.mod_pred)

8 Other model classes

multiScaleR has been developed to work with any model class that has an update function. It has been tested with models fit using several different packages. If a particular model class does not work, please let me know so I can troubleshoot and try to incorporate functionality for it. In the example below, I demonstrate fitting a zero-inflated model with the pscl package. Notably, the zero-inflation term is a spatial variable that is also smoothed during the analysis. This requires some manual processing to simulate the data.
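As a minimal sketch of what that requirement means, any model class supporting update() can be refit after its covariates change (toy glm example for illustration only; not part of the pscl analysis below):

```r
# Toy illustration of the update() requirement: multiScaleR repeatedly
# refits the model as kernel-smoothed covariates change, which relies on
# a working update() method for the model class.
set.seed(1)
toy <- data.frame(y = rpois(50, 2), x = rnorm(50))
m0 <- glm(y ~ x, family = poisson, data = toy)
toy$x <- toy$x * 2            # stand-in for a re-smoothed covariate
m1 <- update(m0, data = toy)  # refit against the modified data
```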

set.seed(12345)
rs <- sim_rast(user_seed = 999, 
               dim = 500, resolution = 30)
rs <- terra::subset(rs, c(1,3))

## Zero-inflated parameters
zi_alpha <- -0.25
zi_beta <- -1

n_points <- 400
alpha <- 0.5
beta <- 0.75
kernel <- 'gaussian'
sigma <- c(400, # For main effect, bin1
           200) # For zero-inflated, cont1
StDev <- 2  # Negative binomial dispersion

## Generate random points
bnd <- buffer(vect(ext(rs[[1]])), -1000)
pts <- spatSample(x = rs[[1]], 
                  size = n_points, 
                  method = 'random',
                  ext = ext(bnd),
                  replace = FALSE,
                  as.points = TRUE)

kernel_out <- kernel_prep(pts = pts,
                          raster_stack = rs,
                          max_D = 1500,
                          kernel = kernel,
                          sigma = sigma,
                          verbose = FALSE)

## ZINB simulation
zi_prob <- plogis(zi_alpha + zi_beta*kernel_out$kernel_dat$cont1)

# Simulate zero-inflated counts
# 1 = excess zero, 0 = from Poisson
is_zero <- rbinom(length(zi_prob), size = 1, prob = zi_prob)  

obs <- exp(alpha + beta*kernel_out$kernel_dat$bin1)

obs.zinb <- ifelse(is_zero, 0,
                   rnbinom(length(is_zero),
                           mu = obs,
                           size = StDev))

dat <- data.frame(cnt = obs.zinb,
                  kernel_out$kernel_dat)
with(dat, plot(cnt ~ bin1))

With the data generated, we can now fit our zero-inflated model, prepare inputs with kernel_prep, and optimize with multiScaleR.

library(pscl)
## Classes and Methods for R originally developed in the
## Political Science Computational Laboratory
## Department of Political Science
## Stanford University (2002-2015),
## by and under the direction of Simon Jackman.
## hurdle and zeroinfl functions by Achim Zeileis.
zinb_mod <- pscl::zeroinfl(cnt ~ bin1 | cont1,
                           dist = 'negbin',
                           data = dat)

kernel_inputs <- kernel_prep(pts = pts,
                             kernel = kernel,
                             max_D = 1500,
                             raster_stack = rs,
                             verbose = FALSE)

NOTE: It is highly encouraged to fit your model using the fully qualified call pscl::zeroinfl(), as in the example above.

zinb_opt <- multiScale_optim(zinb_mod, 
                             kernel_inputs,
                             n_cores = 4)

We did a pretty good job in this simple scenario. Note the larger sample size, which was necessary to obtain good estimates for all parameters. Also, while the scale of effect for cont1 was estimated relatively accurately, there is much greater uncertainty in its estimate than in that for bin1.

summary(zinb_opt)
## 
## Call:
## multiScale_optim(fitted_mod = zinb_mod, kernel_inputs = kernel_inputs, 
##     n_cores = 4)
## 
## 
## Kernel used:
## gaussian
## 
## ***** Optimized Scale of Effect -- Sigma *****
## 
##           Mean       SE      2.5%    97.5%
## bin1  340.6336 45.13498 251.89872 429.3684
## cont1 180.5352 63.92097  54.86737 306.2031
## 
## 
##   ==================================== 
## 
## ***** Optimized Scale of Effect -- Distance *****
## 90% Kernel Weight
## 
##         Mean   2.5%  97.5%
## bin1  560.29 414.34 706.25
## cont1 296.95  90.25 503.66
## 
## 
##   ==================================== 
## 
##  *****     Fitted Model Summary     *****
## 
## 
## Call:
## pscl::zeroinfl(formula = cnt ~ bin1 | cont1, data = dat, dist = "negbin")
## 
## Pearson residuals:
##     Min      1Q  Median      3Q     Max 
## -1.1449 -0.6028 -0.4006  0.3115  5.7773 
## 
## Count model coefficients (negbin with log link):
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  0.51167    0.09169   5.581  2.4e-08 ***
## bin1         0.88011    0.06789  12.965  < 2e-16 ***
## Log(theta)   0.86488    0.27752   3.116  0.00183 ** 
## 
## Zero-inflation model coefficients (binomial with logit link):
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  -0.3549     0.2033  -1.746   0.0809 .  
## cont1        -1.0701     0.2028  -5.276 1.32e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 
## 
## Theta = 2.3747 
## Number of iterations in BFGS optimization: 12 
## Log-likelihood: -522.8 on 5 Df
plot(zinb_opt)

plot_marginal_effects(zinb_opt)

9 Other Functions & Features

Below is a brief walk-through of the other functions related to scales of effect that are included with the multiScaleR package.

9.1 kernel_dist

In some cases, a direct assessment of scale of effect distance is desired.

## With fitted model
kernel_dist(opt2)
##         Mean   2.5%  97.5%
## land1 392.22 284.00 500.44
## land2 821.66 666.67 976.65
## Distance of 95% kernel
kernel_dist(opt2, prob = 0.95)
##         Mean   2.5%   97.5%
## land1 467.35 338.40  596.31
## land2 979.07 794.39 1163.75

It’s also possible to calculate the distance encompassed by specified kernels without a fitted model.

kernel_dist(kernel = "gaussian", sigma = 100, prob = 0.9)
## [1] 164.49
kernel_dist(kernel = "exp", sigma = 100, prob = 0.9)
## [1] 230.26
kernel_dist(kernel = "expow", sigma = 100, beta = 5, prob = 0.9)
## [1] 90.44
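For the Gaussian and exponential kernels, these distances can be checked analytically, assuming a 1-D half-normal and a negative-exponential weight function, respectively (consistent with the output above):

```r
# Hedged analytical check of kernel_dist(): with a half-normal kernel,
# 90% of the weight lies within sigma * qnorm(0.95); with a negative-
# exponential kernel, within sigma * log(10).
sigma <- 100
round(sigma * qnorm(0.95), 2)  # gaussian
## [1] 164.49
round(sigma * log(10), 2)      # exponential
## [1] 230.26
```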

9.2 plot_kernel

This is a generic function to visualize how kernel weight decays with distance.

plot_kernel(kernel = 'exp',
            sigma = 50)

plot_kernel(kernel = 'expow',
            beta = 5,
            sigma = 50)

plot_kernel(kernel = 'gaussian',
            sigma = 50)

plot_kernel(kernel = 'gaussian',
            sigma = 50,
            scale_dist = FALSE)

plot_kernel(kernel = 'gaussian',
            sigma = 50,
            add_label = FALSE)

9.3 sim_rast

Raster surfaces can be simulated that are amenable for use with the sim_dat function.

r_sim1 <- sim_rast(dim = 100,
                   resolution = 30,
                   user_seed = 555)

r_sim2 <- sim_rast(dim = 100,
                   resolution = 30,
                   autocorr_range1 = 100,
                   autocorr_range2 = 1,
                   sill = 10,
                   user_seed = 555)

plot(r_sim1)

plot(r_sim2)

9.4 sim_dat

This function will simulate data from scaled raster surfaces.

s_dat <- sim_dat(alpha = 0.25,
                 beta = c(0.75, -0.75),
                 kernel = 'gaussian',
                 sigma = c(350, 200),
                 type = 'count',
                 n_points = 100,
                 raster_stack = terra::subset(r_sim1, c(1,4)),
                 min_D = 250,
                 user_seed = 999)

Look at the simulated covariate relationships with counts

plot(s_dat$df$y ~ s_dat$df$bin1)

plot(s_dat$df$y ~ s_dat$df$cont2)

Use the simulated data with multiScaleR

sim_mod <- glm(y ~ bin1 + cont2,
               family = 'poisson',
               data = s_dat$df)

kernel_inputs <- kernel_prep(pts = s_dat$pts,
                             raster_stack = r_sim1,
                             max_D = 1500,
                             kernel = 'gaussian',
                             verbose = FALSE)
sim_opt <- multiScale_optim(fitted_mod = sim_mod,
                            kernel_inputs = kernel_inputs)

Check results

summary(sim_opt)
## 
## Call:
## multiScale_optim(fitted_mod = sim_mod, kernel_inputs = kernel_inputs, 
##     n_cores = 5)
## 
## 
## Kernel used:
## gaussian
## 
## ***** Optimized Scale of Effect -- Sigma *****
## 
##           Mean       SE      2.5%    97.5%
## bin1  361.2943 47.08809 267.83745 454.7511
## cont2 187.3294 51.30107  85.51096 289.1478
## 
## 
##   ==================================== 
## 
## ***** Optimized Scale of Effect -- Distance *****
## 90% Kernel Weight
## 
##         Mean   2.5%  97.5%
## bin1  594.28 440.55 748.00
## cont2 308.13 140.65 475.61
## 
## 
##   ==================================== 
## 
##  *****     Fitted Model Summary     *****
## 
## 
## Call:
## glm(formula = y ~ bin1 + cont2, family = "poisson", data = dat)
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  0.16758    0.11909   1.407    0.159    
## bin1         0.83988    0.12463   6.739  1.6e-11 ***
## cont2       -0.71018    0.08522  -8.333  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 387.76  on 99  degrees of freedom
## Residual deviance: 102.39  on 97  degrees of freedom
## AIC: 294.71
## 
## Number of Fisher Scoring iterations: 5
plot(sim_opt)

plot(kernel_scale.raster(raster_stack = r_sim1,
                         multiScaleR = sim_opt))
## 
## Smoothing spatRaster 1 of 2: bin1 at sigma = 361
## 
## Smoothing spatRaster 2 of 2: cont2 at sigma = 187

plot_marginal_effects(sim_opt)