Details on fitting¶
Here we describe how the polyclonal
package actually fits the models.
The basic idea is to minimize the difference between the predicted and measured variant-level escape probabilities, \(p_v\left(c\right)\), using a loss function that is not overly sensitive to outliers. In addition, regularization terms encourage the parameters to obey biologically motivated constraints.
Implementation¶
The fitting is implemented in the Polyclonal.fit
method, which allows adjustment of the weights for the regularizations. By default, the optimization uses the gradient-based L-BFGS-B method implemented in scipy.optimize.minimize and runs until the minimization converges.
Some key details about the fitting are as follows:
By default, the fitting first fits a “site-level” model in which all mutations at a site are lumped together, so there are just two characters (wildtype and mutant). The escape values from this initial site-level fit are then used to initialize the full mutation-level escape values, which are then further optimized. The idea is that first fitting a simpler model with fewer parameters helps get the parameters into a “reasonable” space before full-model optimization. This option is implemented via the
fit_site_level_first
parameter to Polyclonal.fit
, and it is recommended, as testing indicates it helps.By default, if you are using free-parameter Hill coefficients or non-neutralized fractions, a model with those parameters fixed is fit first via
fit_fixed_first
. It is fit with stronger regularization on the activities (via fit_fixed_first_reg_activity_weight
) to keep epitope activities from dropping too low to be picked up in the subsequent all-parameter optimization. When this option is used, the site-level model is fit for this fixed model rather than the later full model.By default, if there are multiple variants with the same mutations, they are treated as independent measurements that are fit separately. This can be changed to “collapse” them into a single variant that is given a weight proportional to the number of constituent variants, which is then used when calculating the loss. This option is implemented via
collapse_identical_variants
during the initialization of a Polyclonal
object. It speeds up fitting without (usually) substantially changing the fitting results. However, do not collapse identical variants if you are using bootstrapping.
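The collapsing of identical variants described above can be sketched as follows. This is a minimal illustration, not the polyclonal implementation; the function name is hypothetical, and a real implementation would also carry along the measured escape probabilities:

```python
# Sketch of collapsing identical variants into one weighted entry: each
# unique set of mutations is kept once, weighted by its multiplicity.
from collections import Counter

def collapse_variants(variant_mutations):
    """Map each unique mutation set to a weight (its multiplicity)."""
    counts = Counter(frozenset(muts) for muts in variant_mutations)
    return dict(counts)

variants = [("M1A",), ("M1A",), ("M1A", "G2C"), ()]
collapsed = collapse_variants(variants)
# The weighted loss then sums weight * per-variant loss once per unique
# variant instead of looping over every duplicate separately.
```

This is why collapsing speeds up fitting without (usually) changing the result: the loss contribution of the duplicates is preserved exactly through the weights.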
The Polyclonal.fit
method also allows you to adjust the weights on the regularizations. The defaults should be sensible, but you may want to try adjusting them. You can also adjust the \(\delta\) values for the Pseudo-Huber loss / regularization terms (see below), although the defaults are probably good for these, as they are chosen to be L1-like over most of the range that values are expected to span.
Loss function¶
We use a scaled Pseudo-Huber loss function on the difference between the predicted and measured escape probabilities. The Pseudo-Huber function is defined as \(\hat{h}_{\delta}\left(x\right) = \delta^2 \left(\sqrt{1 + \left(x/\delta\right)^2} - 1\right)\), where \(\delta\) is a parameter that indicates when the loss transitions from being quadratic (L2-like) to linear (L1-like) in \(x\). We actually use a scaled Pseudo-Huber function, \(h_{\delta}\left(x\right) = \hat{h}_{\delta}\left(x\right)/\delta\), so the slope of the loss is one in the linear range. The rationale for a Pseudo-Huber loss is robustness to outliers (it is L1-like for large residuals).
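The scaled Pseudo-Huber function can be written directly from its definition. A minimal sketch (the function name is hypothetical, not the polyclonal API):

```python
import math

def scaled_pseudo_huber(delta, x):
    """Scaled Pseudo-Huber h_delta(x) = delta * (sqrt(1 + (x/delta)**2) - 1).

    Quadratic (L2-like, ~x**2 / (2*delta)) for |x| << delta;
    linear with slope approaching one for |x| >> delta.
    """
    return delta * (math.sqrt(1 + (x / delta) ** 2) - 1)
```

The scaling by \(1/\delta\) is what makes the slope one in the linear regime, so the choice of \(\delta\) moves the quadratic-to-linear transition point without rescaling the loss on large residuals.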
Specifically, let \(r_v\left(c\right) = p_v\left(c\right) - y_{v,c}\) be the residual for the predicted escape probability of variant \(v\) at concentration \(c\), where we are using \(y_{v,c}\) to denote the measured value. Then the loss for variant \(v\) at concentration \(c\) is \(L_{\delta_{\rm{loss}}}\left(r_v\left(c\right)\right) = h_{\delta_{\rm{loss}}}\left(r_v\left(c\right)\right)\), and the overall loss is the sum of these terms over all variants \(v\) and concentrations \(c\):

\[L = \sum_{v} \sum_{c} h_{\delta_{\rm{loss}}}\left(r_v\left(c\right)\right)\]
Regularization¶
We also regularize the mutation escape values (\(\beta_{m,e}\)) and the epitope activities (\(a_{\rm{wt}, e}\)) based on the notions:
Most mutations should not mediate escape,
When a site is involved in escape for a given epitope, most mutations at that site will have similar-ish effects.
Epitopes should be mostly unique: a site involved in escape should usually only mediate escape from a single epitope.
Epitopes should be relatively spatially compact (requires structural information).
Epitope activities should be small (or negative) except when there is clear evidence to the contrary.
Regularization of escape values¶
We regularize the escape values \(\beta_{m,e}\) using a simple Pseudo-Huber function, so that

\[R_{\rm{escape}} = \lambda_{\rm{escape}} \sum_{m,e} h_{\delta_{\rm{escape}}}\left(\beta_{m,e}\right)\]

where \(\lambda_{\rm{escape}}\) is the strength of the regularization and \(\delta_{\rm{escape}}\) is the Pseudo-Huber delta parameter.
Regularization of spread of escape values at each site and epitope¶
We regularize the variance of the escape values at each site, so that

\[R_{\rm{spread}} = \lambda_{\rm{spread}} \sum_{e} \sum_{i} \frac{1}{M_i} \sum_{m \in i} \left(\beta_{m,e} - \bar{\beta}_{i,e}\right)^2\]

where \(i\) ranges over all sites, \(M_i\) is the number of mutations at site \(i\), \(m \in i\) indicates all mutations at site \(i\), and \(\bar{\beta}_{i,e} = \frac{1}{M_i}\sum_{m \in i} \beta_{m,e}\) is the mean escape value at site \(i\) for epitope \(e\).
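As a sketch of this per-site variance penalty for a single site and epitope (the function name is hypothetical, and the weighting may differ in detail from polyclonal's implementation):

```python
def site_variance_penalty(betas_at_site, lam):
    """Penalty lam * variance of the escape values at one site for one epitope.

    Identical escape values at a site incur no penalty; spread-out
    values are penalized, encouraging similar effects within a site.
    """
    m = len(betas_at_site)
    mean = sum(betas_at_site) / m
    return lam * sum((b - mean) ** 2 for b in betas_at_site) / m
```

A site where every mutation has the same escape value contributes zero, so this term only pushes against disagreement among mutations at the same site.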
Regularization of spatial spread of epitopes¶
To regularize the spatial spread of epitopes, we first define a differentiable measure of the average absolute value of escape at a site for an epitope \(e\) as
where \(\epsilon\) is a small number and \(m\) ranges over all mutations at site \(r\).
We then further assume that we have an experimental measure \(d_{r,r'}\) of the distance between each pair of residues \(r\) and \(r'\).
The regularization term is then:
Note how this term has weights enabling regularization on both the distances and squared distances. The factor of \(\frac{1}{2}\) is to avoid double counting pairs, noting that the diagonal elements are always zero since the self distances are zero.
Regularization of epitope uniqueness¶
To ensure epitopes contain largely unique sites, we define the following regularization term, which uses the differentiable average absolute value of escape at a site for an epitope, \(s_{r,e}\), defined above:
where \(e\) and \(e'\) range over all pairs of epitopes, and \(r\) ranges over all sites.
Regularization of epitope uniqueness\(^2\)¶
This is a second regularization to ensure epitopes contain largely unique sites, but it operates on the squared product of escape at a site, and so more strongly penalizes very large escape shared at the same site while penalizing weak shared constraint less:
where \(i\) and \(j\) range over all epitopes in \(E\), \(k\) is the site index, and \(m' \in S_{k}\) represents all mutations at site \(k\).
Regularization of epitope activities¶
We regularize the epitope activities \(a_{\rm{wt}, e}\) to be close to zero using a Pseudo-Huber function. Note that the activities are confounded with the concentration scale, so for this regularization we first compute the geometric mean concentration \(c_{GM}\) and incorporate it so the strength of the regularization is about the same regardless of the concentrations. Specifically, the regularization is:

where \(\lambda_{\rm{activity}}\) is the strength of the regularization and \(\delta_{\rm{activity}}\) is the Pseudo-Huber delta parameter.
Regularization of Hill coefficients¶
We regularize the Hill coefficients \(n_e\) to be close to one using a penalty that essentially operates on the log of \(n_e\), so the values are bounded by 0 and infinity:

where \(\lambda_{\rm{Hill coefficient}}\) is the strength of the regularization. Note that it is important that the Hill coefficients be fairly strongly regularized. For instance, a value of \(\lambda_{\rm{Hill coefficient}} = 50\) creates a penalty of 200 when \(n_e = 1/3\) or \(n_e = 3\), a penalty of 50 when \(n_e = 1/2\) or \(n_e = 2\), and a penalty of 12.5 when \(n_e = 1.5\) or \(n_e = 1 / 1.5\). Note also that the minimization bounds \(0 < n_e\).
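The worked numbers above are consistent with a penalty that is zero at \(n_e = 1\) and symmetric under \(n_e \to 1/n_e\). A minimal sketch under that assumption (function name hypothetical; this may differ in detail from polyclonal's actual penalty):

```python
def hill_penalty(n_e, lam):
    """Penalty on Hill coefficient n_e: zero at n_e = 1, identical for
    n_e and 1/n_e, and growing as n_e moves away from one."""
    assert n_e > 0, "minimization bounds 0 < n_e"
    x = n_e if n_e >= 1 else 1 / n_e  # fold (0, 1) onto (1, inf)
    return lam * (x - 1) ** 2
```

With `lam = 50` this reproduces the penalties quoted above: 200 at \(n_e = 3\) or \(1/3\), 50 at \(n_e = 2\) or \(1/2\), and 12.5 at \(n_e = 1.5\) or \(1/1.5\).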
Regularization of non-neutralizable fraction¶
We regularize the non-neutralizable fraction \(t_e\) to be close to zero, and during the minimization also bound it so it cannot go below zero or above one half. The regularization is:

\[R_{\rm{non-neutralized-frac}} = \lambda_{\rm{non-neutralized-frac}} \, t_e^2\]

where \(\lambda_{\rm{non-neutralized-frac}}\) is the strength of the regularization. Note that the regularization should be fairly strong. For instance, a value of \(\lambda_{\rm{non-neutralized-frac}} = 1000\) creates a penalty of 10 when \(t_e = 0.1\) and a penalty of 62.5 when \(t_e = 0.25\). Note also that the minimization bounds \(0 \le t_e \le 0.5\).
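A minimal sketch of this quadratic penalty (function name hypothetical), which reproduces the worked numbers above:

```python
def non_neutralized_penalty(t_e, lam):
    """Quadratic penalty lam * t_e**2 on the non-neutralizable fraction.

    With lam = 1000 this gives 10 at t_e = 0.1 and 62.5 at t_e = 0.25.
    """
    assert 0 <= t_e <= 0.5, "minimization bounds 0 <= t_e <= 0.5"
    return lam * t_e ** 2
```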
Gradients used for optimization¶
Here are the formulas used to calculate the gradients in the optimization.
Gradient of loss function¶
For the loss function, the gradients are as follows:
See below for how the sub-components that lead to these were calculated.
Calculating \(\frac{\partial \left[h_{\delta}\left(r\right)\right]}{\partial r}\)¶
We have, by differentiating the scaled Pseudo-Huber function defined above,

\[\frac{\partial h_{\delta}\left(r\right)}{\partial r} = \frac{r}{\delta \sqrt{1 + \left(r/\delta\right)^2}}.\]
Calculating \(\frac{\partial p_v\left(c\right)}{\partial \beta_{m,e}}\)¶
First, note
Next, note
where the last step uses the simplification here.
Finally, note
Putting it all together, we have:
Calculating \(\frac{\partial p_v\left(c\right)}{\partial a_{\rm{wt},e}}\)¶
The only difference from above is the sign, so:
Calculating \(\frac{\partial U_e\left(v, c\right)}{\partial n_e}\)¶
Calculating \(\frac{\partial U_e\left(v, c\right)}{\partial t_e}\)¶
Gradients of regularizations¶
Calculating \(\frac{\partial R_{\rm{escape}}}{\partial \beta_{m,e}}\)¶
Calculating \(\frac{\partial R_{\rm{spread}}}{\partial \beta_{m,e}}\)¶
Calculating \(\frac{\partial s_{r,e}}{\partial \beta_{m,e}}\)¶
Calculating \(\frac{\partial R_{\rm{spatial}}}{\partial \beta_{m,e}}\)¶
where \(r_m\) is the site of mutation \(m\).
Calculating \(\frac{\partial R_{\rm{uniqueness}}}{\partial \beta_{m,e}}\)¶
Calculating \(\frac{\partial R_{\rm{uniqueness}^2}}{\partial \beta_{m,e}}\)¶
Calculating \(\frac{\partial R_{\rm{activity}}}{\partial a_{\rm{wt}, e}}\)¶
Calculating \(\frac{\partial R_{\rm{Hill coefficient}}}{\partial n_e}\)¶
Calculating \(\frac{\partial R_{\rm{non-neutralized-frac}}}{\partial t_e}\)¶
Bootstrapping¶
For the bootstrapping implemented by PolyclonalBootstrap
, we start with a single pre-fit model to all the data. We then draw bootstrap replicates of the data used to fit that model, by default sampling the same variants at each concentration (see sample_by
option of PolyclonalBootstrap
). We then fit each of these bootstrapped models starting from the initial values from the pre-fit model on all of the data. Finally, the fit parameters or predictions from the models are summarized.
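The resampling scheme described above can be sketched as follows. This is an illustration only, not the PolyclonalBootstrap implementation; the function name is hypothetical:

```python
import random

def bootstrap_replicate(variants, concentrations, seed):
    """Draw variants with replacement, then reuse the same resampled
    variants at every concentration (one bootstrap replicate)."""
    rng = random.Random(seed)
    resampled = rng.choices(variants, k=len(variants))  # with replacement
    return [(v, c) for c in concentrations for v in resampled]

rep = bootstrap_replicate(["v1", "v2", "v3"], [0.25, 1.0], seed=0)
# A variant absent from the resample is missing at every concentration,
# which is why rare mutations can drop out of some replicates entirely.
```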
Note that mutations may not be present in some bootstrap replicates if they are only in a few variants; this can be assessed by looking at the frac_bootstrap_replicates
column in the output from PolyclonalBootstrap
.