We demonstrate how to apply the different thinning functions available in seqgendiff. The following contains guidelines for the workflow of a single repetition in a simulation study.
The methods used here are described in Gerard (2020).
We will use simulated data for this vignette, though in practice you would obtain the RNA-seq counts from a real dataset.
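As a stand-in for a real dataset, a count matrix can be simulated from a negative binomial distribution; the dimensions and distribution parameters below are arbitrary choices for illustration:

```r
set.seed(1)
nsamp <- 10     # number of samples (columns)
ngene <- 10000  # number of genes (rows)
mat <- matrix(stats::rnbinom(n = nsamp * ngene, size = 10, mu = 100),
              nrow = ngene, ncol = nsamp)
```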
In each repetition of a simulation study, you should randomly subset your RNA-seq data so that your results do not depend on the quirks of a few individuals/genes. The function to do this is select_counts().
The default is to select the samples and genes at random, though there are several options for how the genes are chosen via the gselect argument.
If the row and column names of the original matrix are
NULL, then the row and column names of the returned submatrix contain the indices of the selected rows and columns of the original matrix.
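A minimal sketch of the subsetting step, assuming the simulated matrix `mat` from above (the sample and gene counts here are arbitrary):

```r
library(seqgendiff)
set.seed(1)
mat <- matrix(stats::rnbinom(n = 10 * 10000, size = 10, mu = 100),
              nrow = 10000, ncol = 10)

## Randomly keep 6 samples and 1000 genes (the default selection scheme)
submat <- select_counts(mat = mat, nsamp = 6, ngene = 1000)
dim(submat)
```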
If you are exploring the effects of heterogeneous library sizes, use
thin_lib(). You specify the thinning factor on the log-2 scale, so a value of 0 means no thinning, a value of 1 means thin by half, a value of 2 means thin by 1/4, etc. You need to specify this scaling factor for all samples.
We can verify that thinning was performed correctly by looking at the empirical thinning amount.
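A sketch of both the call and the check, assuming a simulated submatrix `submat`; the thinning factors are arbitrary:

```r
library(seqgendiff)
set.seed(1)
submat <- matrix(stats::rnbinom(n = 6 * 1000, size = 10, mu = 100),
                 nrow = 1000, ncol = 6)

## One log2 thinning factor per sample:
## 0 = no thinning, 1 = keep about half the reads, 2 = about a quarter, ...
scaling_factor <- c(0, 1, 2, 3, 4, 5)
thout <- thin_lib(mat = submat, thinlog2 = scaling_factor)

## Empirical proportion of reads kept in each sample ...
colSums(thout$mat) / colSums(submat)
## ... should be close to the specified proportions
2 ^ -scaling_factor
```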
A similar function exists to thin gene-wise rather than sample-wise:
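A parallel sketch for thin_gene(), again with arbitrary simulated inputs; the gene-level ratios are noisier than the sample-level ones because each is based on far fewer counts:

```r
library(seqgendiff)
set.seed(1)
submat <- matrix(stats::rnbinom(n = 6 * 1000, size = 10, mu = 100),
                 nrow = 1000, ncol = 6)

## One log2 thinning factor per gene
genefac <- stats::runif(n = nrow(submat), min = 0, max = 2)
thout_gene <- thin_gene(mat = submat, thinlog2 = genefac)

## Empirical vs specified thinning for the first few genes
head(rowSums(thout_gene$mat) / rowSums(submat))
head(2 ^ -genefac)
```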
To uniformly thin all counts, use
thin_all(). This might be useful for determining read-depth suggestions. It takes as input a single universal scaling factor on the log-2 scale.
We can verify that we approximately halved all counts:
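For example, thinning everything by half (a sketch with simulated input):

```r
library(seqgendiff)
set.seed(1)
submat <- matrix(stats::rnbinom(n = 6 * 1000, size = 10, mu = 100),
                 nrow = 1000, ncol = 6)

## thinlog2 = 1 halves the expected total read count
thout_all <- thin_all(mat = submat, thinlog2 = 1)
sum(thout_all$mat) / sum(submat)  # should be close to 0.5
```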
Use thin_diff() for general thinning. For this function, you need to specify both the coefficient matrix and the design matrix.
```r
designmat <- cbind(rep(c(0, 1), each = ncol(submat) / 2),
                   rep(c(0, 1), length.out = ncol(submat)))
designmat
#>      [,1] [,2]
#> [1,]    0    0
#> [2,]    0    1
#> [3,]    0    0
#> [4,]    1    1
#> [5,]    1    0
#> [6,]    1    1
coefmat <- matrix(stats::rnorm(ncol(designmat) * nrow(submat)),
                  ncol = ncol(designmat),
                  nrow = nrow(submat))
head(coefmat)
#>            [,1]       [,2]
#> [1,]  0.2554242 -1.3119342
#> [2,]  0.6377614  0.5336341
#> [3,]  1.3052848  0.6763963
#> [4,]  0.1383671 -0.9041101
#> [5,] -0.1251100  1.3601036
#> [6,] -0.8867430 -0.5249070
```
Once we have the coefficient and design matrices, we can thin.
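Assuming the `submat`, `designmat`, and `coefmat` objects from the chunks above, the call itself is short:

```r
## Thin using the permuted design and its coefficients
thout_diff <- thin_diff(mat = submat,
                        design_perm = designmat,
                        coef_perm = coefmat)
```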
We can verify that we thinned correctly using the voom-limma pipeline.
We’ll plot the true coefficients against their estimates.
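One way to run this check is sketched below. It assumes `thout_diff` is the output of the thin_diff() call above, that the limma package is installed, and that the returned object contains the elements `mat`, `design_obs` (whose first column is an intercept), `designmat`, and `coefmat`:

```r
library(limma)

## Fit the voom-limma pipeline with the permuted design appended to the
## observed design
new_design <- cbind(thout_diff$design_obs, thout_diff$designmat)
vout <- limma::voom(counts = thout_diff$mat, design = new_design)
lout <- limma::lmFit(vout)
coefhat <- stats::coef(lout)[, -1, drop = FALSE]  # drop the intercept

## True thinning coefficients vs their estimates
graphics::plot(thout_diff$coefmat, coefhat,
               xlab = "True coefficient", ylab = "Estimate")
graphics::abline(a = 0, b = 1, lty = 2, col = 2)
```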
The difference between the design_fixed and design_perm arguments is that the rows of design_perm are permuted before thinning is applied. Without any other arguments, this makes the design variables independent of any surrogate variables. With the additional specification of the target_cor argument, we try to control the amount of correlation between the design variables in design_perm and any surrogate variables.
Let’s target for a correlation of 0.9 between the first surrogate variable and the first design variable, and a correlation of 0 between the first surrogate variable and the second design variable.
```r
target_cor <- matrix(c(0.9, 0), nrow = 2)
target_cor
#>      [,1]
#> [1,]  0.9
#> [2,]  0.0
thout_cor <- thin_diff(mat = submat,
                       design_perm = designmat,
                       coef_perm = coefmat,
                       target_cor = target_cor)
#> Note that optmatch uses a strange non-standard license:
#> https://cran.r-project.org/package=optmatch/LICENSE
#> 
#> If this doesn't work for you, try permute_method = "hungarian"
#> 
#> This message is displayed once per R session.
```
The first variable is indeed more strongly correlated with the estimated surrogate variable:
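One rough way to see this, assuming `thout_cor` from the call above: estimate a surrogate variable as the first left singular vector of the log-counts and correlate it with the permuted design. (A dedicated tool such as the sva package would give a better estimate; the sign of a singular vector is arbitrary, hence the absolute values.)

```r
## Crude surrogate-variable estimate: first left singular vector of the
## (sample x gene) log2-counts
sv_hat <- svd(log2(t(thout_cor$mat) + 0.5))$u[, 1]

## Correlation with each permuted design variable; the first should be
## noticeably larger in magnitude than the second
abs(cor(sv_hat, thout_cor$designmat))
```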
The actual correlation between the permuted design matrix and the surrogate variables will not equal the target correlation exactly, but we can estimate the actual correlation using the function effective_cor().
I am only using 50 iterations here for speed; in practice, you should stick to the defaults.
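A sketch of the call, assuming the `designmat` and `target_cor` objects from above and a single surrogate variable `sv` (simulated here purely for illustration); `iternum` is lowered from its default only for speed:

```r
library(seqgendiff)
set.seed(1)
designmat  <- cbind(rep(c(0, 1), each = 3), rep(c(0, 1), length.out = 6))
target_cor <- matrix(c(0.9, 0), nrow = 2)
sv         <- matrix(stats::rnorm(6), ncol = 1)  # illustrative surrogate variable

eout <- effective_cor(design_perm = designmat,
                      sv          = sv,
                      target_cor  = target_cor,
                      iternum     = 50)
eout
```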
For the special case when your design matrix is just a group indicator (that is, you have two groups of individuals), you can use the function
thin_2group(). Let’s generate data from the two-group model where 90% of genes are null and the non-null effects are gamma-distributed.
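A sketch with simulated input; `prop_null` gives the proportion of null genes, and the non-null effects are drawn by `signal_fun` with the parameters in `signal_params` (the gamma parameters here are arbitrary):

```r
library(seqgendiff)
set.seed(1)
submat <- matrix(stats::rnbinom(n = 6 * 1000, size = 10, mu = 100),
                 nrow = 1000, ncol = 6)

thout_2g <- thin_2group(mat = submat,
                        prop_null = 0.9,
                        signal_fun = stats::rgamma,
                        signal_params = list(shape = 1, rate = 1))

## Roughly 90% of the coefficients should be exactly zero
mean(thout_2g$coefmat == 0)
```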
We can again verify that we thinned appropriately using the voom-limma pipeline:
And we can plot the results:
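The check and the plot together, assuming `thout_2g` from the thin_2group() call above, that limma is installed, and the same ThinData element names as before:

```r
library(limma)

new_design <- cbind(thout_2g$design_obs, thout_2g$designmat)
vout <- limma::voom(counts = thout_2g$mat, design = new_design)
lout <- limma::lmFit(vout)
coefhat <- stats::coef(lout)[, 2]  # column 1 is the intercept

graphics::plot(thout_2g$coefmat, coefhat,
               xlab = "True coefficient", ylab = "Estimate")
graphics::abline(a = 0, b = 1, lty = 2, col = 2)
```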