In this article we describe the indexing operator for torch tensors and how it compares to the R indexing operator for arrays.
Torch’s indexing semantics are closer to numpy’s semantics than R’s. You will find a lot of similarities between this article and the
numpy indexing article in the official NumPy documentation.
Single element indexing for 1-D tensors works mostly as expected. Like R, it is 1-based. Unlike R, though, it accepts negative indices for indexing from the end of the array. (In R, negative indices are used to remove elements.)
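For example (a sketch, assuming the torch package is installed and loaded):

```r
library(torch)

x <- torch_tensor(1:10)
x[1]   # first element: a tensor containing 1
x[-1]  # negative index counts from the end: a tensor containing 10
```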
You can also subset matrices and higher-dimensional arrays using the same syntax:
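For instance, reshaping a 1-D tensor into a 2 × 5 matrix and indexing it (a sketch, assuming the torch package is loaded):

```r
library(torch)

x <- torch_tensor(1:10)$reshape(c(2, 5))
x[1, 3]   # element in row 1, column 3
x[1, -1]  # element in row 1, last column
```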
Note that indexing a multidimensional tensor with fewer indices than dimensions raises an error, unlike in R, which would instead flatten the array and use linear indexing. For example:
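A sketch of this difference, assuming the torch package is loaded (wrapped in `try()` so the snippet runs to completion):

```r
library(torch)

x <- torch_tensor(1:10)$reshape(c(2, 5))
# For an R matrix m, m[1] would return the first element of the
# flattened array. For a torch tensor, a single index on a 2-D
# tensor is an error:
try(x[1])
```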
It is possible to slice and stride arrays to extract sub-arrays of the same number of dimensions, but of different sizes than the original. This is best illustrated by a few examples:
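A few slicing sketches, assuming the torch package is loaded (ranges are 1-based and inclusive, as in R):

```r
library(torch)

x <- torch_tensor(1:10)
x[2:5]  # elements 2 through 5
x[1:3]  # the first three elements
```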
You can also use the `1:10:2` syntax, which means: in the range from 1 to 10, take every second item. For example:
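A sketch of the step syntax, assuming the torch package is loaded:

```r
library(torch)

x <- torch_tensor(1:10)
x[1:10:2]  # every second element: 1, 3, 5, 7, 9
```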
Another special syntax is `N`, meaning the size of the specified dimension.
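For example, using the `N` symbol exported by torch (a sketch, assuming the package is loaded):

```r
library(torch)

x <- torch_tensor(1:10)
x[5:N]  # from element 5 up to the size of the dimension: 5, ..., 10
```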
Like in R, you can take all elements in a dimension by leaving an index empty.
Consider a matrix:
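Such a matrix could be created as follows (a sketch, assuming the torch package is loaded):

```r
library(torch)

# a 3 x 5 tensor filled row by row with 1..15
x <- torch_tensor(matrix(1:15, ncol = 5, byrow = TRUE))
x
```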
The following syntax will give you the first row:
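For instance, on a 3 × 5 tensor (recreated here so the snippet is self-contained; assumes the torch package is loaded):

```r
library(torch)

x <- torch_tensor(matrix(1:15, ncol = 5, byrow = TRUE))
x[1, ]  # first row: 1, 2, 3, 4, 5
```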
And this would give you the first 2 columns:
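Again on the same 3 × 5 tensor (a sketch, assuming the torch package is loaded):

```r
library(torch)

x <- torch_tensor(matrix(1:15, ncol = 5, byrow = TRUE))
x[, 1:2]  # first two columns of every row: a 3 x 2 tensor
```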
By default, when indexing by a single integer, this dimension will be dropped to avoid the singleton dimension:
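A sketch of the dropping behaviour, assuming the torch package is loaded:

```r
library(torch)

x <- torch_tensor(1:10)
# indexing with a single integer drops the dimension,
# leaving a 0-d (scalar) tensor
x[1]$shape
```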
You can optionally use the `drop = FALSE` argument to avoid dropping the dimension.
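For example (a sketch, assuming the torch package is loaded):

```r
library(torch)

x <- torch_tensor(1:10)
# the singleton dimension is kept: a 1-d tensor of length 1
x[1, drop = FALSE]$shape
```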
It’s possible to add a new dimension to a tensor using index-like syntax:
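For instance, using the `newaxis` sentinel exported by torch (a sketch, assuming the package is loaded):

```r
library(torch)

x <- torch_tensor(c(10))
x$shape            # a 1-d tensor of length 1
x[, newaxis]$shape # a new trailing dimension: shape 1 x 1
```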
You can also use `NULL` instead of the `newaxis` sentinel to insert a new dimension:
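A sketch of the `NULL` form, assuming the torch package is loaded:

```r
library(torch)

x <- torch_tensor(c(10))
x[, NULL]$shape  # same effect as newaxis: shape 1 x 1
```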