In this section, we will look at some common PyTorch functions that operate on tensors.
torch.Tensor.expand
Signature: Tensor.expand(*sizes) -> Tensor
The expand function returns a new view of the self tensor, with singleton dimensions expanded to a larger size. The arguments specify the desired output size. (“singleton dimensions” means dimensions of size 1.)
Basic Usage
Passing -1 as the size for a dimension means not changing the size of that dimension.
```python
x = torch.tensor([[1], [2], [3]]) # torch.Size([3, 1])
print(x.expand(3, 4)) # expand(-1, 4) gives the same result
# OUTPUT
# tensor([[1, 1, 1, 1],
#         [2, 2, 2, 2],
#         [3, 3, 3, 3]])
```
Wrong Usage
Only dimensions of size 1 can be expanded to a larger size:
```python
x = torch.tensor([[1], [2], [3]]) # torch.Size([3, 1])
x.expand(4, 4)
# RuntimeError: The expanded size of the tensor (4) must match the existing
# size (3) at non-singleton dimension 0.
```
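To see this concretely, here is a minimal sketch that catches the error raised when a non-singleton dimension is expanded to a different size (the exact message text may vary across PyTorch versions):

```python
import torch

x = torch.tensor([[1], [2], [3]])  # torch.Size([3, 1])

# Dimension 0 has size 3 (not 1), so it cannot be expanded to 4.
try:
    x.expand(4, 4)
except RuntimeError as e:
    print("expand failed:", e)

# Expanding only the singleton dimension (dim 1) works fine.
print(x.expand(3, 4).shape)  # torch.Size([3, 4])
```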
Why use it?
The return value is only a view, not a new tensor. Therefore, if you only need to read (not write) the expanded tensor, using expand() saves a lot of GPU memory. Note that modifying the expanded tensor modifies the original as well.
```python
x = torch.tensor([[1], [2], [3]]) # torch.Size([3, 1])
y = x.expand(3, 4)
y[0, 0] = 100 # writes through the view
print(x)
# OUTPUT
# tensor([[100],
#         [  2],
#         [  3]])
```
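A quick way to confirm that expand() allocates no new storage is to compare data pointers; this is a small sketch (data_ptr() is used here purely for illustration):

```python
import torch

x = torch.tensor([[1], [2], [3]])  # torch.Size([3, 1])
y = x.expand(3, 4)

# The expanded tensor reuses the exact same storage as x.
print(y.data_ptr() == x.data_ptr())  # True

# Writing through the view is visible in the original tensor.
y[0, 0] = 100
print(x[0, 0].item())  # 100
```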
torch.Tensor.repeat
Signature: Tensor.repeat(*sizes) -> Tensor
Repeats this tensor along the specified dimensions. It is somewhat similar to torch.Tensor.expand(), but the arguments specify the number of repetitions along each dimension rather than the target size. Also, repeat() makes a deep copy, not a view.
```python
x = torch.tensor([1, 2, 3]) # torch.Size([3])
print(x.repeat(2))
# OUTPUT
# tensor([1, 2, 3, 1, 2, 3])
```
More sizes than the tensor has dimensions
If sizes has more entries than the self tensor has dimensions, additional dimensions of size 1 are prepended at the front before repeating. In the example below, x only has shape 3x1, while we pass in three parameters.
```python
x = torch.tensor([[1], [2], [3]]) # torch.Size([3, 1])
print(x.repeat(2, 1, 2).shape) # x is treated as shape (1, 3, 1)
# OUTPUT
# torch.Size([2, 3, 2])
```
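Since repeat() copies data while expand() does not, a data_ptr() comparison makes the difference visible; a minimal sketch:

```python
import torch

x = torch.tensor([[1], [2], [3]])  # torch.Size([3, 1])

expanded = x.expand(3, 4)  # view: no new memory
repeated = x.repeat(1, 4)  # copy: fresh memory, same shape here

print(expanded.shape, repeated.shape)       # both torch.Size([3, 4])
print(expanded.data_ptr() == x.data_ptr())  # True  (shared storage)
print(repeated.data_ptr() == x.data_ptr())  # False (new storage)

# Writing to the copy leaves the original untouched.
repeated[0, 0] = 99
print(x[0, 0].item())  # 1
```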
torch.Tensor.transpose
Signature: torch.transpose(input, dim0, dim1) -> Tensor
Signature: torch.Tensor.transpose(dim0, dim1) -> Tensor
Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped. 
Therefore, as the examples below show, x.transpose(0, 1) and x.transpose(1, 0) are the same.
```python
x = torch.randn(2, 3)
print(x.transpose(0, 1).shape)
print(x.transpose(1, 0).shape)
# OUTPUT
# torch.Size([3, 2])
# torch.Size([3, 2])
```
```python
y = torch.randn(2, 3, 4)
print(y.transpose(0, 2).shape)
# OUTPUT
# torch.Size([4, 3, 2])
```
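Like expand(), transpose() returns a view: the data is not moved, only the strides change, which leaves the result non-contiguous. A small sketch:

```python
import torch

x = torch.randn(2, 3)
t = x.transpose(0, 1)

print(t.shape)                       # torch.Size([3, 2])
print(t.data_ptr() == x.data_ptr())  # True: same storage, different strides
print(t.is_contiguous())             # False

# For a 2-D tensor, transpose(0, 1) is the usual matrix transpose.
print(torch.equal(t, x.T))  # True
```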
torch.Tensor.permute
Signature: torch.Tensor.permute(dims) -> Tensor
Signature: torch.permute(input, dims) -> Tensor
This function reorders the dimensions. See the example below.
```python
y = torch.randn(2, 3, 4) # Shape: torch.Size([2, 3, 4])
print(y.permute(0, 2, 1).shape)
# OUTPUT
# torch.Size([2, 4, 3])
```
Let’s take a closer look at y.permute(0, 2, 1) as an example.
- The first argument 0 means that the new tensor’s first dimension is the original dimension 0, so its size is 2.
- The second argument 2 means that the new tensor’s second dimension is the original dimension 2, so its size is 4.
- The third argument 1 means that the new tensor’s third dimension is the original dimension 1, so its size is 3.
Finally, the result shape is torch.Size([2, 4, 3]). 
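The index mapping can also be checked element-wise: position (i, k, j) of y.permute(0, 2, 1) holds the value at position (i, j, k) of y. A minimal sketch using a small arange tensor:

```python
import torch

y = torch.arange(24).reshape(2, 3, 4)  # Shape: torch.Size([2, 3, 4])
p = y.permute(0, 2, 1)

print(p.shape)  # torch.Size([2, 4, 3])

# p[i, k, j] == y[i, j, k] for every index triple.
print(p[1, 3, 2].item() == y[1, 2, 3].item())  # True

# Applying the inverse permutation recovers the original tensor.
print(torch.equal(p.permute(0, 2, 1), y))  # True
```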
torch.Tensor.view / torch.Tensor.reshape
Signature: Tensor.view(*shape) -> Tensor
Signature: Tensor.reshape(*shape) -> Tensor
Reshapes the tensor to the given shape.
The function reshape() returns a view of the original tensor when possible, and makes a new copy otherwise.
The function view() never copies: it only works when the requested shape is compatible with the tensor's memory layout (see here); otherwise it raises an error. This guarantees no GPU memory is spent on a hidden copy.
```python
x = torch.randn(4, 3)
print(x.view(2, 6).shape)
print(x.view(-1).shape) # -1 lets PyTorch infer the size
# OUTPUT
# torch.Size([2, 6])
# torch.Size([12])
```
```python
x = torch.randn(4, 3)
print(x.reshape(6, 2).shape)
# OUTPUT
# torch.Size([6, 2])
```
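The difference shows up on non-contiguous tensors, e.g. after a transpose: view() refuses, while reshape() silently falls back to a copy. A small sketch:

```python
import torch

x = torch.randn(4, 3)
t = x.transpose(0, 1)  # non-contiguous view, shape (3, 4)

try:
    t.view(12)  # fails: memory layout is incompatible with this shape
except RuntimeError as e:
    print("view failed:", e)

flat = t.reshape(12)  # works: reshape copies when it has to
print(flat.shape)     # torch.Size([12])

# Calling .contiguous() first also makes view() possible.
print(t.contiguous().view(12).shape)  # torch.Size([12])
```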
torch.cat
Signature: torch.cat(tensors, dim=0, out=None) -> Tensor
Concatenates the given sequence of tensors in the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty. For how to determine the dim, please refer to my previous article. 
```python
x = torch.randn(2, 3)
print(torch.cat([x, x], dim=0).shape)
print(torch.cat([x, x], dim=1).shape)
# OUTPUT
# torch.Size([4, 3])
# torch.Size([2, 6])
```
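The shape constraint is worth seeing in action: all dimensions except the concatenating one must match. A small sketch:

```python
import torch

x = torch.randn(2, 3)
z = torch.randn(5, 3)  # differs from x only in dim 0

# OK: shapes agree everywhere except the concatenating dimension (dim 0).
print(torch.cat([x, z], dim=0).shape)  # torch.Size([7, 3])

# Not OK: concatenating along dim 1 requires dim 0 to match (2 vs 5).
try:
    torch.cat([x, z], dim=1)
except RuntimeError as e:
    print("cat failed:", e)
```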
torch.stack
Signature: torch.stack(tensors, dim=0, out=None) -> Tensor
Concatenates a sequence of tensors along a new dimension. See example below.
```python
x = torch.randn(2, 3) # Shape: torch.Size([2, 3])
print(torch.stack([x, x], dim=0).shape)
print(torch.stack([x, x], dim=1).shape)
# OUTPUT
# torch.Size([2, 2, 3])
# torch.Size([2, 2, 3])
```
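One way to remember stack() is that it is just cat() after giving each tensor the new dimension with unsqueeze(); a minimal sketch:

```python
import torch

x = torch.randn(2, 3)
y = torch.randn(2, 3)

s = torch.stack([x, y], dim=0)                          # torch.Size([2, 2, 3])
c = torch.cat([x.unsqueeze(0), y.unsqueeze(0)], dim=0)  # same result

print(s.shape)
print(torch.equal(s, c))  # True
```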
torch.vstack/hstack
torch.vstack(...) stacks the tensors vertically, which is equivalent to torch.cat(..., dim=0) after reshaping any 1-D tensors to rows.
torch.hstack(...) stacks the tensors horizontally, which is equivalent to torch.cat(..., dim=1) for tensors of two or more dimensions (and dim=0 for 1-D tensors).
```python
x = torch.randn(2, 3) # Shape: torch.Size([2, 3])
print(torch.vstack([x, x]).shape)
print(torch.hstack([x, x]).shape)
# OUTPUT
# torch.Size([4, 3])
# torch.Size([2, 6])
```
torch.split
Signature: torch.split(tensor, split_size_or_sections, dim=0)
- If split_size_or_sections is an integer, then the tensor will be split into chunks of that size along dim (if possible; otherwise, the last chunk will be smaller).
```python
x = torch.randn(4, 3) # Shape: torch.Size([4, 3])
a, b = torch.split(x, 2)
print(a.shape, b.shape)
# OUTPUT
# torch.Size([2, 3]) torch.Size([2, 3])
```
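When the size along dim is not divisible by the chunk size, the last chunk is simply smaller; a quick sketch:

```python
import torch

x = torch.randn(4, 3)  # Shape: torch.Size([4, 3])

# Chunk size 3 does not divide 4 evenly: chunks of 3 and 1 rows.
a, b = torch.split(x, 3)
print(a.shape, b.shape)  # torch.Size([3, 3]) torch.Size([1, 3])
```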
- If split_size_or_sections is a list, then the tensor will be split into len(split_size_or_sections) chunks, whose sizes along dim are given by split_size_or_sections.
```python
x = torch.randn(4, 3) # Shape: torch.Size([4, 3])
a, b = torch.split(x, [1, 3])
print(a.shape, b.shape)
# OUTPUT
# torch.Size([1, 3]) torch.Size([3, 3])
```
torch.vsplit/hsplit
These are the splitting counterparts of torch.vstack and torch.hstack: v means vertically, along dim=0, and h means horizontally, along dim=1. Note that an integer argument here means the number of sections, not the chunk size.
```python
# The following are equivalent:
x = torch.randn(4, 3)
a, b = torch.vsplit(x, 2)               # split into 2 sections along dim 0
c, d = torch.tensor_split(x, 2, dim=0)
print(a.shape, c.shape)
# OUTPUT
# torch.Size([2, 3]) torch.Size([2, 3])
```
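The difference in integer semantics between vsplit() and split() is easy to demonstrate; a small sketch (assuming a row count divisible by the section count, which vsplit requires):

```python
import torch

x = torch.randn(6, 3)

# vsplit: the integer is the NUMBER of sections -> 2 pieces of 3 rows each.
a, b = torch.vsplit(x, 2)
print(a.shape, b.shape)  # torch.Size([3, 3]) torch.Size([3, 3])

# split: the integer is the SIZE of each chunk -> 3 pieces of 2 rows each.
chunks = torch.split(x, 2, dim=0)
print([tuple(c.shape) for c in chunks])  # [(2, 3), (2, 3), (2, 3)]
```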
torch.flatten
Signature: torch.flatten(input, start_dim=0, end_dim=-1) -> Tensor
Flattens the given dimensions, from start_dim to end_dim, into a single dimension. This is especially useful when converting a 3D (image) tensor into a flat vector.
```python
x = torch.randn(2, 4, 4)
print(torch.flatten(x).shape)
print(torch.flatten(x, start_dim=1).shape)
# OUTPUT
# torch.Size([32])
# torch.Size([2, 16])
```
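A typical use is flattening a batch of images before a fully connected layer, keeping the batch dimension intact; a minimal sketch:

```python
import torch

batch = torch.randn(8, 3, 32, 32)  # 8 RGB images of 32x32

# Flatten everything except the batch dimension (dim 0).
flat = torch.flatten(batch, start_dim=1)
print(flat.shape)  # torch.Size([8, 3072]); 3 * 32 * 32 = 3072

# Flattening with the defaults collapses ALL dimensions.
print(torch.flatten(batch).shape)  # torch.Size([24576])
```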