Last Updated on November 15, 2022

PyTorch is an open-source deep learning framework based on the Python language. It allows you to build, train, and deploy deep learning models, offering a lot of versatility and efficiency.

PyTorch is primarily focused on tensor operations, where a tensor can be a number, a matrix, or a multi-dimensional array.

In this tutorial, we'll perform some basic operations on one-dimensional tensors, as they are complex mathematical objects and an essential part of the PyTorch library. Therefore, before going into the details and more advanced concepts, one should know the basics.

After going through this tutorial, you'll:

- Understand the basics of one-dimensional tensor operations in PyTorch.
- Learn about tensor types and shapes and perform tensor slicing and indexing operations.
- Be able to apply some methods on tensor objects, such as mean, standard deviation, addition, multiplication, and more.

Let's get started.

**Types and Shapes of One-Dimensional Tensors**

First off, let's import a few libraries we'll use in this tutorial.

```python
import torch
import numpy as np
import pandas as pd
```

If you have experience in other programming languages, the easiest way to understand a tensor is to consider it as a multidimensional array. Therefore, a one-dimensional tensor is simply a one-dimensional array, or a vector. In order to convert a list of integers to a tensor, apply the `torch.tensor()` constructor. For instance, we'll take a list of integers and convert it to various tensor objects.

```python
int_to_tensor = torch.tensor([10, 11, 12, 13])
print("Tensor object type after conversion: ", int_to_tensor.dtype)
print("Tensor object type after conversion: ", int_to_tensor.type())
```

```
Tensor object type after conversion: torch.int64
Tensor object type after conversion: torch.LongTensor
```

Also, you can apply the same method, `torch.tensor()`, to convert a float list to a float tensor.

```python
float_to_tensor = torch.tensor([10.0, 11.0, 12.0, 13.0])
print("Tensor object type after conversion: ", float_to_tensor.dtype)
print("Tensor object type after conversion: ", float_to_tensor.type())
```

```
Tensor object type after conversion: torch.float32
Tensor object type after conversion: torch.FloatTensor
```

Note that the elements of a list that need to be converted into a tensor must have the same type. Moreover, if you want to convert a list to a certain tensor type, torch also allows you to do that. The code lines below, for example, will convert a list of integers to a float tensor.

```python
int_list_to_float_tensor = torch.FloatTensor([10, 11, 12, 13])
print("Tensor type after conversion: ", int_list_to_float_tensor.type())
```

```
Tensor type after conversion: torch.FloatTensor
```
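Equivalently, you can pass a `dtype` argument to `torch.tensor()` to control the result's type directly; the snippet below is a minimal sketch of this alternative to the `torch.FloatTensor` constructor.

```python
import torch

# Passing dtype explicitly instead of using a type-specific constructor
explicit_float = torch.tensor([10, 11, 12, 13], dtype=torch.float32)
print(explicit_float.dtype)   # torch.float32
print(explicit_float.type())  # torch.FloatTensor
```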

Similarly, the `size()` and `ndimension()` methods allow you to find the size and dimensions of a tensor object.

```python
print("Size of the int_list_to_float_tensor: ", int_list_to_float_tensor.size())
print("Dimensions of the int_list_to_float_tensor: ", int_list_to_float_tensor.ndimension())
```

```
Size of the int_list_to_float_tensor: torch.Size([4])
Dimensions of the int_list_to_float_tensor: 1
```
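As a side note, the `shape` attribute reports the same information as calling `size()`; the short sketch below (recreating the same tensor) shows the two are interchangeable for inspection.

```python
import torch

int_list_to_float_tensor = torch.FloatTensor([10, 11, 12, 13])
# shape is an attribute equivalent to the size() method
print(int_list_to_float_tensor.shape)   # torch.Size([4])
print(int_list_to_float_tensor.size())  # torch.Size([4])
```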

For reshaping a tensor object, the `view()` method can be applied. It takes `rows` and `columns` as arguments. As an example, let's use this method to reshape `int_list_to_float_tensor`.

```python
reshaped_tensor = int_list_to_float_tensor.view(4, 1)
print("Original Size of the tensor: ", reshaped_tensor)
print("New size of the tensor: ", reshaped_tensor)
```

```
Original Size of the tensor: tensor([[10.],
        [11.],
        [12.],
        [13.]])
New size of the tensor: tensor([[10.],
        [11.],
        [12.],
        [13.]])
```

As you can see, the `view()` method has changed the size of the tensor to `torch.Size([4, 1])`, with 4 rows and 1 column.

While the number of elements in a tensor object must remain constant after the `view()` method is applied, you can use `-1` (such as `reshaped_tensor.view(-1, 1)`) to reshape a dynamic-sized tensor.
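To make the `-1` argument concrete, here is a small sketch: `-1` tells `view()` to infer that dimension from the total number of elements, so the same call works for tensors of different lengths.

```python
import torch

short_tensor = torch.tensor([1.0, 2.0, 3.0])
long_tensor = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])

# -1 lets view() infer the row count from the number of elements
print(short_tensor.view(-1, 1).size())  # torch.Size([3, 1])
print(long_tensor.view(-1, 1).size())   # torch.Size([5, 1])
```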

**Converting NumPy Arrays to Tensors**

PyTorch also allows you to convert NumPy arrays to tensors. You can use `torch.from_numpy` for this operation. Let's take a NumPy array and apply the operation.

```python
numpy_arr = np.array([10.0, 11.0, 12.0, 13.0])
from_numpy_to_tensor = torch.from_numpy(numpy_arr)
print("dtype of the tensor: ", from_numpy_to_tensor.dtype)
print("type of the tensor: ", from_numpy_to_tensor.type())
```

```
dtype of the tensor: torch.float64
type of the tensor: torch.DoubleTensor
```

Similarly, you can convert the tensor object back to a NumPy array. Let's use the previous example to show how it's done.

```python
tensor_to_numpy = from_numpy_to_tensor.numpy()
print("back to numpy from tensor: ", tensor_to_numpy)
print("dtype of converted numpy array: ", tensor_to_numpy.dtype)
```

```
back to numpy from tensor: [10. 11. 12. 13.]
dtype of converted numpy array: float64
```
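One detail worth knowing here: `torch.from_numpy()` shares memory with the source array rather than copying it, so modifying the array in place also changes the tensor (and vice versa for `.numpy()`). The sketch below demonstrates this with a fresh array.

```python
import numpy as np
import torch

shared_arr = np.array([10.0, 11.0, 12.0, 13.0])
shared_tensor = torch.from_numpy(shared_arr)

# Modifying the NumPy array in place is reflected in the tensor
shared_arr[0] = 99.0
print(shared_tensor)  # tensor([99., 11., 12., 13.], dtype=torch.float64)
```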

**Converting Pandas Series to Tensors**

You can also convert a pandas series to a tensor. For this, first you'll have to store the pandas series as a NumPy array using the `values` attribute.

```python
pandas_series = pd.Series([1, 0.2, 3, 13.1])
store_with_numpy = torch.from_numpy(pandas_series.values)
print("Stored tensor in numpy array: ", store_with_numpy)
print("dtype of stored tensor: ", store_with_numpy.dtype)
print("type of stored tensor: ", store_with_numpy.type())
```

```
Stored tensor in numpy array: tensor([ 1.0000, 0.2000, 3.0000, 13.1000], dtype=torch.float64)
dtype of stored tensor: torch.float64
type of stored tensor: torch.DoubleTensor
```

Furthermore, the PyTorch framework allows us to do a lot with tensors. For example, its `item()` method returns a Python number from a tensor, and the `tolist()` method returns a list.

```python
new_tensor = torch.tensor([10, 11, 12, 13])
print("the second item is", new_tensor[1].item())
tensor_to_list = new_tensor.tolist()
print('tensor:', new_tensor, "\nlist:", tensor_to_list)
```

```
the second item is 11
tensor: tensor([10, 11, 12, 13])
list: [10, 11, 12, 13]
```

**Indexing and Slicing in One-Dimensional Tensors**

Indexing and slicing operations are almost the same in PyTorch as in Python. Therefore, the first index always starts at 0, and the last index is less than the total length of the tensor. Use square brackets to access any number in a tensor.

```python
tensor_index = torch.tensor([0, 1, 2, 3])
print("Check value at index 0:", tensor_index[0])
print("Check value at index 3:", tensor_index[3])
```

```
Check value at index 0: tensor(0)
Check value at index 3: tensor(3)
```
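Just as in plain Python, negative indices count back from the end of the tensor; a brief sketch (recreating the same `tensor_index`):

```python
import torch

tensor_index = torch.tensor([0, 1, 2, 3])
# -1 refers to the last element, -2 to the second-to-last
print(tensor_index[-1])  # tensor(3)
print(tensor_index[-2])  # tensor(2)
```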

Like a list in Python, you can also perform slicing operations on the values in a tensor. Moreover, the PyTorch library allows you to change certain values in a tensor as well.

Let's take an example to check how these operations can be applied.

```python
example_tensor = torch.tensor([50, 11, 22, 33, 44])
slicing_tensor = example_tensor[1:4]
print("example tensor : ", example_tensor)
print("subset of example tensor:", slicing_tensor)
```

```
example tensor :  tensor([50, 11, 22, 33, 44])
subset of example tensor: tensor([11, 22, 33])
```

Now, let's change the value at index 3 of `example_tensor`:

```python
print("value at index 3 of example tensor:", example_tensor[3])
example_tensor[3] = 0
print("new tensor:", example_tensor)
```

```
value at index 3 of example tensor: tensor(33)
new tensor: tensor([50, 11, 22,  0, 44])
```

**Some Functions to Apply on One-Dimensional Tensors**

In this section, we'll review some statistical methods that can be applied on tensor objects.

**Min and Max Functions**

These two useful methods are employed to find the minimum and maximum values in a tensor. Here is how they work.

We'll use a `sample_tensor` as an example to apply these methods.

```python
sample_tensor = torch.tensor([5, 4, 3, 2, 1])
min_value = sample_tensor.min()
max_value = sample_tensor.max()
print("check minimum value in the tensor: ", min_value)
print("check maximum value in the tensor: ", max_value)
```

```
check minimum value in the tensor: tensor(1)
check maximum value in the tensor: tensor(5)
```

**Mean and Standard Deviation**

Mean and standard deviation are often used while doing statistical operations on tensors. You can apply these two metrics using the `.mean()` and `.std()` functions in PyTorch.

Let's use an example to see how these two metrics are calculated.

```python
mean_std_tensor = torch.tensor([-1.0, 2.0, 1, -2])
Mean = mean_std_tensor.mean()
print("mean of mean_std_tensor: ", Mean)
std_dev = mean_std_tensor.std()
print("standard deviation of mean_std_tensor: ", std_dev)
```

```
mean of mean_std_tensor: tensor(0.)
standard deviation of mean_std_tensor: tensor(1.8257)
```
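If the value 1.8257 looks larger than expected, note that `std()` uses Bessel's correction by default, dividing the sum of squared deviations by n - 1 rather than n. The sketch below reproduces the result by hand to verify this.

```python
import torch

x = torch.tensor([-1.0, 2.0, 1.0, -2.0])

# Reproduce std() manually: sum of squared deviations divided by (n - 1)
manual_std = ((x - x.mean()) ** 2).sum().div(len(x) - 1).sqrt()
print(manual_std)  # tensor(1.8257)
print(x.std())     # tensor(1.8257)
```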

**Simple Addition and Multiplication Operations on One-Dimensional Tensors**

Addition and multiplication operations can be easily applied on tensors in PyTorch. In this section, we'll create two one-dimensional tensors to demonstrate how these operations can be used.

```python
a = torch.tensor([1, 1])
b = torch.tensor([2, 2])

add = a + b
multiply = a * b

print("addition of two tensors: ", add)
print("multiplication of two tensors: ", multiply)
```

```
addition of two tensors: tensor([3, 3])
multiplication of two tensors: tensor([2, 2])
```
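Beyond elementwise `+` and `*`, `torch.dot()` computes the dot product of two one-dimensional tensors, and a plain Python scalar broadcasts across every element. A brief sketch using the same `a` and `b`:

```python
import torch

a = torch.tensor([1, 1])
b = torch.tensor([2, 2])

# Dot product of two 1-D tensors: 1*2 + 1*2 = 4
print(torch.dot(a, b))  # tensor(4)

# A scalar broadcasts across all elements of the tensor
print(a + 10)           # tensor([11, 11])
```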

For your convenience, below are all of the examples above tied together so you can try them in one shot:

```python
import torch
import numpy as np
import pandas as pd

int_to_tensor = torch.tensor([10, 11, 12, 13])
print("Tensor object type after conversion: ", int_to_tensor.dtype)
print("Tensor object type after conversion: ", int_to_tensor.type())

float_to_tensor = torch.tensor([10.0, 11.0, 12.0, 13.0])
print("Tensor object type after conversion: ", float_to_tensor.dtype)
print("Tensor object type after conversion: ", float_to_tensor.type())

int_list_to_float_tensor = torch.FloatTensor([10, 11, 12, 13])
print("Tensor type after conversion: ", int_list_to_float_tensor.type())

print("Size of the int_list_to_float_tensor: ", int_list_to_float_tensor.size())
print("Dimensions of the int_list_to_float_tensor: ", int_list_to_float_tensor.ndimension())

reshaped_tensor = int_list_to_float_tensor.view(4, 1)
print("Original Size of the tensor: ", reshaped_tensor)
print("New size of the tensor: ", reshaped_tensor)

numpy_arr = np.array([10.0, 11.0, 12.0, 13.0])
from_numpy_to_tensor = torch.from_numpy(numpy_arr)
print("dtype of the tensor: ", from_numpy_to_tensor.dtype)
print("type of the tensor: ", from_numpy_to_tensor.type())

tensor_to_numpy = from_numpy_to_tensor.numpy()
print("back to numpy from tensor: ", tensor_to_numpy)
print("dtype of converted numpy array: ", tensor_to_numpy.dtype)

pandas_series = pd.Series([1, 0.2, 3, 13.1])
store_with_numpy = torch.from_numpy(pandas_series.values)
print("Stored tensor in numpy array: ", store_with_numpy)
print("dtype of stored tensor: ", store_with_numpy.dtype)
print("type of stored tensor: ", store_with_numpy.type())

new_tensor = torch.tensor([10, 11, 12, 13])
print("the second item is", new_tensor[1].item())
tensor_to_list = new_tensor.tolist()
print('tensor:', new_tensor, "\nlist:", tensor_to_list)

tensor_index = torch.tensor([0, 1, 2, 3])
print("Check value at index 0:", tensor_index[0])
print("Check value at index 3:", tensor_index[3])

example_tensor = torch.tensor([50, 11, 22, 33, 44])
slicing_tensor = example_tensor[1:4]
print("example tensor : ", example_tensor)
print("subset of example tensor:", slicing_tensor)

print("value at index 3 of example tensor:", example_tensor[3])
example_tensor[3] = 0
print("new tensor:", example_tensor)

sample_tensor = torch.tensor([5, 4, 3, 2, 1])
min_value = sample_tensor.min()
max_value = sample_tensor.max()
print("check minimum value in the tensor: ", min_value)
print("check maximum value in the tensor: ", max_value)

mean_std_tensor = torch.tensor([-1.0, 2.0, 1, -2])
Mean = mean_std_tensor.mean()
print("mean of mean_std_tensor: ", Mean)
std_dev = mean_std_tensor.std()
print("standard deviation of mean_std_tensor: ", std_dev)

a = torch.tensor([1, 1])
b = torch.tensor([2, 2])
add = a + b
multiply = a * b
print("addition of two tensors: ", add)
print("multiplication of two tensors: ", multiply)
```

## Further Reading

Developed at the same time as TensorFlow, PyTorch used to have a simpler syntax until TensorFlow adopted Keras in its 2.x version. To learn the basics of PyTorch, you may want to read the PyTorch tutorials:

In particular, the basics of PyTorch tensors can be found in the Tensor tutorial page:

There are also quite a few books on PyTorch that are suitable for beginners. A more recently published book should be recommended, since the tools and syntax are actively evolving. One example is

**Summary**

In this tutorial, you've discovered how to use one-dimensional tensors in PyTorch.

Specifically, you learned:

- The basics of one-dimensional tensor operations in PyTorch
- About tensor types and shapes and how to perform tensor slicing and indexing operations
- How to apply some methods on tensor objects, such as mean, standard deviation, addition, and multiplication