Tensors in Pytorch

Last Updated : 04 Jul, 2021

A PyTorch Tensor is essentially the same as a NumPy array: it knows nothing about deep learning, computational graphs, or gradients, and is just a generic n-dimensional array used for arbitrary numeric computation. The biggest difference between a NumPy array and a PyTorch Tensor is that a PyTorch Tensor can run on either the CPU or a GPU. To run operations on the GPU, allocate the Tensor on a CUDA device:

import torch

# Run on the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 32, 100, 10, 2

# create random input data
x = torch.randn(N, D_in, device=device, dtype=torch.float)    # x is a tensor

In the above example, x can be thought of as a random feature tensor that serves as input to a model. In this article we will see how to create tensors, and look at their attributes and the operations that can be performed on them.
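For instance, a tensor can be created directly from a NumPy array and moved between devices with .to(). The snippet below is a minimal sketch, assuming NumPy is installed and that a GPU may or may not be available:

Python3

import numpy as np
import torch

# Build a tensor from a NumPy array (the two share the same memory).
a = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(a)
n = t.numpy()          # view the tensor back as a NumPy array

# Move the tensor to the GPU if one is available, otherwise keep it on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
t_dev = t.to(device)
print(t_dev.device)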

How to create a Tensor?

You can create a tensor from a Python list with a few simple lines of code, as shown below.

Python3

import torch
V_data = [1, 2, 3, 4, 5]
V = torch.tensor(V_data)
print(V)
                      
                       

Output: 

tensor([1, 2, 3, 4, 5])
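torch.tensor() also accepts nested lists and an optional dtype; as a small sketch (not part of the original example), a two-dimensional tensor can be created like this:

Python3

import torch

# A nested list produces a 2-D tensor; dtype can be set explicitly.
M = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32)
print(M)
print(M.shape)   # torch.Size([2, 3])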

You can also create a tensor of random data with a given shape, like this:

Python3

import torch
  
x = torch.randn((3, 4, 5))
print(x)
                      
                       

Output:

tensor([[[ 0.8332, -0.2102,  0.0213,  0.4375, -0.9506],
         [ 0.0877, -1.5845, -0.1520,  0.3944, -0.7282],
         [-0.6923,  0.0332, -0.4628, -0.9127, -1.4349],
         [-0.3641, -0.5880, -0.5963, -1.4126,  0.5308]],

        [[ 0.4492, -1.2030,  2.5985,  0.8966,  0.4876],
         [ 0.5083,  1.4515,  0.6496,  0.3407,  0.0093],
         [ 0.1237,  0.3783, -0.7969,  1.4019,  0.0633],
         [ 0.4399,  0.3827,  1.2231, -0.0674, -1.0158]],

        [[-0.2490, -0.5475,  0.6201, -2.2092,  0.8405],
         [ 0.1684, -1.0118,  0.7414, -3.3518, -0.3209],
         [ 0.6543,  0.1956, -0.2954,  0.1055,  1.6523],
         [-0.9872, -2.0118, -1.6609,  1.4072,  0.0632]]])

You can also create tensors using the following functions:

  • torch.zeros(): Creates a new tensor with all elements initialized to zero.

Python3

import torch
  
z = torch.zeros([3, 3], dtype=torch.int32)
print(z)
                      
                       

Output:  

tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]], dtype=torch.int32)
  • torch.ones(): Creates a new tensor with all elements initialized to one.

Python3

import torch
  
z = torch.ones([3, 3])
print(z)
                      
                       

Output:

tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]])
  • torch.full() and torch.full_like(): These functions return a tensor of the given size filled with the provided fill_value. The complete prototype for torch.full() is:

Syntax: torch.full(size, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)

And torch.full_like() is:

Syntax: torch.full_like(input, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format)

Python3

import torch
  
# example of torch.full()
newTensor = torch.full((4, 3), 3.14, dtype=torch.float32)
print(newTensor)
                      
                       

Output:

tensor([[3.1400, 3.1400, 3.1400],
        [3.1400, 3.1400, 3.1400],
        [3.1400, 3.1400, 3.1400],
        [3.1400, 3.1400, 3.1400]])

Python3

import torch

newTensor = torch.full((4, 3), 3.14, dtype=torch.float32)

# Example for torch.full_like()
x = torch.full_like(newTensor, 3.24)
print(x)
                      
                       

Output: 

tensor([[3.2400, 3.2400, 3.2400],
        [3.2400, 3.2400, 3.2400],
        [3.2400, 3.2400, 3.2400],
        [3.2400, 3.2400, 3.2400]])

Here a new tensor is returned with the same size and dtype as newTensor, which was created with torch.full() in the example shown above.
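To make this behaviour explicit, the small sketch below (using a hypothetical int64 base tensor, not from the original example) shows that torch.full_like() also inherits the dtype of its input unless you override it:

Python3

import torch

# Hypothetical base tensor with an integer dtype.
base = torch.zeros(2, 3, dtype=torch.int64)

# full_like copies the size and, unless overridden, the dtype of its input.
filled = torch.full_like(base, 7)
print(filled)
print(filled.dtype)   # torch.int64, inherited from base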

Tensor attributes:

Each tensor (torch.Tensor) has torch.dtype, torch.device, and torch.layout attributes.

  • torch.dtype: A torch.dtype is an object that represents the data type of a torch.Tensor. PyTorch has twelve different data types.
  • torch.device: A torch.device is an object representing the device on which a torch.Tensor is or will be allocated. The torch.device contains a device type ('cpu' or 'cuda') and an optional device ordinal for the device type.

Example:

Python3

torch.device('cuda:0')
                      
                       

Output:

device(type='cuda', index=0)

If the device ordinal is not present, this object will always represent the current device for the device type, even after torch.cuda.set_device() is called.
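These attributes can be read directly from any tensor; a minimal sketch on a default CPU tensor:

Python3

import torch

x = torch.randn(2, 3)
print(x.dtype)    # torch.float32, the default floating-point dtype
print(x.device)   # cpu, unless the tensor was allocated on another device
print(x.layout)   # torch.strided, the default memory layout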

  • torch.layout: A torch.layout is an object that represents the memory layout of a torch.Tensor. PyTorch currently supports two memory layouts.

1. torch.strided: Represents dense tensors and is the most commonly used memory layout. Each strided tensor has an associated torch.Storage, which holds its data; the tensor provides a multi-dimensional, strided view of that storage. The stride of an array (also referred to as increment, pitch, or step size) is the number of locations in memory between the beginnings of successive array elements, measured in bytes or in units of the element size. The stride cannot be smaller than the element size but can be larger, indicating extra space between elements. In PyTorch, strides are a list of integers: the k-th stride is the jump in memory needed to go from one element to the next along the k-th dimension of the tensor. This concept makes it possible to perform many tensor operations efficiently.

Let’s run some example snippets:

Python3

import torch

x = torch.Tensor([[1, 2, 3, 4], [5, 7, 8, 9]])
print(x.stride())
                      
                       

Output:

(4, 1)
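As a quick illustration of why strides matter (a small sketch, not from the original article): transposing a 2-D tensor copies no data at all, it only swaps the strides of the same underlying storage.

Python3

import torch

x = torch.randn(2, 4)
print(x.stride())    # (4, 1): jump 4 elements to reach the next row

# Transposing swaps the strides instead of copying the data.
y = x.t()
print(y.stride())    # (1, 4)
print(x.data_ptr() == y.data_ptr())   # True: both views share the same storage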

2. torch.sparse_coo: Used to store tensors in the sparse coordinate list (COO) format, which can be created with torch.sparse_coo_tensor(). In COO format, the specified elements are stored as tuples of element indices and the corresponding values.

Python3

import torch

i = [[0, 1, 1],
     [2, 0, 2]]

v = [3, 4, 5]
s = torch.sparse_coo_tensor(i, v, (2, 3))
print(s)
                      
                       

Output: 

tensor(indices=tensor([[0, 1, 1],
                       [2, 0, 2]]),
       values=tensor([3, 4, 5]),
       size=(2, 3), nnz=3, layout=torch.sparse_coo)
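A sparse COO tensor can be converted back to the ordinary strided layout with .to_dense(); a minimal sketch continuing the example above:

Python3

import torch

i = [[0, 1, 1],
     [2, 0, 2]]
v = [3, 4, 5]
s = torch.sparse_coo_tensor(i, v, (2, 3))

# Materialise the sparse tensor as an ordinary dense tensor.
print(s.to_dense())
# tensor([[0, 0, 3],
#         [4, 0, 5]])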

Tensor operations:

You can add two tensors element-wise, just like matrix addition.

Python3

import torch

x = torch.tensor([1., 2., 3.])
y = torch.tensor([4., 5., 6.])
z = x + y
print(z)
                      
                       

Output: 

tensor([5., 7., 9.])
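Other element-wise operations follow the same pattern. As a small sketch (not from the original article), the + operator is equivalent to torch.add(), and * multiplies element-wise:

Python3

import torch

x = torch.tensor([1., 2., 3.])
y = torch.tensor([4., 5., 6.])

# + is shorthand for torch.add(); * multiplies element-wise.
print(torch.add(x, y))   # tensor([5., 7., 9.])
print(x * y)             # tensor([ 4., 10., 18.])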
  • torch.cat(): Concatenates a list of tensors along an existing dimension (the first dimension, by default).

Python3

import torch

x_1 = torch.randn(2, 5)
y_1 = torch.randn(3, 5)
z_1 = torch.cat([x_1, y_1])
print(z_1)
                      
                       

Output: 

tensor([[ 0.5761,  0.6781,  0.1621,  0.4986,  0.3410],
        [-0.8428,  0.2510, -0.2668, -1.1475,  0.5675],
        [-0.2797, -0.0699,  2.8936,  1.8260,  2.1227],
        [ 1.3765, -0.0939, -0.3774, -0.3834,  0.0682],
        [ 2.3666,  0.0904,  0.7956,  1.2281,  0.5561]])

To concatenate along columns, you can do the following.

Python3

import torch

x_2 = torch.randn(2, 3)
y_2 = torch.randn(2, 5)
  
# second argument specifies which axis to concat along
z_2 = torch.cat([x_2, y_2], 1)
print(z_2)
                      
                       

Output:

tensor([[ 0.5818,  0.7047,  0.1581,  1.8658,  0.5953, -0.9453, -0.6395, -0.7106],
        [ 1.2197,  0.8110, -1.6072,  0.1463,  0.4895, -0.8226, -0.1889,  0.2668]])
  • view(): You can reshape tensors using the .view() method as shown below.

Python3

import torch

x = torch.randn(2, 3, 4)
print(x)
  
# reshape to 2 rows, 12 columns
print(x.view(2, 12))
                      
                       

Output:

tensor([[[ 0.4321,  0.2414, -0.4776,  1.6408],
         [ 0.9085,  0.9195,  0.1321,  1.1891],
         [-0.9267, -0.1384,  0.0115, -0.4731]],

        [[ 0.7256,  0.6990, -1.7374,  0.6053],
         [ 0.0224, -1.2108,  0.1974,  0.0655],
         [-0.6182, -0.0797,  0.2603, -1.3280]]])

tensor([[ 0.4321,  0.2414, -0.4776,  1.6408,  0.9085,  0.9195,  0.1321,  1.1891,
         -0.9267, -0.1384,  0.0115, -0.4731],
        [ 0.7256,  0.6990, -1.7374,  0.6053,  0.0224, -1.2108,  0.1974,  0.0655,
         -0.6182, -0.0797,  0.2603, -1.3280]])
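One of the sizes passed to .view() can be -1, in which case PyTorch infers it from the total number of elements; a minimal sketch (not from the original example):

Python3

import torch

x = torch.randn(2, 3, 4)

# -1 lets PyTorch infer the missing dimension from the element count.
print(x.view(2, -1).shape)   # torch.Size([2, 12])
print(x.view(-1).shape)      # torch.Size([24]), fully flattened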
  • torch.argmax(): Returns the index of the maximum value of all elements in the input tensor.

Python3

import torch

x = torch.randn(3, 3)
print((x, torch.argmax(x)))
                      
                       

Output: 

(tensor([[ 1.9610, -0.7683, -2.6080],
         [-0.3659, -0.1731,  0.1061],
         [ 0.8582,  0.6420, -0.2380]]), tensor(0))
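torch.argmax() also accepts a dim argument to compute the index of the maximum along a particular dimension instead of over the flattened tensor; a small sketch (not from the original article):

Python3

import torch

x = torch.randn(3, 3)

# Without dim, argmax indexes into the flattened tensor;
# with dim=1 it returns the position of the maximum in each row.
print(torch.argmax(x))
print(torch.argmax(x, dim=1))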
  • torch.argmin(): Similar to argmax(), it returns the index of the minimum value of all elements in the input tensor.

Python3

import torch

x = torch.randn(3, 3)
print((x, torch.argmin(x)))
                      
                       

Output: 

(tensor([[ 0.9838, -1.2761,  0.2257],
         [-0.4754,  1.2677,  1.1973],
         [-1.2298, -0.5710, -1.3635]]), tensor(8))

