Orthogonal Projections

Last Updated : 16 Oct, 2021

Orthogonal Sets

A set of vectors \left \{ u_1, u_2, ..., u_p \right \} in \mathbb{R^n} is called an orthogonal set if u_i \cdot u_j = 0 whenever i \neq j.
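This condition is easy to check numerically: every distinct pair of vectors must have a zero dot product. A minimal NumPy sketch (the helper name and example vectors are assumptions for illustration):

```python
import numpy as np
from itertools import combinations

def is_orthogonal_set(vectors, tol=1e-10):
    """Return True if every distinct pair of vectors has dot product 0."""
    return all(abs(np.dot(u, v)) < tol for u, v in combinations(vectors, 2))

# An orthogonal set in R^3: each pair of distinct vectors dots to zero
vectors = [np.array([1.0, 1.0, 0.0]),
           np.array([1.0, -1.0, 0.0]),
           np.array([0.0, 0.0, 1.0])]

print(is_orthogonal_set(vectors))  # True
```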

Orthogonal Basis

An orthogonal basis for a subspace W of \mathbb{R^n}     is a basis for W that is also an orthogonal set.

Let S = \left \{ u_1, u_2, ..., u_p \right \} be an orthogonal basis for a subspace W of \mathbb{R^n}, and let y be any vector in W. We need to find the coefficients c_1, c_2, ..., c_p such that:

y = c_1 u_1 + c_2 u_2 + ... c_p u_p

Take the dot product of both sides with u_1:

y \cdot u_1 = (c_1 u_1 + c_2 u_2 + ... c_p u_p) \cdot u_1

y \cdot u_1 = c_1 (u_1 \cdot u_1) + c_2 (u_2 \cdot u_1) + ... c_p (u_p \cdot u_1)

Since the basis is orthogonal, u_2 \cdot u_1 = u_3 \cdot u_1 = ... = u_p \cdot u_1 = 0. Solving for c_1:

c_1 = \frac{y \cdot u_1}{ u_1 \cdot u_1}

Generalizing, for each j = 1, ..., p:

c_j = \frac{y \cdot u_j}{ u_j \cdot u_j}
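The coefficient formula is easy to verify numerically: with an orthogonal basis, each c_j is just a ratio of two dot products, and the terms c_j u_j sum back to y, with no linear system to solve. A minimal NumPy sketch using an assumed orthogonal basis of \mathbb{R^3}:

```python
import numpy as np

# Assumed orthogonal basis of R^3 (pairwise dot products are zero)
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])
u3 = np.array([0.0, 0.0, 1.0])
basis = [u1, u2, u3]

y = np.array([3.0, 1.0, 4.0])

# c_j = (y . u_j) / (u_j . u_j)
coeffs = [float(np.dot(y, u) / np.dot(u, u)) for u in basis]
reconstructed = sum(c * u for c, u in zip(coeffs, basis))

print(coeffs)         # [2.0, 1.0, 4.0]
print(reconstructed)  # [3. 1. 4.]
```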

Orthogonal Projections

Suppose \left \{ u_1, u_2, ..., u_p \right \} is an orthogonal basis for a subspace W of \mathbb{R^n}. For each y in W:

y =\left ( \frac{y \cdot u_1}{u_1 \cdot u_1} \right ) u_1 + ... +  \left ( \frac{y \cdot u_p}{u_p \cdot u_p } \right )u_p

Now let \left \{ u_1, u_2, u_3 \right \} be an orthogonal basis for \mathbb{R^3} and let W = span \left \{ u_1, u_2 \right \}. Given y in \mathbb{R^3}, let's write it as the sum of a vector \hat{y} in W and a vector z orthogonal to W. Expanding y in the full basis:

y =\left ( \frac{y \cdot u_1}{u_1 \cdot u_1} \right ) u_1  +  \left ( \frac{y \cdot u_2}{u_2\cdot u_2 } \right )u_2 + \left ( \frac{y \cdot u_3}{u_3 \cdot u_3} \right ) u_3

where 

\hat{y} = \left ( \frac{y \cdot u_1}{u_1 \cdot u_1} \right ) u_1  +  \left ( \frac{y \cdot u_2}{u_2\cdot u_2 } \right )u_2

and

z = \left ( \frac{y \cdot u_3}{u_3 \cdot u_3} \right ) u_3

so that

y = \hat{y} + z

Since z is a scalar multiple of u_3, it is orthogonal to both u_1 and u_2:

z \cdot u_1 = 0

z \cdot u_2 = 0
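This decomposition can be computed directly: project y onto each basis vector of W to get \hat{y}, then take z = y - \hat{y}. A sketch with an assumed orthogonal pair spanning W (here, the xy-plane of \mathbb{R^3}):

```python
import numpy as np

u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])  # W = span{u1, u2}: the xy-plane
y  = np.array([3.0, 1.0, 4.0])

def proj(y, u):
    """Orthogonal projection of y onto the line spanned by u."""
    return (np.dot(y, u) / np.dot(u, u)) * u

y_hat = proj(y, u1) + proj(y, u2)  # component of y in W
z = y - y_hat                      # component orthogonal to W

print(y_hat)                         # [3. 1. 0.]
print(z)                             # [0. 0. 4.]
print(np.dot(z, u1), np.dot(z, u2))  # 0.0 0.0
```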

Orthogonal Decomposition Theorem

Let W be the subspace of \mathbb{R^n}    . Then each y in \mathbb{R^n}     can be uniquely represented in the form:

y = \hat{y} + z

where \hat{y} is in W and z is in W^{\perp}. If \left \{ u_1, u_2, ..., u_p \right \} is an orthogonal basis of W, then:

\hat{y} =\left ( \frac{y \cdot u_1}{u_1 \cdot u_1} \right ) u_1 + ... +  \left ( \frac{y \cdot u_p}{u_p \cdot u_p } \right )u_p

and thus:

z = y - \hat{y}

Then \hat{y} is called the orthogonal projection of y onto W.

Best Approximation Theorem

Let W be a subspace of \mathbb{R^n} and y any vector in \mathbb{R^n}. Let \hat{y} be the orthogonal projection of y onto W, and let v be any vector in W different from \hat{y}. Note that \hat{y} - v also lies in W.

Since z = y - \hat{y} is orthogonal to W, it is in particular orthogonal to \hat{y} - v. Then y - v can be written as:

y - v = (y - \hat{y}) + (\hat{y} - v)

By the Pythagorean theorem:

\left \| y-v \right \|^{2} = \left \| y- \hat{y} \right \|^{2} + \left \| \hat{y}-v \right \|^{2} 

Since v \neq \hat{y}, we have \left \| \hat{y} - v \right \|^{2} > 0, so:

\left \| y-v \right \|^{2} > \left \| y- \hat{y} \right \|^{2}

and

\left \| y-v \right \| > \left \| y- \hat{y} \right \|
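That is, \hat{y} is the closest point of W to y. The Pythagorean identity and the resulting inequality can be checked numerically; a sketch with assumed example vectors, comparing \hat{y} against an arbitrary other point v of W:

```python
import numpy as np

# Assumed orthogonal pair spanning W, and a vector y to approximate
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])
y  = np.array([3.0, 1.0, 4.0])

# Orthogonal projection of y onto W = span{u1, u2}
y_hat = (np.dot(y, u1) / np.dot(u1, u1)) * u1 \
      + (np.dot(y, u2) / np.dot(u2, u2)) * u2

v = 2.0 * u1 - 1.0 * u2  # some point of W other than y_hat

# Pythagorean identity: ||y - v||^2 = ||y - y_hat||^2 + ||y_hat - v||^2
lhs = np.linalg.norm(y - v) ** 2
rhs = np.linalg.norm(y - y_hat) ** 2 + np.linalg.norm(y_hat - v) ** 2

print(np.isclose(lhs, rhs))                               # True
print(np.linalg.norm(y - y_hat) < np.linalg.norm(y - v))  # True
```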

pawangfg