Separating Hyperplanes in SVM

Last Updated : 15 Sep, 2021

Support Vector Machine (SVM) is a supervised machine learning algorithm used for both classification and regression. The idea behind it is simple: find a plane, or boundary, that separates the data of the two classes.

Support Vectors:

Support vectors are the data points that lie closest to the decision boundary. They are the points that are most difficult to classify, and they alone determine the optimal decision surface. The optimal hyperplane comes from the function class with the lowest capacity, i.e. the minimum number of independent features/parameters.

Separating Hyperplanes:

Consider a scatter plot of data points belonging to two categories. Can we find a line that separates the two categories? Such a line is called a separating hyperplane. Why is it called a hyperplane? Because in 2 dimensions it is a line, in 1 dimension it is a point, in 3 dimensions it is a plane, and in higher dimensions it is a hyperplane.

Now that we understand what a hyperplane is, we also need to find the most optimized one. The idea is that this hyperplane should be farthest from the support vectors. The distance between the separating hyperplane and the support vectors is known as the margin. Thus, the best hyperplane is the one whose margin is maximum.

Generally, the margin can be taken as 2*p, where p is the distance between the separating hyperplane and the nearest support vector. Below is the method to calculate the hyperplane for linearly separable data.

A separating hyperplane can be defined by two terms: an intercept term b and a normal vector w of the decision hyperplane, commonly referred to as the weight vector in machine learning. Here b selects which of the hyperplanes perpendicular to the normal vector is used. Every point x lying on the hyperplane must satisfy the following equation:

w^{T} \cdot x = -b 

Now, consider the training set \mathbb{D} = \left \{ \left ( \vec{x_i}, y_i \right ) \right \} where \vec{x_i} and y_i represent the n-dimensional data point and the class label respectively. For a 2-class problem the class label y_i can only be -1 or +1. The linear classifier is then:

f(\vec{x}) =  sign(\vec{w^{T}}\vec{x} + b) 
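As a small illustration, here is a minimal NumPy sketch of this classifier. The weight vector and bias used below are the values derived in the worked example later in the article (w = (1, 2), b = -5.5); any other values would work the same way.

Python3
# Minimal sketch of the linear classifier f(x) = sign(w^T x + b).
# w and b are taken from the worked example later in this article.
import numpy as np

w = np.array([1.0, 2.0])
b = -5.5

def linear_classifier(x):
    # returns +1 or -1 depending on which side of the hyperplane x lies
    return np.sign(w @ x + b)

print(linear_classifier(np.array([1, 1])))   # -1.0 (negative class)
print(linear_classifier(np.array([2, 3])))   # +1.0 (positive class)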

However, the functional margin defined above is unconstrained (it scales with \vec{w} and b), so we need to formalize the geometric distance between a data point x and the decision boundary. The shortest such distance is, of course, the perpendicular distance, i.e. along the direction of the normal vector \vec{w}. A unit vector in this direction is given by \frac{\vec{w}}{\left \| \vec{w} \right \|}. The point \vec{x^{'}} on the decision boundary closest to \vec{x} can then be defined as:

\vec{x^{'}} =  \vec{x} - yr\frac{\vec{w}}{\left \|  \vec{w} \right \|} 

Since \vec{x^{'}} lies on the decision boundary, substituting it into the hyperplane equation gives:

\vec{w^{T}}\left (  \vec{x} - yr\frac{\vec{w}}{\left \|  \vec{w} \right \|}\right ) + b = 0 

Expanding and using \vec{w^{T}}\vec{w} = \left \| \vec{w} \right \|^{2} gives \vec{w^{T}}\vec{x} - yr\left \| \vec{w} \right \| + b = 0. Solving for r (and using y^{2} = 1) gives the following equation:

r =  y \frac{\vec{w^{T}}\vec{x} + b}{\left \| \vec{w} \right \|} 

where r is the geometric distance of the data point from the hyperplane. The hyperplane itself is unchanged when \vec{w} and b are rescaled, so we can fix the scale by requiring the functional margin of the points closest to the boundary (the support vectors) to be 1. The constraint for all items in the data set can then be written as:

y_i(\vec{w^{T}}\vec{x_i} +b) \geq 1 

The corresponding geometric distance of each data point from the hyperplane is:

r_i =  y_i \frac{\vec{w^{T}}\vec{x_i} + b}{\left \| \vec{w} \right \|} 

Under this scaling, the support vectors satisfy y_i(\vec{w^{T}}\vec{x_i} + b) = 1 and lie at distance \frac{1}{\left \| \vec{w} \right \|} from the boundary, so the geometric margin (the full width between the two classes) is:

\rho = \frac{2}{||\vec{w}||} 

We need to maximize the geometric margin such that: 

\rho = \frac{2}{\left \| \vec{w} \right \|} \quad \forall \left ( \vec{x_i}, y_i \right ) \in \mathbb{D}: \; y_i(\vec{w^{T}}\vec{x_i} + b) \geq 1 

Maximizing \rho is the same as minimizing \frac{1}{\rho} = \frac{\left \| \vec{w} \right \|}{2}, that is, we need to find \vec{w} and b such that:

\frac{1}{2}\vec{w^{T}}\vec{w}            is minimized, subject to \forall \left ( \vec{x_i}, y_i \right ) \in \mathbb{D}: \; y_i(\vec{w^{T}}\vec{x_i} + b) \geq 1 

Here, we are optimizing a quadratic objective with linear constraints, i.e. a quadratic programming problem. This leads us to solve it via the dual problem.
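Before moving to the dual, here is a minimal numerical sketch of this primal formulation, solved with a general-purpose constrained optimizer (SciPy's SLSQP). SciPy and the solver choice are not part of the original article; the three data points are the ones used in the worked example below, and a dedicated QP or SVM solver would normally be used instead.

Python3
# Sketch: minimize (1/2) w^T w subject to y_i (w^T x_i + b) >= 1,
# using scipy.optimize.minimize on the three points from the example below.
import numpy as np
from scipy.optimize import minimize

X = np.array([[1.0, 1.0], [2.0, 0.0], [2.0, 3.0]])
y = np.array([-1.0, -1.0, 1.0])      # class labels in {-1, +1}

def objective(params):
    w = params[:2]                   # params = [w1, w2, b]
    return 0.5 * w @ w

constraints = [
    {"type": "ineq",
     "fun": lambda p, xi=xi, yi=yi: yi * (p[:2] @ xi + p[2]) - 1.0}
    for xi, yi in zip(X, y)
]

res = minimize(objective, x0=np.zeros(3), constraints=constraints)
w, b = res.x[:2], res.x[2]
print("w =", w, "b =", b)                    # roughly w = (0.4, 0.8), b = -2.2
print("margin =", 2.0 / np.linalg.norm(w))   # roughly 2.236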

Duality Problem:

In optimization, the duality principle states that an optimization problem can be viewed from two perspectives: the primal problem and the dual problem. The solution of the dual problem provides a lower bound on the solution of the primal (minimization) problem.

An optimization problem can typically be written as:

\underset{x}{\text{minimize}} \;\; f(x) \\ \text{subject to} \;\; g_i(x) = 0, \quad i = 1, \ldots, p \\ \qquad \qquad \quad \; h_i(x) \leq 0, \quad i = 1, \ldots, m 

where f is the objective function and g and h are the constraint functions. The above problem can be solved by a technique such as Lagrange multipliers.

Lagrange multipliers

The method of Lagrange multipliers is a way of finding the local minima and maxima of a function f subject to an equality constraint g(x, y) = 0. At a constrained optimum, the gradient of f is parallel to the gradient of g, which gives the Lagrangian condition:

\nabla f (x,y)  =  \lambda \nabla g(x,y) \\ or \\ \nabla f (x,y) - \lambda \nabla g(x,y) = 0 \\ 

Suppose we define the function:

L(x, y, \lambda) = f(x, y) - \lambda \, g(x, y) 

The above function is known as the Lagrangian. We now need to find the points where \nabla L(x, y, \lambda) = 0, i.e. where the gradient of f is parallel to the gradient of g and the constraint g(x, y) = 0 is satisfied.
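As a small illustrative example (not part of the original derivation), consider minimizing f(x, y) = x^{2} + y^{2} subject to the constraint g(x, y) = x + y - 1 = 0:

L(x, y, \lambda) = x^{2} + y^{2} - \lambda (x + y - 1) \\ \frac{\partial L}{\partial x} = 2x - \lambda = 0, \quad \frac{\partial L}{\partial y} = 2y - \lambda = 0, \quad \frac{\partial L}{\partial \lambda} = -(x + y - 1) = 0 

Setting all three partial derivatives to zero gives x = y = \frac{1}{2} and \lambda = 1, so the constrained minimum lies at (1/2, 1/2), where the gradients of f and g are indeed parallel.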

Example

Consider three points, with (1, 1) and (2, 0) belonging to one class and (2, 3) belonging to the other. Geometrically, we can observe that the maximum-margin weight vector will be parallel to the shortest line connecting points of the two classes, i.e. the line from (1, 1) to (2, 3), giving a weight vector in the direction (1, 2). The optimal decision surface (separating hyperplane) is orthogonal to that line and crosses it at its midpoint, (1.5, 2). Now, we can calculate the bias using this observation: 

y = x_1 + 2x_2 + b \\ \text{At } (x_1, x_2) = (1.5, 2), \; y = 0: \\ 0 = 1.5 + 4 + b \\ b = -5.5 

Now, the decision surface equation becomes: 

y = x_1 + 2x_2 -5.5 

Now, since the constraint is y_i(\vec{w^{T}}\vec{x_i} + b) \geq 1, minimizing \left \| \vec{w} \right \| means the constraint holds with equality at the support vectors. Taking \vec{w} = (a, 2a) for some a, the support vectors (1, 1) and (2, 3) give: 

a + 2a + b  =  -1  \, for \, point \,(1,1)  \\ 2a + 6a + b =  1 \, for \, point \,(2,3)  \\ 

Solving the above equations gives: 

a =\frac{2}{5}; \, b= \frac{-11}{5} 

This means the margin becomes: 

\rho = \frac{2}{||\vec{w}||} = \frac{2}{\sqrt{\frac{4}{25}+ \frac{16}{25}}} = \frac{2}{\frac{2\sqrt{5}}{5}} = \sqrt{5} \approx 2.236 
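As a quick check (added here for clarity), the remaining point (2, 0) satisfies the constraint with a strict inequality:

y \left( \vec{w^{T}}\vec{x} + b \right) = -1 \cdot \left( \frac{2}{5} \cdot 2 + \frac{4}{5} \cdot 0 - \frac{11}{5} \right) = \frac{7}{5} \geq 1 

so (1, 1) and (2, 3) are indeed the only support vectors.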

Implementation

In this implementation, we verify the above example by modelling it with the sklearn library: 

Python3
# Import necessary libraries/functions
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC

# define the dataset
X = np.array([[1, 1],
              [2, 0],
              [2, 3]])
Y = np.array([0, 0, 1])

# define support vector classifier with linear kernel
clf = SVC(gamma='auto', kernel='linear')

# fit the above data in SVC
clf.fit(X, Y)

# plot the decision boundary, data points, support vectors etc.
w = clf.coef_[0]
a = -w[0] / w[1]

xx = np.linspace(0, 12)
yy = a * xx - clf.intercept_[0] / w[1]            # w.x + b = 0
y_neg = a * xx - (clf.intercept_[0] + 1) / w[1]   # w.x + b = -1
y_pos = a * xx - (clf.intercept_[0] - 1) / w[1]   # w.x + b = +1

plt.figure(1, figsize=(15, 10))
plt.plot(xx, yy, 'k',
         label=f"Decision Boundary (y = {w[0]}x1 + {w[1]}x2 + {clf.intercept_[0]})")
plt.plot(xx, y_neg, 'b-.',
         label=f"Neg Decision Boundary (-1 = {w[0]}x1 + {w[1]}x2 + {clf.intercept_[0]})")
plt.plot(xx, y_pos, 'r-.',
         label=f"Pos Decision Boundary (1 = {w[0]}x1 + {w[1]}x2 + {clf.intercept_[0]})")

for i in range(3):
    if Y[i] == 0:
        plt.scatter(X[i][0], X[i][1], color='red', marker='o', label='negative')
    else:
        plt.scatter(X[i][0], X[i][1], color='green', marker='x', label='positive')

plt.legend()
plt.show()

# calculate margin
print(f'Margin : {2.0 / np.sqrt(np.sum(clf.coef_ ** 2))}')

 
 

Margin : 2.236
Final SVM decision boundary
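As an optional follow-up (assuming the clf object fitted above is still in scope, and noting that sklearn's default regularization C=1.0 is large enough here to reproduce the hard-margin solution), the fitted parameters can be compared with the hand-derived values w = (2/5, 4/5) and b = -11/5:

Python3
# Optional check: the fitted model should match the hand calculation above.
print(clf.support_vectors_)          # [[1. 1.]
                                     #  [2. 3.]]
print(clf.coef_, clf.intercept_)     # approx [[0.4 0.8]] [-2.2]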

References:

  • Stanford NLP


 

