## SVM - The Large Margin Classifier

- SVM is all about finding the maximum-margin classifier.
- "Classifier" is a generic name; the separating surface is actually called a hyperplane.
- Hyperplane: in a 3-dimensional space, hyperplanes are 2-dimensional planes; in a 2-dimensional space, hyperplanes are 1-dimensional lines.

- The SVM algorithm uses the nearest training examples (the support vectors) to derive the classifier with the maximum margin.
- Each data point is treated as a p-dimensional vector (a list of p numbers).
- SVM uses vector algebra and mathematical optimization to find the optimal hyperplane, i.e., the one with the maximum margin.

### The SVM Algorithm

- If a dataset is linearly separable, then we can always find a hyperplane f(x) such that
- f(x) < 0 for all negatively labeled records
- f(x) > 0 for all positively labeled records
- This hyperplane f(x) is nothing but the linear classifier
- In two dimensions: f(x) = w1·x1 + w2·x2 + b
- In general: f(x) = wᵀx + b
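As a quick sketch, the linear classifier f(x) = wᵀx + b is just a dot product plus a bias. The weights and the sample point below are made-up values for illustration, not from the slides:

```python
import numpy as np

# Hypothetical 2-D weights w = (w1, w2) and bias b
w = np.array([2.0, -1.0])
b = 0.5

x = np.array([1.0, 3.0])  # a sample point
f_x = w @ x + b           # f(x) = w^T x + b = 2*1 + (-1)*3 + 0.5
print(f_x)                # -0.5, so this point falls on the negative side
```

The sign of f(x) tells us which side of the hyperplane the point lies on.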

## Math behind SVM Algorithm

### SVM Algorithm – The Math

If you already understand the SVM technique, or if you find this slide too technical, you may want to skip it; the tool will take care of this optimization.

- f(x) = wᵀx + b
- On the two margin boundaries: wᵀx₊ + b = 1 and wᵀx₋ + b = −1
- x₊ = x₋ + λw (x₊ is x₋ shifted a distance λ along w)
- Substitute into wᵀx₊ + b = 1:
- wᵀ(x₋ + λw) + b = 1
- wᵀx₋ + λ(w·w) + b = 1
- −1 + λ(w·w) = 1
- λ = 2 / (w·w)

- The margin is m = |x₊ − x₋|
- m = |λw|
- m = (2 / (w·w)) · |w|
- m = 2 / ‖w‖
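To make the final step concrete, the margin m = 2/‖w‖ can be computed directly for any weight vector. The vector below is a made-up example chosen so the norm comes out exact:

```python
import numpy as np

# Hypothetical weight vector; the margin of the classifier is m = 2 / ||w||
w = np.array([3.0, 4.0])
margin = 2.0 / np.linalg.norm(w)  # ||w|| = 5, so m = 0.4
print(margin)                     # 0.4
```

A smaller ‖w‖ gives a larger margin, which is exactly why the optimization below minimizes ‖w‖.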

- The objective is to maximize the margin 2/‖w‖
- i.e., minimize ‖w‖

- A good decision boundary should satisfy
- wᵀx + b ≥ 1 for all points with y = +1
- wᵀx + b ≤ −1 for all points with y = −1
- i.e., y·(wᵀx + b) ≥ 1 for all points
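The combined constraint y·(wᵀx + b) ≥ 1 is easy to check numerically. The two points, labels, and weights below are assumed toy values for illustration:

```python
import numpy as np

# Toy data (assumed): labels y in {+1, -1}
X = np.array([[2.0, 2.0],
              [-2.0, -2.0]])
y = np.array([1, -1])

# A candidate decision boundary
w = np.array([1.0, 1.0])
b = 0.0

# A good boundary satisfies y * (w^T x + b) >= 1 for every point
satisfied = y * (X @ w + b) >= 1
print(satisfied)  # [ True  True ]
```

Multiplying by the label folds the two inequalities into a single condition that holds for both classes.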

- Now we have an optimization problem with an objective and constraints:
- minimize ‖w‖, or equivalently (½)·‖w‖²
- subject to the constraints y·(wᵀx + b) ≥ 1

- We can solve the above optimization problem to obtain w and b
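In practice a library solves this optimization for us, as the earlier slide noted. A minimal sketch using scikit-learn's `SVC` with a linear kernel (the toy dataset is assumed for illustration; a large `C` approximates the hard-margin problem above):

```python
import numpy as np
from sklearn.svm import SVC

# Tiny linearly separable toy set (assumed for illustration)
X = np.array([[2.0, 2.0], [3.0, 3.0],
              [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])

clf = SVC(kernel="linear", C=1e6)  # large C approximates the hard margin
clf.fit(X, y)

w = clf.coef_[0]       # the learned weight vector w
b = clf.intercept_[0]  # the learned bias b
print(w, b)
```

After fitting, `clf.support_vectors_` holds the nearest training examples that determine the margin.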

### SVM Result

- SVM doesn't output a probability; it directly gives the class a new data point belongs to.
- For a new point xₖ, calculate f(xₖ) = wᵀxₖ + b. If this value is positive, the prediction is +1; otherwise it is −1.
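The decision rule above can be sketched in a few lines. The weights, bias, and test points below are assumed values standing in for the w and b obtained from training:

```python
import numpy as np

# Assumed w and b, standing in for the trained values
w = np.array([1.0, -2.0])
b = 0.25

def predict(x_k):
    """SVM decision rule: +1 if w^T x_k + b is positive, else -1."""
    return 1 if w @ x_k + b > 0 else -1

print(predict(np.array([3.0, 1.0])))  # 3 - 2 + 0.25 = 1.25 > 0  -> +1
print(predict(np.array([0.0, 2.0])))  # 0 - 4 + 0.25 = -3.75 < 0 -> -1
```

Only the sign of f(xₖ) matters for the prediction, which is why SVM gives a class label rather than a probability.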