| Decision Boundary without Probability
Unlike the earlier discussion, let's set the probability view aside entirely and think about the decision boundary directly.
Decision boundary line
$w \cdot x + b = 0$
Positive case
$w \cdot x + b > 0$
Negative case
$w \cdot x + b < 0$
Confidence level
$(w \cdot x_i + b)\, y_i$ ... always positive when the point is classified correctly ... the goal is to make this as large as possible
(Labeling positives as "+1" and negatives as "-1")
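A minimal numpy sketch of the confidence quantity, using made-up weights and points: multiplying by the label $y_i \in \{+1, -1\}$ makes the value positive exactly when the point lies on its correct side.

```python
import numpy as np

# Hypothetical 2-D boundary: w and b are made-up values.
w = np.array([1.0, -2.0])
b = 0.5

# Two points with labels in {+1, -1}
X = np.array([[3.0, 0.0],   # w.x + b =  3.5 -> positive side
              [0.0, 2.0]])  # w.x + b = -3.5 -> negative side
y = np.array([+1, -1])

# Confidence: (w.x_i + b) * y_i is positive iff the point is
# classified correctly, regardless of which side it lies on.
confidence = (X @ w + b) * y
print(confidence)  # [3.5, 3.5] -> both correctly classified
```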
Margin
Perpendicular distance from the closest point to the decision boundary
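As a small numeric check (with made-up values for $w$, $b$, and the point), the perpendicular distance from a point to the line $w \cdot x + b = 0$ is $|w \cdot x + b| / \left\| w \right\|$; the margin is this distance for the closest training point.

```python
import numpy as np

# Made-up line and point chosen so the arithmetic is easy to follow.
w, b = np.array([3.0, 4.0]), -5.0   # ||w|| = 5
x = np.array([1.0, 2.0])            # w.x + b = 3 + 8 - 5 = 6

# Perpendicular distance from x to the line w.x + b = 0
dist = abs(w @ x + b) / np.linalg.norm(w)
print(dist)  # 6 / 5 = 1.2
```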
| Margin Distance
Optimization Problem
$$ \max_{w,b}\ 2r = \frac{2a}{\left \| w \right \|} \ \ \ \text{s.t.} \ \ (w \cdot x_j + b)\, y_j \geq a, \ \forall j $$
Optimization Problem (normalized by setting $a = 1$)
$$ \min_{w,b}\ \left \| w \right \| \ \ \ \text{s.t.} \ \ (w \cdot x_j + b)\, y_j \geq 1, \ \forall j $$
Quadratic optimization ... $ \left \| w \right \| = \sqrt{w_1^2+w_2^2} $ (in two dimensions)
→ Solve with standard Quadratic Programming methods!
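As a sketch (not the solver used in the lecture), the normalized problem $\min \frac{1}{2}\|w\|^2$ s.t. $(w \cdot x_j + b)y_j \geq 1$ can be handed to a generic constrained optimizer such as `scipy.optimize.minimize`; the toy dataset below is made up.

```python
import numpy as np
from scipy.optimize import minimize

# Tiny linearly separable toy set (made-up points), labels in {+1, -1}.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([+1, +1, -1, -1])

# Pack the variables as v = [w1, w2, b]; minimize (1/2)||w||^2.
def objective(v):
    w = v[:2]
    return 0.5 * (w @ w)

# One inequality constraint per point: y_j (w.x_j + b) - 1 >= 0.
constraints = [
    {"type": "ineq",
     "fun": lambda v, xj=xj, yj=yj: yj * (v[:2] @ xj + v[2]) - 1.0}
    for xj, yj in zip(X, y)
]

res = minimize(objective, x0=np.zeros(3), constraints=constraints)
w, b = res.x[:2], res.x[2]
print(w, b)
# Every margin constraint should hold at the optimum.
print(np.all(y * (X @ w + b) >= 1 - 1e-4))
```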
| SVM with Hard Margin
Hard margin: no error cases are allowed
Soft margin: some error cases are allowed
1. Keep the linear boundary but allow some errors (soft margin)
2. Apply the kernel trick and allow no errors at all (hard margin)
Option 1
Admit there will be an error
Represent the error in our problem formulation (penalization)
Try to reduce the error as well
Option 2
Make decision boundary more complex
Go to non-linear line
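To illustrate the idea behind Option 2, a made-up mapping $\phi(x) = (x_1, x_2, x_1 x_2)$ turns XOR-style data, which no line in the original 2-D space can separate, into linearly separable data in 3-D.

```python
import numpy as np

# XOR-style toy data: not separable by any line in the original 2-D space.
X = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
y = np.array([+1, +1, -1, -1])

# Map to a higher-dimensional space where a linear boundary exists.
# phi(x) = (x1, x2, x1*x2) is a made-up mapping for illustration.
def phi(x):
    return np.array([x[0], x[1], x[0] * x[1]])

Phi = np.array([phi(x) for x in X])

# In the mapped space, w = (0, 0, 1), b = 0 separates the classes:
w, b = np.array([0.0, 0.0, 1.0]), 0.0
print((Phi @ w + b) * y)  # all positive -> linearly separable after mapping
```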
| Error handling in SVM
0 - 1 Loss
$ \min_{w,b}\ \left \| w \right \| + C \cdot (\text{error count}) $
$ \text{s.t.} \ \ (w \cdot x_j + b)\, y_j \geq 1, \ \forall j $
Hinge Loss
$ \min_{w,b}\ \left \| w \right \| + C \cdot \sum_{j} \xi_j $
$ \text{s.t.} \ \ (w \cdot x_j + b)\, y_j \geq 1 - \xi_j, \ \forall j $
$ \xi_j \geq 0, \ \forall j $
This works well, but it introduces an extra hyperparameter $C$ that has to be chosen.
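For a fixed boundary, the smallest slack satisfying the constraints has a closed form: $\xi_j = \max(0,\ 1 - y_j(w \cdot x_j + b))$, which is exactly the hinge loss. A small sketch with made-up numbers:

```python
import numpy as np

# Made-up boundary and points; labels in {+1, -1}.
w, b = np.array([1.0, 0.0]), 0.0
X = np.array([[2.0, 0.0], [0.5, 1.0], [-0.5, 0.0]])
y = np.array([+1, +1, -1])

# xi_j = max(0, 1 - y_j (w.x_j + b)) is the smallest slack that
# satisfies (w.x_j + b) y_j >= 1 - xi_j with xi_j >= 0.
margins = y * (X @ w + b)            # [2.0, 0.5, 0.5]
xi = np.maximum(0.0, 1.0 - margins)  # [0.0, 0.5, 0.5]

C = 1.0                              # trade-off hyperparameter
objective = np.linalg.norm(w) + C * xi.sum()
print(xi, objective)                 # slacks and penalized objective
```

Points inside the margin (or misclassified) get positive slack; $C$ trades margin width against the total slack.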
| Others
The lectures below are only skimmed here.
Lecture 5 Soft Margin with SVM
Lecture 6 Rethinking of SVM
Lecture 7 Primal, Dual with KKT Condition
Lecture 8 Kernel
Lecture 9 SVM with Kernel
... Constrained optimization
... Dual problem with KKT conditions
... Mapping function and Kernel function
Reference
Lectures by Prof. Il-Chul Moon (문일철)
https://www.youtube.com/watch?v=oNTXMgqCv6E&list=PLbhbGI_ppZISMV4tAWHlytBqNq1-lb8bz&index=23
https://www.youtube.com/watch?v=oNTXMgqCv6E&list=PLbhbGI_ppZISMV4tAWHlytBqNq1-lb8bz&index=24
https://www.youtube.com/watch?v=oNTXMgqCv6E&list=PLbhbGI_ppZISMV4tAWHlytBqNq1-lb8bz&index=25
https://www.youtube.com/watch?v=oNTXMgqCv6E&list=PLbhbGI_ppZISMV4tAWHlytBqNq1-lb8bz&index=26
https://www.youtube.com/watch?v=oNTXMgqCv6E&list=PLbhbGI_ppZISMV4tAWHlytBqNq1-lb8bz&index=27
https://www.youtube.com/watch?v=oNTXMgqCv6E&list=PLbhbGI_ppZISMV4tAWHlytBqNq1-lb8bz&index=28
https://www.youtube.com/watch?v=oNTXMgqCv6E&list=PLbhbGI_ppZISMV4tAWHlytBqNq1-lb8bz&index=29
https://www.youtube.com/watch?v=oNTXMgqCv6E&list=PLbhbGI_ppZISMV4tAWHlytBqNq1-lb8bz&index=30
https://www.youtube.com/watch?v=oNTXMgqCv6E&list=PLbhbGI_ppZISMV4tAWHlytBqNq1-lb8bz&index=31