
[모두를 위한 딥러닝] Day 3

Hypothesis: H(x) = Wx + b, where W and b are the parameters to be learned

Cost function: cost(W, b) = (1/m) Σ (H(x_i) - y_i)^2

Gradient descent algorithm
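As a rough sketch of how these three pieces fit together (not code from the lecture), the NumPy snippet below learns W and b for H(x) = Wx + b by gradient descent on the mean-squared-error cost. The toy data, learning rate, and step count are illustrative assumptions.

```python
import numpy as np

# Toy data roughly following y = 2x + 1 (illustrative values, not from the lecture)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])

W, b = 0.0, 0.0   # parameters to learn
lr = 0.01         # learning rate (assumed)

for step in range(2000):
    h = W * x + b                # hypothesis H(x) = Wx + b
    error = h - y
    cost = np.mean(error ** 2)   # cost(W, b) = (1/m) * sum((H(x_i) - y_i)^2)

    # gradients of the cost with respect to W and b
    dW = 2 * np.mean(error * x)
    db = 2 * np.mean(error)

    # gradient descent update
    W -= lr * dW
    b -= lr * db

print(W, b, cost)  # W ends up near 2, b near 1, cost near 0
```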


Multi-variable linear regression


H(x1,x2,x3) = w1x1 + w2x2 + w3x3

cost(W, b) = (1/m) Σ (H(x1_i, x2_i, x3_i) - y_i)^2
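Written out by hand, one weight per feature, the hypothesis and cost look like the sketch below (the sample values and initial weights are made up for illustration, not from the course):

```python
import numpy as np

# Three features per sample (illustrative values)
x1 = np.array([73.0, 93.0, 89.0])
x2 = np.array([80.0, 88.0, 91.0])
x3 = np.array([75.0, 93.0, 90.0])
y  = np.array([152.0, 185.0, 180.0])

w1, w2, w3 = 0.1, 0.2, 0.3          # assumed initial weights

h = w1 * x1 + w2 * x2 + w3 * x3     # H(x1, x2, x3) = w1x1 + w2x2 + w3x3
cost = np.mean((h - y) ** 2)        # (1/m) * sum((H - y)^2)
```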


This gets too cumbersome as the number of variables grows, so the solution is to use a matrix.


(x1 x2 x3) (w1)
           (w2)  = (x1w1 + x2w2 + x3w3)
           (w3)


H(X) = XW
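A minimal NumPy sketch of the same idea, assuming X is an (m × 3) data matrix and W a (3 × 1) weight column, so that the single product XW gives the hypothesis for every sample at once (the numbers are illustrative, not from the lecture):

```python
import numpy as np

# X: m samples x 3 features, W: 3 x 1 weight column (illustrative values)
X = np.array([[73.0, 80.0, 75.0],
              [93.0, 88.0, 93.0],
              [89.0, 91.0, 90.0]])
W = np.array([[1.0],
              [1.0],
              [1.0]])
y = np.array([[152.0],
              [185.0],
              [180.0]])

H = X @ W                     # H(X) = XW, shape (m, 1)
cost = np.mean((H - y) ** 2)  # same cost, computed for all samples at once
```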