
【Stanford】II. Supervised Learning: Linear Regression

May 25, 2012

supervised learning

[Figure: a training set is fed to a learning algorithm, which outputs a hypothesis h; for a new input x, h outputs a predicted y.]

learn a function h : X → Y

h is called a hypothesis.

一、Linear Regression

[Table: training-set example of house prices, with living area and number of bedrooms as inputs.]

In the example, x is a two-dimensional vector: x1 is the living area and x2 is the number of bedrooms.

functions/hypotheses h

$$h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2$$

Setting $x_0 = 1$, this can be rewritten as

$$h(x) = \sum_{i=0}^{n} \theta_i x_i = \theta^T x$$
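With $x_0 = 1$, the hypothesis is just a dot product of θ and x. A minimal sketch in NumPy (the θ values and the input are made-up toy numbers, not from the notes):

```python
import numpy as np

# Hypothetical parameters theta = [theta0, theta1, theta2] (intercept first).
theta = np.array([50.0, 0.1, 20.0])

# One input x = (x1, x2) = (living area, bedrooms), with x0 = 1 prepended.
x = np.array([1.0, 2104.0, 3.0])

# h(x) = sum_i theta_i * x_i = theta^T x
h = theta @ x
print(h)  # about 320.4: 50 + 0.1*2104 + 20*3
```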

Now, given a training set, how do we pick, or learn, the parameters θ?

One reasonable method seems to be to make h(x) close to y, at least for the training examples we have.

We define the cost function:

$$J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$$

We want to make this value as small as possible.
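The cost function $J(\theta)$ can be computed directly as half the sum of squared residuals. A small sketch (the design matrix `X`, targets `y`, and θ are toy values, chosen so the fit is exact):

```python
import numpy as np

def cost(theta, X, y):
    """J(theta) = (1/2) * sum_i (h_theta(x_i) - y_i)^2."""
    residuals = X @ theta - y
    return 0.5 * residuals @ residuals

# Toy data: two examples, with x0 = 1 already prepended as the first column.
X = np.array([[1.0, 2.0],
              [1.0, 3.0]])
y = np.array([5.0, 7.0])
theta = np.array([1.0, 2.0])  # h(x) = 1 + 2*x1 -> predictions [5, 7]
print(cost(theta, X, y))      # 0.0: this theta fits the toy data exactly
```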

1、LMS algorithm (least mean squares)

We want to choose θ so as to minimize J(θ).

gradient descent algorithm

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$$

(This update is performed simultaneously for all values of $j = 0, \ldots, n$.)

α is called the learning rate.

For a single training example $(x, y)$, the partial derivative works out to

$$\frac{\partial}{\partial \theta_j} J(\theta) = \left(h_\theta(x) - y\right) x_j$$

$$\theta_j := \theta_j + \alpha \left(y^{(i)} - h_\theta(x^{(i)})\right) x_j^{(i)}$$

This is the LMS update rule.

$$\theta_j := \theta_j + \alpha \sum_{i=1}^{m} \left(y^{(i)} - h_\theta(x^{(i)})\right) x_j^{(i)} \quad \text{(for every } j\text{)}$$

This is called batch gradient descent.

Algorithm

Repeat until convergence: for every $j$, update $\theta_j := \theta_j + \alpha \sum_{i=1}^{m} \left(y^{(i)} - h_\theta(x^{(i)})\right) x_j^{(i)}$.

In each iteration of the loop, updating each parameter θ_j requires summing over all m training examples (i = 1, 2, …, m, where m is the number of training examples).
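The batch update described above can be sketched in a few lines of NumPy (the function name, toy data, step size, and iteration count are illustrative choices, not from the notes):

```python
import numpy as np

def batch_gradient_descent(X, y, alpha, iters):
    """Batch gradient descent for linear regression.

    Each step applies theta_j := theta_j + alpha * sum_i (y_i - h(x_i)) * x_ij,
    i.e. every update sums over all m training examples.
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        errors = y - X @ theta          # (y^(i) - h_theta(x^(i))) for all i
        theta += alpha * (X.T @ errors)  # simultaneous update of every theta_j
    return theta

# Toy data generated from y = 1 + 2*x1, with x0 = 1 in the first column.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
theta = batch_gradient_descent(X, y, alpha=0.05, iters=2000)
print(theta)  # converges to approximately [1, 2]
```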

If m is very large, this algorithm becomes slow; in that case we switch to stochastic gradient descent.
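Stochastic gradient descent, mentioned above as the remedy, applies the LMS rule one example at a time instead of summing over all m examples per update. A hedged sketch (function name, toy data, and hyperparameters are made up for illustration):

```python
import numpy as np

def stochastic_gradient_descent(X, y, alpha, epochs, seed=0):
    """SGD: update theta from a single example at a time.

    Per-example rule: theta_j := theta_j + alpha * (y_i - h(x_i)) * x_ij.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(epochs):
        for i in rng.permutation(m):       # visit examples in random order
            error = y[i] - X[i] @ theta    # scalar residual for example i
            theta += alpha * error * X[i]  # cheap update: one example, not m
    return theta

# Same toy data: y = 1 + 2*x1, with x0 = 1 in the first column.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
theta = stochastic_gradient_descent(X, y, alpha=0.05, epochs=500)
print(theta)  # ends up near [1, 2]
```

Each update touches only one training example, so the cost per step no longer grows with m; the trade-off is a noisier path toward the minimum.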
