
[Machine Learning - Practice] Linear Regression, Ridge Regression, Lasso Regression, and a TensorFlow Implementation of Linear Regression


Linear Regression, Ridge Regression, Lasso Regression

Preface
1. Linear Regression
2. Ridge Regression
3. Lasso Regression
4. Linear Regression with Gradient Descent in TensorFlow

Preface

This article presents simple example code for linear regression, ridge regression, and Lasso regression, along with a linear regression implemented in TensorFlow.

For the underlying theory, see:

[Machine Learning - Theory] Linear Regression, Ridge Regression, Lasso Regression

1. Linear Regression

from sklearn import linear_model

def test_linearRegression(X, y):
    # Fit ordinary least squares and print the learned parameters
    clf = linear_model.LinearRegression()
    clf.fit(X, y)
    print('linearRegression coef:', clf.coef_)
    print('linearRegression intercept:', clf.intercept_)

if __name__ == '__main__':
    X = [[0, 0], [1, 1], [2, 2.]]
    y = [[0], [1], [2.]]
    test_linearRegression(X, y)

linearRegression coef: [[0.5 0.5]]
linearRegression intercept: [1.11022302e-16]
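As a quick sanity check (my addition, not part of the original example), the fitted model can also be used for prediction: since the learned relation is roughly y ≈ 0.5·x1 + 0.5·x2 with an intercept of essentially 0, a new point such as [3, 3] should map to about 3.

from sklearn import linear_model

# Sketch: reuse the fitted OLS model for prediction (assumed follow-up)
X = [[0, 0], [1, 1], [2, 2.]]
y = [[0], [1], [2.]]
clf = linear_model.LinearRegression()
clf.fit(X, y)
print(clf.predict([[3, 3]]))  # expected to be close to [[3.]]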

2. Ridge Regression

from sklearn import linear_model

def test_ridge(X, y):
    # Ridge regression: least squares with an L2 penalty on the weights
    clf = linear_model.Ridge(alpha=.1)
    clf.fit(X, y)
    print('ridge coef:', clf.coef_)
    print('ridge intercept', clf.intercept_)

if __name__ == '__main__':
    X = [[0, 0], [1, 1], [2, 2.]]
    y = [[0], [1], [2.]]
    test_ridge(X, y)

ridge coef: [[0.48780488 0.48780488]]
ridge intercept [0.02439024]
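To get a feel for the alpha parameter (this sweep is my addition; the alpha values are illustrative), fitting Ridge with increasing regularization shows both coefficients shrinking smoothly toward 0 without ever reaching it:

from sklearn import linear_model

# Sketch: ridge coefficients shrink as the L2 penalty alpha grows
X = [[0, 0], [1, 1], [2, 2.]]
y = [[0], [1], [2.]]
for alpha in [0.01, 0.1, 1.0, 10.0]:
    clf = linear_model.Ridge(alpha=alpha)
    clf.fit(X, y)
    print('alpha =', alpha, 'coef =', clf.coef_)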

3. Lasso Regression

from sklearn import linear_model

def test_lasso(X, y):
    # Lasso regression: least squares with an L1 penalty on the weights
    clf = linear_model.Lasso(alpha=0.1)
    clf.fit(X, y)
    print('lasso coef:', clf.coef_)
    print('lasso intercept: ', clf.intercept_)

if __name__ == '__main__':
    X = [[0, 0], [1, 1], [2, 2.]]
    y = [[0], [1], [2.]]
    test_lasso(X, y)

As this example shows, the weight of the second feature is driven exactly to 0, which leads to a further conclusion:

Lasso can be used for feature selection, while ridge cannot. Put differently, Lasso tends to drive weights exactly to 0, whereas ridge only pushes them toward 0, as the short comparison sketch after the output below illustrates.

lasso coef: [0.85 0. ]
lasso intercept: [0.15]
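One way to see the feature-selection behaviour directly (a small comparison sketch I am adding, not from the original post) is to fit Lasso and Ridge on the same data and count exact zeros among the coefficients: Lasso zeroes out the redundant second feature, while Ridge keeps both weights nonzero.

import numpy as np
from sklearn import linear_model

# Sketch: Lasso drives a redundant feature's weight exactly to 0, Ridge only shrinks it
X = [[0, 0], [1, 1], [2, 2.]]
y = [0, 1, 2.]  # 1-D targets to keep the coefficient arrays 1-D
lasso = linear_model.Lasso(alpha=0.1).fit(X, y)
ridge = linear_model.Ridge(alpha=0.1).fit(X, y)
print('lasso coef:', lasso.coef_, 'exact zeros:', int(np.sum(lasso.coef_ == 0)))
print('ridge coef:', ridge.coef_, 'exact zeros:', int(np.sum(ridge.coef_ == 0)))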

4. Linear Regression with Gradient Descent in TensorFlow

import tensorflow as tf

TRAIN_STEPS = 10

def test_tensorflow1(X, y):
    # Linear model y = X @ w + b, trained by gradient descent on Keras' MSE loss
    w = tf.Variable(initial_value=[[1.0], [1.0]])
    b = tf.Variable(initial_value=0.)
    optimizer = tf.keras.optimizers.SGD(0.1)
    mse = tf.keras.losses.MeanSquaredError()
    for i in range(TRAIN_STEPS):
        with tf.GradientTape() as g:
            logit = tf.matmul(X, w) + b
            loss = mse(y, logit)
        gradients = g.gradient(target=loss, sources=[w, b])  # compute gradients
        optimizer.apply_gradients(zip(gradients, [w, b]))    # update w and b
    print("test_tensorflow1 w1:", w.numpy())
    print("test_tensorflow1 b:", b.numpy())

def test_tensorflow2(X, y):
    # Same model, but with the mean squared error written out by hand
    w = tf.Variable(initial_value=[[1.0], [1.0]])
    b = tf.Variable(initial_value=0.)
    optimizer = tf.keras.optimizers.SGD(0.1)
    for i in range(TRAIN_STEPS):
        with tf.GradientTape() as g:
            logit = tf.matmul(X, w) + b
            loss = tf.reduce_sum(tf.square(y - logit)) / 3   # mean over the 3 samples
        gradients = g.gradient(target=loss, sources=[w, b])  # compute gradients
        optimizer.apply_gradients(zip(gradients, [w, b]))    # update w and b
    print("test_tensorflow2 w1:", w.numpy())
    print("test_tensorflow2 b:", b.numpy())

if __name__ == '__main__':
    X = [[0, 0], [1, 1], [2, 2.]]
    y = [[0], [1], [2.]]
    # test_linearRegression, test_lasso and test_ridge from the earlier sections
    # can be run on the same data as well
    test_tensorflow1(X, y)
    test_tensorflow2(X, y)

test_tensorflow1 w1: [[0.5456011] [0.5456011]]
test_tensorflow1 b: -0.13680318
test_tensorflow2 w1: [[0.5456011] [0.5456011]]
test_tensorflow2 b: -0.13680318
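For comparison (my addition, a sketch using the higher-level Keras API rather than the manual GradientTape loop above), the same least-squares fit can be written as a single Dense layer trained with SGD. With enough epochs its predictions on the training points should approach [0, 1, 2], although with these perfectly collinear features the two individual weights are not uniquely determined.

import numpy as np
import tensorflow as tf

# Sketch: the same linear fit with a one-unit Dense layer and SGD (illustrative epoch count)
X = np.array([[0, 0], [1, 1], [2, 2.]], dtype=np.float32)
y = np.array([[0], [1], [2.]], dtype=np.float32)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(2,))])
model.compile(optimizer=tf.keras.optimizers.SGD(0.1), loss='mse')
model.fit(X, y, epochs=100, verbose=0)

w, b = model.layers[0].get_weights()
print('keras w:', w.ravel(), 'b:', b)
print('predictions:', model.predict(X, verbose=0).ravel())  # should be close to [0, 1, 2]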
