A super-simple TensorFlow introduction: optimization programs && TensorBoard visualization
Published: 2019-06-13
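A note before diving in: all three programs below use the TensorFlow 1.x graph API (tf.placeholder, tf.Session). If you are on TensorFlow 2.x, a commonly used workaround (my assumption, not part of the original post) is to swap the import for the v1 compatibility module:

# Assumption: running under TensorFlow 2.x; this shim restores the
# placeholder/Session-style graph execution used throughout this post.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()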


Program 1

Task: x = 3.0, y = 100.0, and the formula is x×W+b = y. Find the optimal W and b.

Implementation in TensorFlow:

# -*- coding: utf-8 -*-
import tensorflow as tf

# Declare placeholders x, y
x = tf.placeholder("float", shape=[None, 1])
y = tf.placeholder("float", [None, 1])

# Declare variables
W = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))

# Operation
result = tf.matmul(x, W) + b

# Loss function
lost = tf.reduce_sum(tf.pow((result - y), 2))

# Optimization
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(lost)

with tf.Session() as sess:
    # Initialize the variables
    sess.run(tf.global_variables_initializer())
    # Feed fixed values for x and y
    x_s = [[3.0]]
    y_s = [[100.0]]
    step = 0
    while True:
        step += 1
        feed = {x: x_s, y: y_s}
        # Run one optimization step via sess.run
        sess.run(train_step, feed_dict=feed)
        if step % 50 == 0:
            print('step: {0},  loss: {1}'.format(step, sess.run(lost, feed_dict=feed)))
            if sess.run(lost, feed_dict=feed) < 0.00001 or step > 3000:
                print('')
                print('final loss is: {}'.format(sess.run(lost, feed_dict=feed)))
                print('final result of {0} = {1}'.format('x×W+b', 3.0 * sess.run(W) + sess.run(b)))
                print("W : %f" % sess.run(W))
                print("b : %f" % sess.run(b))
                break

Output:

step: 50,  loss: 1326.19543457
step: 100,  loss: 175.879058838
step: 150,  loss: 23.325012207
step: 200,  loss: 3.09336590767
step: 250,  loss: 0.410243988037
step: 300,  loss: 0.0544071868062
step: 350,  loss: 0.00721317622811
step: 400,  loss: 0.000956638017669
step: 450,  loss: 0.000126981700305
step: 500,  loss: 1.68478582054e-05
step: 550,  loss: 2.23610550165e-06

final loss is: 2.23610550165e-06
final result of x×W+b = [[ 99.99850464]]
W : 29.999552
b : 9.999846

The task is simple. With the learning rate set to 0.001, optimization completes after 550 iterations; setting the initial learning rate a bit higher, e.g. 0.005, speeds up convergence. The solution found is W = 29.999552, b = 9.999846, so
x×W+b = 3.0×29.999552+9.999846 ≈ 99.9985, approximately the target 100.0.
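To make the update rule concrete, here is a minimal sketch (my own illustration, not from the original post) of what GradientDescentOptimizer does for this single sample, assuming plain gradient descent on the squared-error loss:

# Minimal sketch (assumption: plain SGD on loss = (x*W + b - y)**2,
# mirroring what GradientDescentOptimizer(0.001) does in Program 1).
x, y = 3.0, 100.0            # the fixed training pair
W, b, lr = 0.0, 0.0, 0.001   # zero-initialized parameters, same learning rate
for step in range(1, 3001):
    err = x * W + b - y      # residual of the current fit
    W -= lr * 2.0 * err * x  # d(loss)/dW = 2 * err * x
    b -= lr * 2.0 * err      # d(loss)/db = 2 * err
    if (x * W + b - y) ** 2 < 1e-5:
        break
print(step, W, b)  # stops after roughly 500 steps, with W near 30 and b near 10

Note that one equation in two unknowns is underdetermined: every (W, b) with 3W + b = 100 fits exactly. Because each step changes W by exactly x = 3 times the change in b, gradient descent from zero initialization lands on the particular solution W = 3b, i.e. W ≈ 30, b ≈ 10.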

Program 2

Task: x and y are 2×2 matrices, x = [[1.0, 3.0], [3.2, 4.]], y = [[6.0, 3.0], [5.2, 43.]], and the formula is x×W+b = y. Find the optimal W and b.
# -*- coding: utf-8 -*-
import tensorflow as tf

# Declare placeholders x, y with shape [2, 2]
x = tf.placeholder("float", shape=[2, 2])
y = tf.placeholder("float", [2, 2])

# Declare variables
W = tf.Variable(tf.zeros([2, 2]))
b = tf.Variable(tf.zeros([1]))

# Operation
result = tf.matmul(x, W) + b

# Loss function
lost = tf.reduce_sum(tf.pow((y - result), 2))

# Optimization
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(lost)

with tf.Session() as sess:
    # Initialize the variables
    sess.run(tf.global_variables_initializer())
    # Feed fixed values for x and y
    x_s = [[1.0, 3.0], [3.2, 4.]]
    y_s = [[6.0, 3.0], [5.2, 43.]]
    step = 0
    while True:
        step += 1
        feed = {x: x_s, y: y_s}
        # Run one optimization step via sess.run
        sess.run(train_step, feed_dict=feed)
        if step % 500 == 0:
            print('step: {0},  loss: {1}'.format(step, sess.run(lost, feed_dict=feed)))
            if sess.run(lost, feed_dict=feed) < 0.00001 or step > 10000:
                print('')
                print('final loss is: {}'.format(sess.run(lost, feed_dict=feed)))
                print("W : {}".format(sess.run(W)))
                print("b : {}".format(sess.run(b)))
                result1 = tf.matmul(x_s, W) + b
                print('final result is: {}'.format(sess.run(result1)))
                print('final error is: {}'.format(sess.run(result1) - y_s))
                break

Output:

step: 500,  loss: 59.3428421021
step: 1000,  loss: 8.97444725037
step: 1500,  loss: 1.40089821815
step: 2000,  loss: 0.22409722209
step: 2500,  loss: 0.036496296525
step: 3000,  loss: 0.00602086028084
step: 3500,  loss: 0.00100283313077
step: 4000,  loss: 0.000168772909092
step: 4500,  loss: 2.86664580926e-05
step: 5000,  loss: 4.90123693453e-06

final loss is: 4.90123693453e-06
W : [[ -2.12640238  20.26368904]
 [  3.87999701  -4.58247852]]
b : [-3.51479006]
final result is: [[  5.99879789   3.00146341]
 [  5.20070982  42.99909973]]
final error is: [[-0.00120211  0.00146341]
 [ 0.00070982 -0.00090027]]
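With a 2×2 W plus a shared scalar b there are five unknowns but only four equations, so this system is also underdetermined and gradient descent converges to one of many exact solutions. As a cross-check (my own sketch, not part of the original post), the same fit can be written as a small linear least-squares problem and solved in closed form; np.linalg.lstsq returns the minimum-norm solution, which fits exactly but generally differs from the one gradient descent finds:

# Closed-form cross-check (assumption: not in the original post).
import numpy as np

x = np.array([[1.0, 3.0], [3.2, 4.0]])
y = np.array([[6.0, 3.0], [5.2, 43.0]])

# Unknowns: theta = [W00, W01, W10, W11, b]; each row encodes one entry of y.
A = np.array([
    [x[0, 0], 0.0,     x[0, 1], 0.0,     1.0],  # equation for y[0, 0]
    [0.0,     x[0, 0], 0.0,     x[0, 1], 1.0],  # equation for y[0, 1]
    [x[1, 0], 0.0,     x[1, 1], 0.0,     1.0],  # equation for y[1, 0]
    [0.0,     x[1, 0], 0.0,     x[1, 1], 1.0],  # equation for y[1, 1]
])
theta, *_ = np.linalg.lstsq(A, y.flatten(), rcond=None)
W = theta[:4].reshape(2, 2)
b = theta[4]
print(W, b)  # an exact solution (minimum-norm), one of infinitely many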

Program 3 (adding visualization)

Task: X consists of 128 two-dimensional samples [x1, x2], and Y is a function of x1 and x2: Y = x1 + 10*x2. The model is Y = (X×w1+b1)×w2+b2 (the code below additionally applies ReLU activations). Find the optimal w1, b1, w2, b2.
# -*- coding: utf-8 -*-
import tensorflow as tf
from numpy.random import RandomState

# Size of each training batch
batch_size = 8

# None in a shape means that dimension's size is not fixed
x = tf.placeholder(tf.float32, shape=(None, 2), name='x-input')
y_ = tf.placeholder(tf.float32, shape=(None, 1), name='y-input')

# Define the network parameters
w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))
bias1 = tf.Variable(tf.random_normal([3], stddev=1, seed=1))
bias2 = tf.Variable(tf.random_normal([1], stddev=1, seed=1))

# Forward pass of the network
a = tf.nn.relu(tf.matmul(x, w1) + bias1)
y = tf.nn.relu(tf.matmul(a, w2) + bias2)

# Loss function and backpropagation (Adam optimizer)
loss = tf.reduce_sum(tf.pow((y - y_), 2))
train_step = tf.train.AdamOptimizer(0.001).minimize(loss)

# Produce the data: generate a synthetic dataset from random numbers
rdm = RandomState(seed=1)  # fix seed = 1 so the random numbers are reproducible
dataset_size = 128
X = rdm.rand(dataset_size, 2)
Y = [[x1 + 10 * x2] for (x1, x2) in X]

# Create a session to run the TensorFlow program
with tf.Session() as sess:
    # Define name scopes and summaries for TensorBoard visualization
    with tf.name_scope("inputs"):
        tf.summary.histogram('X', X)
    with tf.name_scope("target"):
        tf.summary.histogram('Target', Y)
    with tf.name_scope("outputs"):
        tf.summary.histogram('Y', y)
    with tf.name_scope('loss'):
        tf.summary.histogram('Loss', loss)
    summary_op = tf.summary.merge_all()
    summary_writer = tf.summary.FileWriter('./log/', tf.get_default_graph())

    # Initialize the variables
    sess.run(tf.global_variables_initializer())

    # Number of training steps
    STEPS = 10000
    for i in range(STEPS + 1):
        # Select batch_size samples for this training step
        start = (i * batch_size) % dataset_size
        end = min(start + batch_size, dataset_size)
        # Train on the selected samples and update the parameters
        sess.run(train_step, feed_dict={x: X[start: end], y_: Y[start: end]})
        if i % 500 == 0:
            # Periodically evaluate and print the loss on the full dataset
            total_loss, summary = sess.run([loss, summary_op], feed_dict={x: X, y_: Y})
            print("After %d training steps, loss on all data is %g" % (i, total_loss))
            # Record the summaries (reusing the writer, rather than creating
            # a new FileWriter on every report as the original code did)
            summary_writer.add_summary(summary, i)

    # After training, print the network parameters
    print(sess.run(w1))
    print(sess.run(w2))
Output:
After 0 training steps, loss on all data is 2599.94
After 500 training steps, loss on all data is 873.661
After 1000 training steps, loss on all data is 667.791
After 1500 training steps, loss on all data is 483.075
After 2000 training steps, loss on all data is 300.244
After 2500 training steps, loss on all data is 159.576
After 3000 training steps, loss on all data is 74.0152
After 3500 training steps, loss on all data is 30.0223
After 4000 training steps, loss on all data is 10.8486
After 4500 training steps, loss on all data is 3.86847
After 5000 training steps, loss on all data is 1.67753
After 5500 training steps, loss on all data is 0.870904
After 6000 training steps, loss on all data is 0.473931
After 6500 training steps, loss on all data is 0.262818
After 7000 training steps, loss on all data is 0.132299
After 7500 training steps, loss on all data is 0.0585541
After 8000 training steps, loss on all data is 0.022748
After 8500 training steps, loss on all data is 0.00789603
After 9000 training steps, loss on all data is 0.00259982
After 9500 training steps, loss on all data is 0.000722203
After 10000 training steps, loss on all data is 0.000218332
[[-0.81131822  0.74178803 -0.06654923]
 [-2.4427042   1.72580242  3.50584793]]
[[-0.81131822]
 [ 1.53606057]
 [ 2.09628034]]
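One possible refinement of the logging above (my suggestion, not in the original program): since the loss is a scalar, tf.summary.scalar plots it as a curve under TensorBoard's SCALARS tab, which is usually easier to read than a histogram:

# Assumption: a variation on the logging above, not part of the original code.
# A scalar summary shows the loss as a training curve in the SCALARS tab.
with tf.name_scope('loss'):
    tf.summary.scalar('Loss', loss)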

TensorBoard visualization

After Program 3 runs, a log folder is created in the program's directory; it stores the intermediate parameters recorded during the run.
From the directory containing the log folder, run the tensorboard command:
tensorboard --logdir=log
Then open the address TensorBoard reports, here "http://dcrmg:6006", in a browser on the current machine to see the visualizations of the parameters recorded by the program:

X is a two-dimensional array, and Target consists of 128 floating-point values distributed between about 1.0 and 10.0. After roughly 4000 training steps, the predicted Y gets closer and closer to the true Target values.

Histogram distributions:

Reposted from: https://www.cnblogs.com/mtcnn/p/9411766.html
