Study Notes on 《TensorFlow实战Google深度学习框架》, Part 7 (Windows 10): MNIST Digit Recognition


01 Reading the MNIST data


# 《TensorFlow实战Google深度学习框架》05 MNIST digit recognition
# Win10, TensorFlow 1.0.1, Python 3.5.3
# CUDA v8.0, cudnn-8.0-windows10-x64-v5.1
# filename: ts05.01.py # reading the MNIST dataset

import tensorflow as tf

# Yann LeCun's website (http://yann.lecun.com/exdb/mnist ) describes the MNIST dataset in detail.
# 1. Load the dataset; on the first run TensorFlow downloads it automatically to the path below.
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("../../datasets/MNIST_data/", one_hot=True)

# 2. The dataset is automatically split into three subsets: train, validation and test.
#    The following code prints the size of each.
print("Training data size: ", mnist.train.num_examples)
print("Validating data size: ", mnist.validation.num_examples)
print("Testing data size: ", mnist.test.num_examples)

# 3. Inspect the one-dimensional array produced from one training example's pixel
#    matrix, and the digit label it belongs to.
print("Example training data: ", mnist.train.images[0])
print("Example training data label: ", mnist.train.labels[0])

# 4. Use mnist.train.next_batch to implement stochastic gradient descent.
batch_size = 100
xs, ys = mnist.train.next_batch(batch_size)    # draw batch_size training examples from the train set
print("X shape:", xs.shape)
print("Y shape:", ys.shape)

'''
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting ../../datasets/MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting ../../datasets/MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting ../../datasets/MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting ../../datasets/MNIST_data/t10k-labels-idx1-ubyte.gz
Training data size:  55000
Validating data size:  5000
Testing data size:  10000
Example training data:  [ 0.          0.          0.          0.          0.          0.          0.
  ...
  0.          0.          0.          0.          0.          0.        ]
  (784 normalized pixel values in [0, 1]: zeros form the blank border of the
  28x28 image, and values near 1 such as 0.99607849 trace the stroke of the digit)
Example training data label:  [ 0.  0.  0.  0.  0.  0.  0.  1.  0.  0.]
X shape: (100, 784)
Y shape: (100, 10)
'''
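
Each image arrives as a flat 784-element float vector (pixels scaled to [0, 1]) and each label as a one-hot vector of length 10. As a quick sanity check you can reshape an example back to its 28x28 grid and render it as ASCII art; this is a minimal sketch that assumes the mnist object loaded above and that NumPy is installed:

import numpy as np

image = mnist.train.images[0].reshape(28, 28)  # flat 784 vector -> 28x28 grid
label = np.argmax(mnist.train.labels[0])       # one-hot vector -> digit class

# Coarse ASCII rendering: '#' for bright pixels, '.' for dark ones
for row in image:
    print("".join("#" if pixel > 0.5 else "." for pixel in row))
print("label:", label)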

02 Full model


# 《TensorFlow实战Google深度学习框架》05 MNIST digit recognition
# Win10, TensorFlow 1.0.1, Python 3.5.3
# CUDA v8.0, cudnn-8.0-windows10-x64-v5.1
# filename: ts05.02.py # training the neural network with TensorFlow -- full model

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# 1. Set the numbers of input and output nodes and configure the network parameters.
INPUT_NODE = 784   # input nodes (28x28 pixels)
OUTPUT_NODE = 10   # output nodes (digit classes 0-9)
LAYER1_NODE = 500  # number of nodes in the hidden layer

BATCH_SIZE = 100   # number of samples packed into each batch

# Model hyperparameters
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
REGULARAZTION_RATE = 0.0001
TRAINING_STEPS = 5000
MOVING_AVERAGE_DECAY = 0.99

# 2. Helper function that computes the forward pass, with ReLU as the activation function.
def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # Without the moving-average class
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return tf.matmul(layer1, weights2) + biases2
    else:
        # With the moving-average class
        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1)) + avg_class.average(biases1))
        return tf.matmul(layer1, avg_class.average(weights2)) + avg_class.average(biases2)

# 3. Define the training procedure.
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')
    # Hidden-layer parameters
    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER1_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))
    # Output-layer parameters
    weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE], stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    # Forward pass without the moving averages
    y = inference(x, None, weights1, biases1, weights2, biases2)

    # Training-step counter and the associated moving-average class
    global_step = tf.Variable(0, trainable=False)
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    average_y = inference(x, variable_averages, weights1, biases1, weights2, biases2)

    # Cross-entropy and its mean
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)

    # Total loss: cross-entropy plus L2 regularization of the weights
    regularizer = tf.contrib.layers.l2_regularizer(REGULARAZTION_RATE)
    regularaztion = regularizer(weights1) + regularizer(weights2)
    loss = cross_entropy_mean + regularaztion

    # Exponentially decaying learning rate
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,
        mnist.train.num_examples / BATCH_SIZE,
        LEARNING_RATE_DECAY,
        staircase=True)

    # Minimize the loss
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

    # Run backpropagation and the moving-average update of every parameter together
    with tf.control_dependencies([train_step, variables_averages_op]):
        train_op = tf.no_op(name='train')

    # Accuracy
    correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Initialize the session and start the training process.
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        validate_feed = {x: mnist.validation.images, y_: mnist.validation.labels}
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        # Training loop
        for i in range(TRAINING_STEPS):
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print("After %d training step(s), validation accuracy using average model is %g " % (i, validate_acc))

            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print("After %d training step(s), test accuracy using average model is %g" % (TRAINING_STEPS, test_acc))


# 4. Program entry point; the model trains for 5000 steps.
def main(argv=None):
    mnist = input_data.read_data_sets("../../../datasets/MNIST_data", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    main()

'''
Extracting ../../../datasets/MNIST_data\train-images-idx3-ubyte.gz
Extracting ../../../datasets/MNIST_data\train-labels-idx1-ubyte.gz
Extracting ../../../datasets/MNIST_data\t10k-images-idx3-ubyte.gz
Extracting ../../../datasets/MNIST_data\t10k-labels-idx1-ubyte.gz
After 0 training step(s), validation accuracy using average model is 0.1284
After 1000 training step(s), validation accuracy using average model is 0.9764
After 2000 training step(s), validation accuracy using average model is 0.9806
After 3000 training step(s), validation accuracy using average model is 0.9818
After 4000 training step(s), validation accuracy using average model is 0.9822
After 5000 training step(s), test accuracy using average model is 0.9822
'''
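
The shadow variable kept by tf.train.ExponentialMovingAverage is updated as shadow = decay * shadow + (1 - decay) * variable, and when a step counter is passed in the effective decay is min(MOVING_AVERAGE_DECAY, (1 + num_updates) / (10 + num_updates)), so early in training the average tracks the variable more closely. A minimal standalone sketch of that behavior, independent of the model above (TF 1.x assumed):

import tensorflow as tf

v = tf.Variable(0, dtype=tf.float32)
step = tf.Variable(0, trainable=False)       # plays the role of global_step
ema = tf.train.ExponentialMovingAverage(0.99, step)
maintain_averages_op = ema.apply([v])        # creates the shadow variable for v

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.assign(v, 5))
    sess.run(maintain_averages_op)
    # At step 0 the effective decay is min(0.99, 1/10) = 0.1,
    # so the shadow value becomes 0.1 * 0 + 0.9 * 5 = 4.5
    print(sess.run([v, ema.average(v)]))     # [5.0, 4.5]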

03 Without regularization


# 《TensorFlow实战Google深度学习框架》05 MNIST digit recognition
# Win10, TensorFlow 1.0.1, Python 3.5.3
# CUDA v8.0, cudnn-8.0-windows10-x64-v5.1
# filename: ts05.03.py # training the neural network with TensorFlow -- without regularization

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# 1. Set the numbers of input and output nodes and configure the network parameters.
INPUT_NODE = 784   # input nodes (28x28 pixels)
OUTPUT_NODE = 10   # output nodes (digit classes 0-9)
LAYER1_NODE = 500  # number of nodes in the hidden layer

BATCH_SIZE = 100   # number of samples packed into each batch

# Model hyperparameters
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99

TRAINING_STEPS = 5000
MOVING_AVERAGE_DECAY = 0.99

# 2. Helper function that computes the forward pass, with ReLU as the activation function.
def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # Without the moving-average class
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return tf.matmul(layer1, weights2) + biases2
    else:
        # With the moving-average class
        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1)) + avg_class.average(biases1))
        return tf.matmul(layer1, avg_class.average(weights2)) + avg_class.average(biases2)

# 3. Define the training procedure.
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')
    # Hidden-layer parameters
    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER1_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))
    # Output-layer parameters
    weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE], stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    # Forward pass without the moving averages
    y = inference(x, None, weights1, biases1, weights2, biases2)

    # Training-step counter and the associated moving-average class
    global_step = tf.Variable(0, trainable=False)
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    average_y = inference(x, variable_averages, weights1, biases1, weights2, biases2)

    # Cross-entropy and its mean
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)

    # Total loss: cross-entropy only, with no regularization term
    loss = cross_entropy_mean

    # Exponentially decaying learning rate
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,
        mnist.train.num_examples / BATCH_SIZE,
        LEARNING_RATE_DECAY,
        staircase=True)

    # Minimize the loss
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

    # Run backpropagation and the moving-average update of every parameter together
    with tf.control_dependencies([train_step, variables_averages_op]):
        train_op = tf.no_op(name='train')

    # Accuracy
    correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Initialize the session and start the training process.
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        validate_feed = {x: mnist.validation.images, y_: mnist.validation.labels}
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        # Training loop
        for i in range(TRAINING_STEPS):
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print("After %d training step(s), validation accuracy using average model is %g " % (i, validate_acc))

            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print("After %d training step(s), test accuracy using average model is %g" % (TRAINING_STEPS, test_acc))

# 4. Program entry point; the model trains for 5000 steps.
def main(argv=None):
    mnist = input_data.read_data_sets("../../../datasets/MNIST_data", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    main()

'''
Extracting ../../../datasets/MNIST_data\train-images-idx3-ubyte.gz
Extracting ../../../datasets/MNIST_data\train-labels-idx1-ubyte.gz
Extracting ../../../datasets/MNIST_data\t10k-images-idx3-ubyte.gz
Extracting ../../../datasets/MNIST_data\t10k-labels-idx1-ubyte.gz
After 0 training step(s), validation accuracy using average model is 0.1138
After 1000 training step(s), validation accuracy using average model is 0.9754
After 2000 training step(s), validation accuracy using average model is 0.979
After 3000 training step(s), validation accuracy using average model is 0.982
After 4000 training step(s), validation accuracy using average model is 0.9826
After 5000 training step(s), test accuracy using average model is 0.9817
'''
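
For reference, the term dropped here is the L2 penalty lambda * sum(w^2) / 2, which the full model adds for both weight matrices. A small sketch checking tf.contrib.layers.l2_regularizer against the hand-computed value (TF 1.x assumed; the constant matrix is just an illustrative choice):

import tensorflow as tf

weights = tf.constant([[1.0, -2.0], [3.0, 4.0]])
l2 = tf.contrib.layers.l2_regularizer(0.5)(weights)    # lambda = 0.5
manual = 0.5 * tf.reduce_sum(tf.square(weights)) / 2   # lambda * sum(w^2) / 2

with tf.Session() as sess:
    # sum(w^2) = 1 + 4 + 9 + 16 = 30, so both print 7.5
    print(sess.run([l2, manual]))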

04 Without an exponentially decaying learning rate


# 《TensorFlow实战Google深度学习框架》05 MNIST digit recognition
# Win10, TensorFlow 1.0.1, Python 3.5.3
# CUDA v8.0, cudnn-8.0-windows10-x64-v5.1
# filename: ts05.04.py # training the neural network with TensorFlow -- without learning-rate decay

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# 1. Set the numbers of input and output nodes and configure the network parameters.
INPUT_NODE = 784   # input nodes (28x28 pixels)
OUTPUT_NODE = 10   # output nodes (digit classes 0-9)
LAYER1_NODE = 500  # number of nodes in the hidden layer

BATCH_SIZE = 100   # number of samples packed into each batch

# Model hyperparameters
LEARNING_RATE = 0.1
REGULARAZTION_RATE = 0.0001
TRAINING_STEPS = 5000
MOVING_AVERAGE_DECAY = 0.99

# 2. Helper function that computes the forward pass, with ReLU as the activation function.
def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # Without the moving-average class
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return tf.matmul(layer1, weights2) + biases2
    else:
        # With the moving-average class
        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1)) + avg_class.average(biases1))
        return tf.matmul(layer1, avg_class.average(weights2)) + avg_class.average(biases2)

# 3. Define the training procedure.
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')
    # Hidden-layer parameters
    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER1_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))
    # Output-layer parameters
    weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE], stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    # Forward pass without the moving averages
    y = inference(x, None, weights1, biases1, weights2, biases2)

    # Training-step counter and the associated moving-average class
    global_step = tf.Variable(0, trainable=False)
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    average_y = inference(x, variable_averages, weights1, biases1, weights2, biases2)

    # Cross-entropy and its mean
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)

    # Total loss: cross-entropy plus L2 regularization of the weights
    regularizer = tf.contrib.layers.l2_regularizer(REGULARAZTION_RATE)
    regularaztion = regularizer(weights1) + regularizer(weights2)
    loss = cross_entropy_mean + regularaztion

    # Minimize the loss with a fixed learning rate (no decay schedule in this variant)
    train_step = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(loss, global_step=global_step)

    # Run backpropagation and the moving-average update of every parameter together
    with tf.control_dependencies([train_step, variables_averages_op]):
        train_op = tf.no_op(name='train')

    # Accuracy
    correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Initialize the session and start the training process.
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        validate_feed = {x: mnist.validation.images, y_: mnist.validation.labels}
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        # Training loop
        for i in range(TRAINING_STEPS):
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print("After %d training step(s), validation accuracy using average model is %g " % (i, validate_acc))

            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print("After %d training step(s), test accuracy using average model is %g" % (TRAINING_STEPS, test_acc))

# 4. Program entry point; the model trains for 5000 steps.
def main(argv=None):
    mnist = input_data.read_data_sets("../../../datasets/MNIST_data", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    main()
'''
Extracting ../../../datasets/MNIST_data\train-images-idx3-ubyte.gz
Extracting ../../../datasets/MNIST_data\train-labels-idx1-ubyte.gz
Extracting ../../../datasets/MNIST_data\t10k-images-idx3-ubyte.gz
Extracting ../../../datasets/MNIST_data\t10k-labels-idx1-ubyte.gz
After 0 training step(s), validation accuracy using average model is 0.1076
After 1000 training step(s), validation accuracy using average model is 0.9462
After 2000 training step(s), validation accuracy using average model is 0.9636
After 3000 training step(s), validation accuracy using average model is 0.969
After 4000 training step(s), validation accuracy using average model is 0.9716
After 5000 training step(s), test accuracy using average model is 0.973
'''
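
With staircase=True, the schedule removed here computes learning_rate = LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** (global_step // decay_steps), i.e. with 55000 training examples and batches of 100 the rate drops by 1% after every epoch of 550 batches. A tiny sketch of that formula (TF 1.x assumed; the step value 550 is just an illustrative choice):

import tensorflow as tf

step = tf.Variable(550, trainable=False)  # pretend one epoch of 550 batches is done
lr = tf.train.exponential_decay(0.8, step, 550, 0.99, staircase=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(lr))  # 0.8 * 0.99 ** (550 // 550) = 0.792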

05 Without an activation function


# 《TensorFlow实战Google深度学习框架》05 MNIST digit recognition
# Win10, TensorFlow 1.0.1, Python 3.5.3
# CUDA v8.0, cudnn-8.0-windows10-x64-v5.1
# filename: ts05.05.py # training the neural network with TensorFlow -- without an activation function

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# 1. Set the numbers of input and output nodes and configure the network parameters.
INPUT_NODE = 784   # input nodes (28x28 pixels)
OUTPUT_NODE = 10   # output nodes (digit classes 0-9)
LAYER1_NODE = 500  # number of nodes in the hidden layer

BATCH_SIZE = 100   # number of samples packed into each batch

# Model hyperparameters
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
REGULARAZTION_RATE = 0.0001
TRAINING_STEPS = 5000
MOVING_AVERAGE_DECAY = 0.99

# 2. Helper function that computes the forward pass; the activation function is
#    omitted in this variant, so both layers are purely linear.
def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # Without the moving-average class
    if avg_class is None:
        layer1 = tf.matmul(input_tensor, weights1) + biases1
        return tf.matmul(layer1, weights2) + biases2
    else:
        # With the moving-average class
        layer1 = tf.matmul(input_tensor, avg_class.average(weights1)) + avg_class.average(biases1)
        return tf.matmul(layer1, avg_class.average(weights2)) + avg_class.average(biases2)

# 3. Define the training procedure.
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')
    # Hidden-layer parameters
    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER1_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))
    # Output-layer parameters
    weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE], stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    # Forward pass without the moving averages
    y = inference(x, None, weights1, biases1, weights2, biases2)

    # Training-step counter and the associated moving-average class
    global_step = tf.Variable(0, trainable=False)
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    average_y = inference(x, variable_averages, weights1, biases1, weights2, biases2)

    # Cross-entropy and its mean
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)

    # Total loss: cross-entropy plus L2 regularization of the weights
    regularizer = tf.contrib.layers.l2_regularizer(REGULARAZTION_RATE)
    regularaztion = regularizer(weights1) + regularizer(weights2)
    loss = cross_entropy_mean + regularaztion

    # Exponentially decaying learning rate
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,
        mnist.train.num_examples / BATCH_SIZE,
        LEARNING_RATE_DECAY,
        staircase=True)

    # Minimize the loss
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

    # Run backpropagation and the moving-average update of every parameter together
    with tf.control_dependencies([train_step, variables_averages_op]):
        train_op = tf.no_op(name='train')

    # Accuracy
    correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Initialize the session and start the training process.
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        validate_feed = {x: mnist.validation.images, y_: mnist.validation.labels}
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        # Training loop
        for i in range(TRAINING_STEPS):
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print("After %d training step(s), validation accuracy using average model is %g " % (i, validate_acc))

            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print("After %d training step(s), test accuracy using average model is %g" % (TRAINING_STEPS, test_acc))

# 4. Program entry point; the model trains for 5000 steps.
def main(argv=None):
    mnist = input_data.read_data_sets("../../../datasets/MNIST_data", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    main()

'''
After 0 training step(s), validation accuracy using average model is 0.08
After 1000 training step(s), validation accuracy using average model is 0.0958
After 2000 training step(s), validation accuracy using average model is 0.0958
After 3000 training step(s), validation accuracy using average model is 0.0958
After 4000 training step(s), validation accuracy using average model is 0.0958
After 5000 training step(s), test accuracy using average model is 0.098
'''
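
Without a nonlinearity the two-layer network is mathematically just one linear map, since W2(xW1 + b1) + b2 = x(W1W2) + (b1W2 + b2); the hidden layer then adds no representational power, and as the log above shows this configuration fails to learn the task. A quick NumPy check of that identity (NumPy assumed available; the shapes are arbitrary):

import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(1, 4)
W1, b1 = rng.randn(4, 3), rng.randn(3)
W2, b2 = rng.randn(3, 2), rng.randn(2)

two_layer = (x.dot(W1) + b1).dot(W2) + b2          # "deep" linear network
one_layer = x.dot(W1.dot(W2)) + (b1.dot(W2) + b2)  # equivalent single layer
print(np.allclose(two_layer, one_layer))           # True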

06 Without a hidden layer


# 《TensorFlow实战Google深度学习框架》05 MNIST digit recognition
# Win10, TensorFlow 1.0.1, Python 3.5.3
# CUDA v8.0, cudnn-8.0-windows10-x64-v5.1
# filename: ts05.06.py # training the neural network with TensorFlow -- without a hidden layer

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# 1. Set the numbers of input and output nodes and configure the network parameters.
INPUT_NODE = 784     # input nodes (28x28 pixels)
OUTPUT_NODE = 10     # output nodes (digit classes 0-9)

BATCH_SIZE = 100     # number of samples packed into each batch

# Model hyperparameters
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
REGULARAZTION_RATE = 0.0001
TRAINING_STEPS = 5000
MOVING_AVERAGE_DECAY = 0.99

# 2. Helper function that computes the forward pass of the single-layer network,
#    with ReLU as the activation function.
def inference(input_tensor, avg_class, weights1, biases1):
    # Without the moving-average class
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return layer1
    else:
        # With the moving-average class
        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1)) + avg_class.average(biases1))
        return layer1

# 3. Define the training procedure.
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')

    # Output-layer parameters (the only layer in this variant)
    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, OUTPUT_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    # Forward pass without the moving averages
    y = inference(x, None, weights1, biases1)

    # Training-step counter and the associated moving-average class
    global_step = tf.Variable(0, trainable=False)
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    average_y = inference(x, variable_averages, weights1, biases1)

    # Cross-entropy and its mean
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)

    # Total loss: cross-entropy plus L2 regularization of the weights
    regularizer = tf.contrib.layers.l2_regularizer(REGULARAZTION_RATE)
    regularaztion = regularizer(weights1)
    loss = cross_entropy_mean + regularaztion

    # Exponentially decaying learning rate
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,
        mnist.train.num_examples / BATCH_SIZE,
        LEARNING_RATE_DECAY,
        staircase=True)

    # Minimize the loss
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

    # Run backpropagation and the moving-average update of every parameter together
    with tf.control_dependencies([train_step, variables_averages_op]):
        train_op = tf.no_op(name='train')

    # Accuracy
    correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Initialize the session and start the training process.
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        validate_feed = {x: mnist.validation.images, y_: mnist.validation.labels}
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        # Training loop
        for i in range(TRAINING_STEPS):
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print("After %d training step(s), validation accuracy using average model is %g " % (i, validate_acc))

            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print("After %d training step(s), test accuracy using average model is %g" % (TRAINING_STEPS, test_acc))

# 4. Program entry point; the model trains for 5000 steps.
def main(argv=None):
    mnist = input_data.read_data_sets("../../../datasets/MNIST_data", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    main()

'''
After 0 training step(s), validation accuracy using average model is 0.1166
After 1000 training step(s), validation accuracy using average model is 0.6498
After 2000 training step(s), validation accuracy using average model is 0.7536
After 3000 training step(s), validation accuracy using average model is 0.7546
After 4000 training step(s), validation accuracy using average model is 0.7552
After 5000 training step(s), test accuracy using average model is 0.7501
'''
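
Removing the hidden layer cuts capacity drastically, which is consistent with the drop to about 75% accuracy: the full model has 784*500 + 500 + 500*10 + 10 = 397,510 trainable parameters, while this one has only 784*10 + 10 = 7,850 (and applying ReLU directly to the output also clamps all negative logits to zero). Quick arithmetic check:

print(784 * 500 + 500 + 500 * 10 + 10)  # 397510 parameters in the full model
print(784 * 10 + 10)                    # 7850 parameters without the hidden layer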

07 Without moving averages


# 《TensorFlow实战Google深度学习框架》05 MNIST digit recognition
# Win10, TensorFlow 1.0.1, Python 3.5.3
# CUDA v8.0, cudnn-8.0-windows10-x64-v5.1
# filename: ts05.07.py # training the neural network with TensorFlow -- without moving averages

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# 1. Set the numbers of input and output nodes and configure the network parameters.
INPUT_NODE = 784   # input nodes (28x28 pixels)
OUTPUT_NODE = 10   # output nodes (digit classes 0-9)
LAYER1_NODE = 500  # number of nodes in the hidden layer

BATCH_SIZE = 100   # number of samples packed into each batch

# Model hyperparameters
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
REGULARAZTION_RATE = 0.0001
TRAINING_STEPS = 5000

# 2. Helper function that computes the forward pass, with ReLU as the activation function.
def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # Without the moving-average class
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return tf.matmul(layer1, weights2) + biases2
    else:
        # With the moving-average class (never exercised in this variant)
        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1)) + avg_class.average(biases1))
        return tf.matmul(layer1, avg_class.average(weights2)) + avg_class.average(biases2)

# 3. Define the training procedure.
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')
    # Hidden-layer parameters
    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER1_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))
    # Output-layer parameters
    weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE], stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    # Forward pass without the moving averages
    y = inference(x, None, weights1, biases1, weights2, biases2)

    # Training-step counter (no moving-average class in this variant)
    global_step = tf.Variable(0, trainable=False)

    # Cross-entropy and its mean
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)

    # Total loss: cross-entropy plus L2 regularization of the weights
    regularizer = tf.contrib.layers.l2_regularizer(REGULARAZTION_RATE)
    regularaztion = regularizer(weights1) + regularizer(weights2)
    loss = cross_entropy_mean + regularaztion

    # Exponentially decaying learning rate
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,
        mnist.train.num_examples / BATCH_SIZE,
        LEARNING_RATE_DECAY,
        staircase=True)

    # Minimize the loss
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

    # Backpropagation only; there is no moving-average update to run
    with tf.control_dependencies([train_step]):
        train_op = tf.no_op(name='train')

    # Accuracy, evaluated directly on the raw forward pass y
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Initialize the session and start the training process.
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        validate_feed = {x: mnist.validation.images, y_: mnist.validation.labels}
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        # Training loop
        for i in range(TRAINING_STEPS):
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print("After %d training step(s), validation accuracy using average model is %g " % (i, validate_acc))

            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print("After %d training step(s), test accuracy using average model is %g" % (TRAINING_STEPS, test_acc))

# 4. Program entry point; the model trains for 5000 steps.
def main(argv=None):
    mnist = input_data.read_data_sets("../../../datasets/MNIST_data", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    main()

'''
After 0 training step(s), validation accuracy using average model is 0.0978
After 1000 training step(s), validation accuracy using average model is 0.9726
After 2000 training step(s), validation accuracy using average model is 0.9808
After 3000 training step(s), validation accuracy using average model is 0.9816
After 4000 training step(s), validation accuracy using average model is 0.9818
After 5000 training step(s), test accuracy using average model is 0.9832
'''
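
Putting the six variants side by side (accuracies as printed above; "validation" is the step-4000 value, "test" the step-5000 value):

Variant                          Validation@4000   Test@5000
Full model                       0.9822            0.9822
Without regularization           0.9826            0.9817
Without learning-rate decay      0.9716            0.973
Without activation function      0.0958            0.098
Without hidden layer             0.7552            0.7501
Without moving averages          0.9818            0.9832

The activation function and the hidden layer are clearly essential. On this short 5000-step run the effects of regularization, learning-rate decay and moving averages are small; the no-moving-average run even edges out the full model on test accuracy here.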
