

Machine Learning Primer (Watermelon Book + Pumpkin Book): A Neural Network Summary (with Python implementation)

1. Neural Networks

1.1 An Intuitive Picture

This chapter is harder to digest than the earlier ones, so I will try to explain in plain language: what is a neural network? Where did it come from? Why has it become so hot in recent years? And what is the deep learning built on top of it? Let's lift the veil step by step.

The neural network model actually originated from the linear model we all know. Yes, you heard that right: the simple

$y = kx + b$.

Back then it was called the perceptron. It took its inspiration from the structure of the neuron; bluntly put, it is a fairly simple linear combination of connections feeding an output. The figure below shows a neuron. The dendrites receive impulses from the axons of other neurons and pass them to the cell body; the axon carries the cell body's response on to other cells. A nerve cell can be viewed as a machine with two states: "yes" when activated, "no" when not. Its state depends on how much signal it receives from other neurons and on the nature of the synapses (inhibitory or excitatory). When the incoming signal exceeds a certain threshold, the cell body is activated and produces an electrical pulse, which travels along the axon and across synapses to other neurons.

[Figure: a biological neuron]
Don't worry if the biology above is unfamiliar. Scientists simplified this structure and, modeling it on the neuron, built the perceptron shown in the figure below — the early prototype of the neural network. Its formula is just a combination of linear terms; it is the simplest single-layer network, consisting of inputs, weights, and an output. Here $a_1, a_2, a_3$ are the inputs and $w_1, w_2, w_3$ the weights; after entering the node, the weighted inputs pass through an activation function to produce the output $z$:

$z = \sigma\left(\sum_{i=1}^{n} a_i w_i - \mu\right)$,

where $\mu$ is the threshold, also called the bias. In plain terms: the value $z$ we want is obtained by multiplying each incoming value $a_i$ by the weight on its arrow, summing everything up, and subtracting the threshold $\mu$. The $\sigma$ here works together with the threshold $\mu$: when the weighted sum exceeds $\mu$, the neuron is activated and $z = 1$; otherwise it is inhibited and $z = 0$.
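The thresholded neuron above can be sketched in a few lines of Python (a minimal illustration; the function name and the sample weights and threshold are ours, not from the text):

```python
def perceptron(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of the inputs exceeds the threshold."""
    s = sum(a * w for a, w in zip(inputs, weights))
    return 1 if s > threshold else 0

# three inputs a1..a3 with weights w1..w3 and threshold mu = 0.5
print(perceptron([1, 0, 1], [0.4, 0.9, 0.3], 0.5))  # 0.4 + 0.3 = 0.7 > 0.5 -> 1
print(perceptron([0, 0, 1], [0.4, 0.9, 0.3], 0.5))  # 0.3 < 0.5 -> 0
```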

[Figure: the perceptron model]
This is the most primitive neural network: the perceptron, invented by Rosenblatt back in 1957. Both later neural networks and support vector machines build on it. The basic principle of the perceptron is pointwise correction: first draw an arbitrary separating line in the plane and count the misclassified points; then pick one misclassified point at random and correct for it, i.e. move the line so that this point is classified correctly; then pick another misclassified point and correct again. The line keeps shifting until every point is classified correctly, at which point we have obtained our separating line.
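The pointwise-correction procedure just described can be sketched as follows (a toy implementation under our own assumptions: 2-D points, labels ±1, and the classic perceptron update that nudges the line toward one misclassified point at a time):

```python
def train_perceptron(points, labels, epochs=100):
    """Pointwise correction: shift the separating line until no point is misclassified."""
    w = [0.0, 0.0]   # normal vector of the separating line
    b = 0.0          # offset of the line
    for _ in range(epochs):
        errors = 0
        for (x1, x2), y in zip(points, labels):
            # a point is misclassified when y * (w . x + b) <= 0
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:
                # move the line so this point heads toward the correct side
                w[0] += y * x1
                w[1] += y * x2
                b += y
                errors += 1
        if errors == 0:   # every point classified correctly: done
            break
    return w, b

# a small linearly separable toy set
pts = [(2, 3), (3, 3), (-1, -1), (-2, -3)]
ys = [1, 1, -1, -1]
w, b = train_perceptron(pts, ys)
# the learned line classifies every training point correctly
assert all(y * (w[0] * p[0] + w[1] * p[1] + b) > 0 for p, y in zip(pts, ys))
```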
But in 1969 Marvin Minsky raised the "XOR" problem, which the perceptron cannot solve — the case where the data are not linearly separable — and the perceptron entered a winter of roughly ten years.
Then, in 1974, Werbos first proposed applying the BP algorithm to neural networks, that is, to the multilayer perceptron (MLP), also called an artificial neural network (ANN): a neural network with a single hidden layer. With the appearance of hidden layers, together with improved activation functions, the multilayer perceptron successfully solved the XOR problem.
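To see concretely how one hidden layer resolves XOR, here is a hand-built two-layer perceptron (the weights are a well-known textbook solution, not taken from this article): the hidden units compute OR and NAND, and the output unit ANDs them together.

```python
def step(x):
    """Threshold activation: 1 when the net input is positive, else 0."""
    return 1 if x > 0 else 0

def xor_mlp(a, b):
    h1 = step(a + b - 0.5)      # OR:   fires when at least one input is 1
    h2 = step(-a - b + 1.5)     # NAND: fires unless both inputs are 1
    return step(h1 + h2 - 1.5)  # AND of the two hidden units = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))  # reproduces the XOR truth table
```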
Later, as the theory of automatic chain-rule differentiation matured, Hinton — often called the father of neural networks — successfully used the BP (Back Propagation) algorithm to train neural networks.
To simplify networks and cut down the huge number of parameters that must be determined, in the 1990s Yann LeCun proposed the celebrated Convolutional Neural Network (CNN), giving neural networks local connectivity and weight sharing.
Then, in 2006, Hinton and his students published a paper proposing the deep belief network, formally opening the new era of deep learning. As CNNs, RNNs and ResNet appeared one after another, deep learning gained broad acceptance and became one of the hottest fields of the 21st century — especially after AlphaGo beat the world Go champion Lee Sedol, deep learning drew worldwide attention, and with it came the age of artificial intelligence. In 2019 the Turing Award, computing's Nobel prize, was given to the three pioneers of deep learning: Hinton, Yann LeCun and Yoshua Bengio, marking deep learning's full recognition by academia. That said, current research still treats the neural network largely as a black box: you feed in data and it hands back a result, but the why and how inside remain unknown. This is where today's controversy lies; one hopes future scientists will manage to open this mysterious Pandora's box.
For a relatively simple network, let me add one more informal picture to deepen the intuition. Look at the figure below: from bottom to top we have the input layer (the data), the hidden layer (which carries the extracted information), and the output layer (the result); the magic lives in the hidden layer. Think of the network as a company: the output layer at the top is senior management, the hidden layer is the middle managers, and the input layer is the staff. The staff collect data and report it up to the managers. Each manager, because of his role and preferences, favors different inputs: say the leftmost hidden unit responds strongly to the first and third staff members and is less sensitive to the second and fourth. When data from the first and third staff arrive, that manager immediately gets the message, digs out the information they contain, and passes it upward to management. Likewise, the first senior manager is most sensitive to the first and fourth middle managers, and makes his decision based on the information those two provide — that decision is the output. One thing to note: because the network is fully connected, the first manager is in fact connected to all the staff; he is simply more sensitive to (has stronger ties with) the first and third of them. Over time these relationships strengthen and the sensitivities sharpen. With this picture you have a rough understanding of how the fully connected hidden layer gathers information from the data and how the output layer decides based on it.

[Figure: a fully connected network with input, hidden, and output layers]

1.2 Theoretical Analysis

1.2.1 Basic structure of a BP neural network

The most basic neural network consists of an input layer, a hidden layer, and an output layer.
[Figure: basic BP network structure]

1.2.2 The heart of a BP network: backpropagation

The core of backpropagation is the chain rule for derivatives from calculus. Backpropagation has two main steps.

(1) Compute the total error:

$E_{total} = \sum \frac{1}{2}(target - output)^2$

(2) Update the hidden-to-output weights by differentiating the total error with respect to each weight via the chain rule. The computation is shown in the figure; taking $w_5$ as an example, we first compute $\frac{\partial E_{total}}{\partial w_5}$, then update $w_5$ by gradient descent (GD).

When the network has many layers, this would naively require an enormous number of derivative values; the error backpropagation algorithm replaces the complicated derivative computation with a recurrence. More precisely, the errors are computed, and the weights updated, layer by layer from the back toward the front.
[Figure: the worked backpropagation example]
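For the 2-2-2 network used in the code example of Section 2 (inputs 0.05 and 0.10, the initial weights from the figure, targets 0.01 and 0.99), the chain-rule computation for $w_5$ can be checked numerically. This is a sketch of that standard worked example; the variable names are ours:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# inputs, targets, and the initial weights/biases of the 2-2-2 example
i1, i2 = 0.05, 0.10
t1, t2 = 0.01, 0.99
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # input -> hidden
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # hidden -> output
b1, b2 = 0.35, 0.60

# forward pass
h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
h2 = sigmoid(w3 * i1 + w4 * i2 + b1)
o1 = sigmoid(w5 * h1 + w6 * h2 + b2)
o2 = sigmoid(w7 * h1 + w8 * h2 + b2)

# total error E_total = sum of 1/2 (target - output)^2
E_total = 0.5 * (t1 - o1) ** 2 + 0.5 * (t2 - o2) ** 2

# chain rule: dE/dw5 = dE/do1 * do1/dnet_o1 * dnet_o1/dw5
dE_dw5 = -(t1 - o1) * o1 * (1 - o1) * h1

# one gradient-descent step with learning rate 0.5
w5_new = w5 - 0.5 * dE_dw5

print(round(E_total, 9))  # ~ 0.298371109
print(round(dE_dw5, 9))   # ~ 0.082167041
print(round(w5_new, 9))   # ~ 0.358916480
```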

1.2.3 The overall BP procedure

Overview
A BP neural network is a nonlinear multi-layer feed-forward network with an added backward-propagation pass. The basic idea: after each forward pass, the model computes the error between the output layer and the target, multiplies it by the derivative of the activation function, and passes the result back to the hidden layer nearest the output; it then computes the error between that hidden layer and the layer before it, and so on, propagating backwards until the first hidden layer is reached. After each backward pass, the weight parameters are updated.

The concrete steps
(1) Initialize the parameters.
(2) Construct a loss function (e.g. ① cross-entropy or ② squared error).
(3) Forward pass → produces the prediction $\hat y$.
(4) Stopping conditions: ① $|\hat y - y|^2 < \epsilon$, or ② a maximum number of iterations.
(5) Backward pass driven by gradient descent (GD), whose purpose is to update the weights.
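The five steps can be seen end to end on the simplest possible model — fitting the $y = kx + b$ the article opened with, by gradient descent. This is our own minimal sketch, not code from the book:

```python
# fit y = kx + b to data generated from k = 2, b = 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

k, b = 0.0, 0.0          # (1) initialize the parameters
lr, eps = 0.02, 1e-10

for step in range(100000):                    # (4) stop on max iterations...
    # (3) forward pass: predictions y_hat
    preds = [k * x + b for x in xs]
    # (2) squared-error loss
    loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)
    if loss < eps:                            # ...or when the error is small enough
        break
    # (5) backward pass: gradients of the loss w.r.t. k and b, then the GD update
    dk = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
    db = sum(2 * (p - y) for p, y in zip(preds, ys)) / len(xs)
    k -= lr * dk
    b -= lr * db

print(round(k, 3), round(b, 3))  # converges to roughly 2.0 and 1.0
```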

2. Code Implementation

Here we set up a concrete example, implemented in plain, traditional code.
Suppose you have the following network: the first layer is the input layer, containing two neurons i1, i2 and a bias term b1; the second layer is the hidden layer, containing two neurons h1, h2 and a bias term b2; the third layer is the output, o1 and o2. Each connecting edge is labeled with a weight wi, and the activation function defaults to the sigmoid. We assign them initial values as in the figure below.
Goal: given the inputs i1, i2 (0.05 and 0.10), make the outputs as close as possible to the original outputs o1, o2 (0.01 and 0.99).
[Figure: the 2-2-2 example network with its initial weights]

```python
# !/usr/bin/env python
# @Time: 2022/3/26 15:54
# @Author: 华阳
# @File: ANN.py
# @Software: PyCharm
# Naming conventions:
# "pd_"  : prefix for partial derivatives
# "d_"   : prefix for derivatives
# "w_ho" : index of a hidden-to-output weight
# "w_ih" : index of an input-to-hidden weight
import math
import random
import matplotlib.pyplot as plt


class NeuralNetwork:
    LEARNING_RATE = 0.5

    def __init__(self, num_inputs, num_hidden, num_outputs, hidden_layer_weights=None, hidden_layer_bias=None,
                 output_layer_weights=None, output_layer_bias=None):
        self.num_inputs = num_inputs
        self.hidden_layer = NeuronLayer(num_hidden, hidden_layer_bias)
        self.output_layer = NeuronLayer(num_outputs, output_layer_bias)
        self.init_weights_from_inputs_to_hidden_layer_neurons(hidden_layer_weights)
        self.init_weights_from_hidden_layer_neurons_to_output_layer_neurons(output_layer_weights)

    def init_weights_from_inputs_to_hidden_layer_neurons(self, hidden_layer_weights):
        weight_num = 0
        for h in range(len(self.hidden_layer.neurons)):
            for i in range(self.num_inputs):
                if not hidden_layer_weights:
                    self.hidden_layer.neurons[h].weights.append(random.random())
                else:
                    self.hidden_layer.neurons[h].weights.append(hidden_layer_weights[weight_num])
                weight_num += 1

    def init_weights_from_hidden_layer_neurons_to_output_layer_neurons(self, output_layer_weights):
        weight_num = 0
        for o in range(len(self.output_layer.neurons)):
            for h in range(len(self.hidden_layer.neurons)):
                if not output_layer_weights:
                    self.output_layer.neurons[o].weights.append(random.random())
                else:
                    self.output_layer.neurons[o].weights.append(output_layer_weights[weight_num])
                weight_num += 1

    def inspect(self):
        print('------')
        print('* Inputs: {}'.format(self.num_inputs))
        print('------')
        print('Hidden Layer')
        self.hidden_layer.inspect()
        print('------')
        print('* Output Layer')
        self.output_layer.inspect()
        print('------')

    def feed_forward(self, inputs):
        hidden_layer_outputs = self.hidden_layer.feed_forward(inputs)
        return self.output_layer.feed_forward(hidden_layer_outputs)

    def train(self, training_inputs, training_outputs):
        self.feed_forward(training_inputs)

        # 1. output-neuron deltas: dE/dz for each output neuron
        pd_errors_wrt_output_neuron_total_net_input = [0] * len(self.output_layer.neurons)
        for o in range(len(self.output_layer.neurons)):
            pd_errors_wrt_output_neuron_total_net_input[o] = \
                self.output_layer.neurons[o].calculate_pd_error_wrt_total_net_input(training_outputs[o])

        # 2. hidden-neuron deltas: dE/dy = sum over outputs of dE/dz * dz/dy = dE/dz * w
        pd_errors_wrt_hidden_neuron_total_net_input = [0] * len(self.hidden_layer.neurons)
        for h in range(len(self.hidden_layer.neurons)):
            d_error_wrt_hidden_neuron_output = 0
            for o in range(len(self.output_layer.neurons)):
                d_error_wrt_hidden_neuron_output += \
                    pd_errors_wrt_output_neuron_total_net_input[o] * self.output_layer.neurons[o].weights[h]
            pd_errors_wrt_hidden_neuron_total_net_input[h] = d_error_wrt_hidden_neuron_output * \
                self.hidden_layer.neurons[h].calculate_pd_total_net_input_wrt_input()

        # 3. update hidden-to-output weights: dE/dw = dE/dz * dz/dw, then dw = alpha * dE/dw
        for o in range(len(self.output_layer.neurons)):
            for w_ho in range(len(self.output_layer.neurons[o].weights)):
                pd_error_wrt_weight = pd_errors_wrt_output_neuron_total_net_input[o] * \
                    self.output_layer.neurons[o].calculate_pd_total_net_input_wrt_weight(w_ho)
                self.output_layer.neurons[o].weights[w_ho] -= self.LEARNING_RATE * pd_error_wrt_weight

        # 4. update input-to-hidden weights the same way
        for h in range(len(self.hidden_layer.neurons)):
            for w_ih in range(len(self.hidden_layer.neurons[h].weights)):
                pd_error_wrt_weight = pd_errors_wrt_hidden_neuron_total_net_input[h] * \
                    self.hidden_layer.neurons[h].calculate_pd_total_net_input_wrt_weight(w_ih)
                self.hidden_layer.neurons[h].weights[w_ih] -= self.LEARNING_RATE * pd_error_wrt_weight

    def calculate_total_error(self, training_sets):
        total_error = 0
        for t in range(len(training_sets)):
            training_inputs, training_outputs = training_sets[t]
            self.feed_forward(training_inputs)
            for o in range(len(training_outputs)):
                total_error += self.output_layer.neurons[o].calculate_error(training_outputs[o])
        return total_error


class NeuronLayer:
    def __init__(self, num_neurons, bias):
        # neurons in the same layer share one bias term b
        self.bias = bias if bias else random.random()
        self.neurons = []
        for i in range(num_neurons):
            self.neurons.append(Neuron(self.bias))

    def inspect(self):
        print('Neurons:', len(self.neurons))
        for n in range(len(self.neurons)):
            print(' Neuron', n)
            for w in range(len(self.neurons[n].weights)):
                print('  Weight:', self.neurons[n].weights[w])
            print('  Bias:', self.bias)

    def feed_forward(self, inputs):
        outputs = []
        for neuron in self.neurons:
            outputs.append(neuron.calculate_output(inputs))
        return outputs

    def get_outputs(self):
        outputs = []
        for neuron in self.neurons:
            outputs.append(neuron.output)
        return outputs


class Neuron:
    def __init__(self, bias):
        self.bias = bias
        self.weights = []

    def calculate_output(self, inputs):
        self.inputs = inputs
        self.output = self.squash(self.calculate_total_net_input())
        return self.output

    def calculate_total_net_input(self):
        total = 0
        for i in range(len(self.inputs)):
            total += self.inputs[i] * self.weights[i]
        return total + self.bias

    # activation function: sigmoid
    def squash(self, total_net_input):
        return 1 / (1 + math.exp(-total_net_input))

    def calculate_pd_error_wrt_total_net_input(self, target_output):
        return self.calculate_pd_error_wrt_output(target_output) * self.calculate_pd_total_net_input_wrt_input()

    # each neuron's error is computed with the squared-error formula
    def calculate_error(self, target_output):
        return 0.5 * (target_output - self.output) ** 2

    def calculate_pd_error_wrt_output(self, target_output):
        return -(target_output - self.output)

    def calculate_pd_total_net_input_wrt_input(self):
        return self.output * (1 - self.output)

    def calculate_pd_total_net_input_wrt_weight(self, index):
        return self.inputs[index]


# the worked example from the text:
nn = NeuralNetwork(2, 2, 2, hidden_layer_weights=[0.15, 0.2, 0.25, 0.3], hidden_layer_bias=0.35,
                   output_layer_weights=[0.4, 0.45, 0.5, 0.55], output_layer_bias=0.6)
losses = []
for i in range(1000):
    nn.train([0.05, 0.1], [0.01, 0.09])
    losses.append(round(nn.calculate_total_error([[[0.05, 0.1], [0.01, 0.09]]]), 9))
plt.plot(losses)
plt.xlabel("train epoch")
plt.ylabel("train loss")
plt.show()
nn.inspect()
```

Output:

```
------
* Inputs: 2
------
Hidden Layer
Neurons: 2
 Neuron 0
  Weight: 0.2964604103620042
  Weight: 0.49292082072400834
  Bias: 0.35
 Neuron 1
  Weight: 0.39084333156627366
  Weight: 0.5816866631325477
  Bias: 0.35
------
* Output Layer
Neurons: 2
 Neuron 0
  Weight: -3.060957226462873
  Weight: -3.0308626603447846
  Bias: 0.6
 Neuron 1
  Weight: -2.393475400842236
  Weight: -2.3602088337272704
  Bias: 0.6
------
```

[Figure: the training loss curve]
Did the code above make you want to give up? Ha, no need for that. Doing everything by hand really is too hard, even for people working in AI, so the experts packaged the messy construction and training machinery into frameworks. The most common are PyTorch, PaddlePaddle, Keras and TensorFlow. Personally I find Keras the friendliest for newcomers; it is good for building your first networks, though for deeper work the other three are more suitable. Note that Keras needs another framework, such as TensorFlow, as its backend.

Press Win+R to open the Run dialog and type cmd.
[Figure: the Run dialog]
After pressing Enter, a command prompt window appears.
[Figure: the command prompt]
Run the commands below. First check whether your machine's GPU is an NVIDIA card; if it is AMD, install the CPU build instead, since CUDA kernels only support NVIDIA GPUs.

```shell
# AMD GPU: install the CPU-only build
pip install tensorflow-cpu
# NVIDIA GPU
pip install tensorflow
# with a backend in place, install keras
pip install keras

# if downloads are slow, add the Tsinghua mirror:
pip install tensorflow-cpu -i https://pypi.tuna.tsinghua.edu.cn/simple/
pip install tensorflow -i https://pypi.tuna.tsinghua.edu.cn/simple/
pip install keras -i https://pypi.tuna.tsinghua.edu.cn/simple/
```

Building an MLP with Keras for binary classification: predicting diabetes on the Pima Indians dataset

```python
# 2.1 imports
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense

# 2.2 prepare the data
df = pd.read_csv('pima_data.csv', header=None)
data = df.values
X = data[:, :-1]
y = data[:, -1]
scaler = MinMaxScaler()
scaler.fit(X)
X = scaler.transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=4)

# 2.3 build the network: sizes of the input, hidden and output layers, and their activations
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()

# 2.4 compile the model: loss function, optimizer, and evaluation metric
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# 2.5 train: number of epochs, batch size, whether to show progress
model.fit(X_train, y_train, epochs=100, batch_size=20, verbose=True)

# 2.6 evaluate
score = model.evaluate(X_test, y_test, verbose=False)
print("Accuracy: {:.2f}%".format(score[1] * 100))
```

Output:

```
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 12)                108
dense_1 (Dense)              (None, 8)                 104
dense_2 (Dense)              (None, 1)                 9
=================================================================
Total params: 221
Trainable params: 221
Non-trainable params: 0
_________________________________________________________________
Epoch 1/100
27/27 [==============================] - 1s 2ms/step - loss: 0.6956 - accuracy: 0.5102
Epoch 2/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6832 - accuracy: 0.6499
Epoch 3/100
27/27 [==============================] - 0s 2ms/step - loss: 0.6765 - accuracy: 0.6499
...
Epoch 99/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4435 - accuracy: 0.8026
Epoch 100/100
27/27 [==============================] - 0s 2ms/step - loss: 0.4434 - accuracy: 0.8026
Accuracy: 75.76%

Process finished with exit code 0
```

Reposted from: https://blog.csdn.net/qq_26274961/article/details/123755200
Copyright belongs to the original author 啥都不懂的小程序猿; if there is any infringement, please contact us for removal.
