

[Artificial Intelligence Course] Computer Science PhD Assignment 2

This assignment uses TensorFlow 1.x to implement a hand-gesture (sign) recognition task, then improves it with image augmentation. The baseline reaches a training accuracy of 0.92 and a test accuracy of 0.77; with augmentation, training accuracy rises to 0.97 and test accuracy to 0.88.

1 Import Packages

import math
import warnings
warnings.filterwarnings("ignore")

import numpy as np
import h5py
import matplotlib.pyplot as plt

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

%matplotlib inline
np.random.seed(1)

2 Load the Dataset

def load_dataset():
    train_dataset = h5py.File('datasets/train_signs.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # training set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # training set labels

    test_dataset = h5py.File('datasets/test_signs.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # test set labels
    classes = np.array(test_dataset["list_classes"][:])  # list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes

# Convert labels to one-hot encoding
def convert_to_one_hot(Y, C):
    Y = np.eye(C)[Y.reshape(-1)].T
    return Y

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
index = 6
plt.imshow(X_train_orig[index])
print("y = " + str(np.squeeze(Y_train_orig[:, index])))

(figure: sample training image at index 6)

X_train = X_train_orig / 255.
X_test = X_test_orig / 255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print("number of training examples = " + str(X_train.shape[0]))
print("number of test examples = " + str(X_test.shape[0]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))
conv_layers = {}

number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
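The `convert_to_one_hot` helper can be sanity-checked in isolation with plain NumPy (the function is repeated here so the snippet runs on its own):

```python
import numpy as np

def convert_to_one_hot(Y, C):
    # Each label indexes a row of the C x C identity matrix;
    # transposing gives shape (C, number of examples).
    return np.eye(C)[Y.reshape(-1)].T

labels = np.array([[0, 2, 5, 1]])      # shape (1, 4), like Y_train_orig
one_hot = convert_to_one_hot(labels, 6)
print(one_hot.shape)   # (6, 4): one column per example
```

Note that the training pipeline then transposes the result (`.T`), so each row of `Y_train` is one one-hot vector.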

3 Create Placeholders

① TensorFlow requires you to create placeholders for the input data that will be fed into the model when a session runs.

② We now implement the function that creates them. Because training uses mini-batches, the number of input examples may vary, so the batch dimension is set to None.

def create_placeholders(n_H0, n_W0, n_C0, n_y):
    """
    Creates the placeholders for the session.

    Arguments:
        n_H0 - scalar, height of an input image
        n_W0 - scalar, width of an input image
        n_C0 - scalar, number of input channels
        n_y  - scalar, number of classes

    Returns:
        X - placeholder for the input data, of shape [None, n_H0, n_W0, n_C0], dtype "float"
        Y - placeholder for the input labels, of shape [None, n_y], dtype "float"
    """
    X = tf.placeholder(tf.float32, [None, n_H0, n_W0, n_C0])
    Y = tf.placeholder(tf.float32, [None, n_y])
    return X, Y

# Test
X, Y = create_placeholders(64, 64, 3, 6)
print("X = " + str(X))
print("Y = " + str(Y))

X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)

4 Initialize Parameters

① We will use tf.contrib.layers.xavier_initializer(seed = 0) to initialize the filter banks W1 and W2. (The code below uses tf.keras.initializers.glorot_normal(), the Glorot/Xavier equivalent available through the compat API.)

② There is no need to handle biases here; TensorFlow takes care of them.
③ We only initialize the filters of the 2D convolution layers; TensorFlow initializes the fully connected layer automatically.

def initialize_parameters():
    '''
    Filter bank of the first convolution layer:  W1
    Filter bank of the second convolution layer: W2
    '''
    # Xavier (Glorot) normal initialization
    initializer = tf.keras.initializers.glorot_normal()
    W1 = tf.compat.v1.Variable(initializer([4, 4, 3, 8]))
    W2 = tf.compat.v1.Variable(initializer([2, 2, 8, 16]))

    parameters = {'W1': W1, 'W2': W2}
    return parameters

# Test
tf.reset_default_graph()
with tf.Session() as sess_test:
    parameters = initialize_parameters()
    init = tf.global_variables_initializer()
    sess_test.run(init)
    print("W1 = " + str(parameters["W1"].eval()[1, 1, 1]))
    print("W2 = " + str(parameters["W2"].eval()[1, 1, 1]))
    sess_test.close()
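For reference, Glorot (Xavier) normal initialization draws weights with standard deviation sqrt(2 / (fan_in + fan_out)); for a conv filter of shape [h, w, in_c, out_c], fan_in = h·w·in_c and fan_out = h·w·out_c. A quick sketch of the standard deviations this gives for W1 and W2 (illustrative arithmetic only, not part of the original code):

```python
import math

def glorot_std(h, w, in_c, out_c):
    # Glorot normal: std = sqrt(2 / (fan_in + fan_out))
    fan_in, fan_out = h * w * in_c, h * w * out_c
    return math.sqrt(2.0 / (fan_in + fan_out))

print(round(glorot_std(4, 4, 3, 8), 4))   # W1: sqrt(2/176) ≈ 0.1066
print(round(glorot_std(2, 2, 8, 16), 4))  # W2: sqrt(2/96)  ≈ 0.1443
```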

5 Forward Propagation

① TensorFlow provides several functions we can use directly:

  • tf.nn.conv2d(X, W1, strides=[1,s,s,1], padding='SAME'): given an input X and a filter bank W1, this function convolves X with W1. The strides argument [1,s,s,1] gives the step along each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev).
  • tf.nn.max_pool(A, ksize=[1,f,f,1], strides=[1,s,s,1], padding='SAME'): given an input A, this function slides a window of size (f,f) with stride (s,s) over it and takes the maximum in each window.
  • tf.nn.relu(Z1): computes the elementwise ReLU activation of Z1.
  • tf.contrib.layers.flatten(P): given an input P, this function flattens each example into a 1-D vector and returns a tensor of shape (batch_size, k).
  • tf.contrib.layers.fully_connected(F, num_outputs): given a flattened input F, this function returns the output of a fully connected layer.

② When tf.contrib.layers.fully_connected(F, num_outputs) is used, the fully connected layer initializes its own weights and trains them together with the rest of the model, so there is no need to initialize them separately.
① When implementing forward propagation, first sketch the overall model:

  • CONV2D→RELU→MAXPOOL→CONV2D→RELU→MAXPOOL→FULLYCONNECTED

② The concrete implementation uses the following steps and parameters:

  • Conv2d: stride 1, padding "SAME"
  • ReLU
  • Max pool: filter size 8x8, stride 8x8, padding "SAME"
  • Conv2d: stride 1, padding "SAME"
  • ReLU
  • Max pool: filter size 4x4, stride 4x4, padding "SAME"
  • Flatten the output of the previous layer
  • Fully connected layer (FC): a fully connected layer with no nonlinear activation. Do not apply softmax here: the output layer has 6 neurons, and their logits are passed to softmax later. In TensorFlow, softmax and the cost are combined into a single function, which is called when computing the cost.
def forward_propagation(X, parameters):
    '''
    CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
    '''
    W1, W2 = parameters['W1'], parameters['W2']
    # SAME convolution
    Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding="SAME")
    # ReLU activation
    A1 = tf.nn.relu(Z1)
    # Max pooling
    P1 = tf.nn.max_pool(A1, ksize=[1, 8, 8, 1], strides=[1, 8, 8, 1], padding="SAME")
    # Second SAME convolution
    Z2 = tf.nn.conv2d(P1, W2, strides=[1, 1, 1, 1], padding="SAME")
    # ReLU activation
    A2 = tf.nn.relu(Z2)
    # Max pooling
    P2 = tf.nn.max_pool(A2, ksize=[1, 4, 4, 1], strides=[1, 4, 4, 1], padding="SAME")
    # Flatten the convolutional output
    P = tf.compat.v1.layers.flatten(P2)
    # Fully connected layer (no activation; softmax is applied inside the cost)
    Z3 = tf.compat.v1.layers.dense(P, 6)
    return Z3

print("===== Test =====")
tf.reset_default_graph()
np.random.seed(1)
with tf.Session() as sess_test:
    X, Y = create_placeholders(64, 64, 3, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)

    init = tf.global_variables_initializer()
    sess_test.run(init)

    a = sess_test.run(Z3, {X: np.random.randn(2, 64, 64, 3), Y: np.random.randn(2, 6)})
    print("Z3 = " + str(a))

    sess_test.close()
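With SAME padding, a stride-1 convolution preserves the spatial size and a pool of stride s shrinks each dimension to ceil(n/s). Tracing the shapes through this network confirms that the flattened vector fed to the dense layer has 2·2·16 = 64 features (a quick sanity check, not part of the original code):

```python
import math

def same_out(n, stride):
    # SAME padding: output size is ceil(n / stride)
    return math.ceil(n / stride)

h = w = 64
h, w = same_out(h, 1), same_out(w, 1)   # conv1, stride 1 -> 64x64x8
h, w = same_out(h, 8), same_out(w, 8)   # maxpool 8x8, stride 8 -> 8x8x8
h, w = same_out(h, 1), same_out(w, 1)   # conv2, stride 1 -> 8x8x16
h, w = same_out(h, 4), same_out(w, 4)   # maxpool 4x4, stride 4 -> 2x2x16
flat = h * w * 16                        # features entering the dense layer
print(h, w, flat)                        # 2 2 64
```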

6 Define the Cost Function

def compute_cost(Z3, Y):
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3, labels=Y))
    return cost
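tf.nn.softmax_cross_entropy_with_logits fuses the softmax and the cross-entropy for numerical stability. For intuition, here is a NumPy equivalent of what it computes per example (an illustrative sketch, not the TensorFlow implementation):

```python
import numpy as np

def softmax_xent(logits, labels):
    # Numerically stable log-softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Cross-entropy: negative log-probability of the true class.
    return -(labels * log_probs).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])
loss = softmax_xent(logits, labels).mean()
print(round(loss, 4))   # ≈ 0.417
```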

def random_mini_batches(X, Y, mini_batch_size=64):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (m, Hi, Wi, Ci)
    Y -- true "label" matrix (one-hot), of shape (m, n_y)
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
    m = X.shape[0]  # number of training examples
    mini_batches = []

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[permutation, :, :, :]
    shuffled_Y = Y[permutation, :]

    # Step 2: Partition (shuffled_X, shuffled_Y), minus the end case.
    num_complete_minibatches = math.floor(m / mini_batch_size)  # number of full mini-batches
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[k * mini_batch_size : (k + 1) * mini_batch_size, :, :, :]
        mini_batch_Y = shuffled_Y[k * mini_batch_size : (k + 1) * mini_batch_size, :]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    # Handle the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[num_complete_minibatches * mini_batch_size : m, :, :, :]
        mini_batch_Y = shuffled_Y[num_complete_minibatches * mini_batch_size : m, :]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    return mini_batches

7 Model Training

7.1 Method 1

def model(X_train, Y_train, X_test, Y_test, learning_rate=0.009, epochs=100, mini_batch_size=64):
    tf.random.set_random_seed(1)
    # Get the input dimensions
    m, n_h0, n_w0, n_c0 = X_train.shape
    # Number of classes
    C = Y_train.shape[1]

    costs = []
    # Create placeholders for the input and output
    X, Y = create_placeholders(n_h0, n_w0, n_c0, C)
    # Initialize the filter variables
    parameters = initialize_parameters()
    # Forward propagation
    Z3 = forward_propagation(X, parameters)

    cost = compute_cost(Z3, Y)
    # Create the optimizer (the gradient-descent step)
    optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    # Initialize all variables
    init = tf.compat.v1.global_variables_initializer()

    with tf.compat.v1.Session() as sess:
        sess.run(init)
        for epoch in range(epochs):
            epoch_cost = 0
            mini_batch_num = m // mini_batch_size
            mini_batchs = random_mini_batches(X_train, Y_train, mini_batch_size)
            for mini in mini_batchs:
                (mini_x, mini_y) = mini
                # Run the optimizer / gradient descent
                _, mini_batch_cost = sess.run([optimizer, cost], feed_dict={X: mini_x, Y: mini_y})
                epoch_cost = epoch_cost + mini_batch_cost / mini_batch_num

            if epoch % 5 == 0:
                costs.append(epoch_cost)
                print("Epoch " + str(epoch) + ", cost: " + str(epoch_cost))

        plt.plot(costs)
        plt.ylabel('cost')
        plt.xlabel('epoch')
        plt.show()

        # Save the parameters from the session
        parameters = sess.run(parameters)
        # Mark the correctly predicted examples
        correct_prediction = tf.equal(tf.argmax(Z3, axis=1), tf.argmax(Y, axis=1))
        accuracy = tf.compat.v1.reduce_mean(tf.cast(correct_prediction, "float"))
        print("Training accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
        print("Test accuracy:", accuracy.eval({X: X_test, Y: Y_test}))

    return parameters
import time
start_time = time.perf_counter()
parameters = model(X_train, Y_train, X_test, Y_test, learning_rate=0.007, epochs=100, mini_batch_size=64)
end_time = time.perf_counter()
print("CPU execution time = " + str(end_time - start_time) + " seconds")

epoch 100

Training accuracy: 0.92314816

Test accuracy: 0.775

CPU execution time = 56.44441370000004 seconds
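The accuracy reported here is computed by comparing the argmax of the logits against the argmax of the one-hot labels, as in the model function. A NumPy illustration with made-up logits (not the actual model outputs):

```python
import numpy as np

# Three examples, six classes: logits and one-hot labels.
Z3 = np.array([[2.0, 0.1, 0.1, 0.1, 0.1, 0.1],
               [0.1, 0.1, 3.0, 0.1, 0.1, 0.1],
               [0.5, 0.1, 0.1, 0.1, 0.1, 0.4]])
Y = np.eye(6)[[0, 2, 5]]

# A prediction is correct when the argmaxes agree.
correct = (Z3.argmax(axis=1) == Y.argmax(axis=1))
print(correct.mean())   # 2 of 3 correct -> 0.666...
```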
(figure: cost curve for Method 1)

7.2 Method 2

The improvement uses image augmentation: random horizontal flips, random brightness adjustment, and random contrast adjustment. Random flips increase the diversity of the data, while random brightness and contrast changes make the model more robust.
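The brightness and contrast adjustments used below have simple pixel-level definitions. Here is a NumPy sketch of the underlying math (tf.image.adjust_brightness adds a delta; tf.image.adjust_contrast scales each channel around its mean; the clipping to [0, 1] is added here for display and is not part of tf.image's float behavior):

```python
import numpy as np

def adjust_brightness(img, delta):
    # Brightness: add the same delta to every pixel.
    return np.clip(img + delta, 0.0, 1.0)

def adjust_contrast(img, factor):
    # Contrast: scale each channel around its per-channel mean,
    # i.e. (x - mean) * factor + mean.
    mean = img.mean(axis=(0, 1), keepdims=True)
    return np.clip((img - mean) * factor + mean, 0.0, 1.0)

img = np.full((2, 2, 3), 0.5)
print(adjust_brightness(img, 0.2)[0, 0, 0])   # 0.7
print(adjust_contrast(img, 1.2)[0, 0, 0])     # 0.5: a uniform image is unchanged
```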

# Define the model function with augmentation
def model_aug(X_train, Y_train, X_test, Y_test, learning_rate=0.009, epochs=100, mini_batch_size=64):
    tf.random.set_random_seed(1)
    # Get the input dimensions
    m, n_h0, n_w0, n_c0 = X_train.shape
    # Number of classes
    C = Y_train.shape[1]

    costs = []
    # Create placeholders for the input and output
    X, Y = create_placeholders(n_h0, n_w0, n_c0, C)
    # Initialize the filter variables
    parameters = initialize_parameters()
    # Forward propagation
    Z3 = forward_propagation(X, parameters)
    cost = compute_cost(Z3, Y)
    # Create the optimizer (the gradient-descent step)
    optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

    '''
    Improvement: image augmentation
    '''
    def image_augmentation(image, label):
        image = tf.image.random_flip_left_right(image)                  # random horizontal flip
        image = tf.image.random_brightness(image, max_delta=0.2)        # random brightness
        image = tf.image.random_contrast(image, lower=0.8, upper=1.2)   # random contrast
        return image, label

    # Build a dataset, apply the augmentation, and batch the data
    dataset = tf.data.Dataset.from_tensor_slices((X_train, Y_train))
    dataset = dataset.map(image_augmentation)
    dataset = dataset.batch(mini_batch_size).repeat()

    # Define the iterator
    iterator = dataset.make_initializable_iterator()
    next_batch = iterator.get_next()

    # Initialize the iterator inside the session
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(iterator.initializer)
        for epoch in range(epochs):
            epoch_cost = 0.
            mini_batch_num = m // mini_batch_size
            # Use `next_batch` instead of the original random_mini_batches
            for _ in range(mini_batch_num):
                mini_x, mini_y = sess.run(next_batch)
                _, mini_batch_cost = sess.run([optimizer, cost], feed_dict={X: mini_x, Y: mini_y})
                epoch_cost += mini_batch_cost / mini_batch_num
            if epoch % 5 == 0:
                costs.append(epoch_cost)
                print("Epoch " + str(epoch) + ", cost: " + str(epoch_cost))

        plt.plot(costs)
        plt.ylabel('cost')
        plt.xlabel('epoch')
        plt.show()

        # Save the parameters from the session
        parameters = sess.run(parameters)
        # Mark the correctly predicted examples
        correct_prediction = tf.equal(tf.argmax(Z3, axis=1), tf.argmax(Y, axis=1))
        accuracy = tf.compat.v1.reduce_mean(tf.cast(correct_prediction, "float"))
        print("Training accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
        print("Test accuracy:", accuracy.eval({X: X_test, Y: Y_test}))

    return parameters
import time
start_time = time.perf_counter()
parameters = model_aug(X_train, Y_train, X_test, Y_test, learning_rate=0.007, epochs=200, mini_batch_size=64)
end_time = time.perf_counter()
print("CPU execution time = " + str(end_time - start_time) + " seconds")

epoch 200
Training accuracy: 0.97037035
Test accuracy: 0.8833333
CPU execution time = 84.42098270000002 seconds
(figure: cost curve for Method 2)


Reposted from: https://blog.csdn.net/weixin_43935696/article/details/135931574
Copyright belongs to the original author, Better Bench. In case of infringement, please contact us for removal.
