Gaussian Naive Bayes Classification: Explanation of the Principles and a From-Scratch Implementation

Gaussian Naive Bayes (GNB) is a machine learning classification technique based on probabilistic methods and the Gaussian distribution. Naive Bayes assumes that each parameter (also called a feature or predictor) has an independent capacity to predict the output variable. The combination of the predictions from all parameters yields the final prediction: the model returns the probability that the dependent variable belongs to each group, and the final classification is assigned to the group (class) with the highest probability.

What Is the Gaussian Distribution?

The Gaussian distribution, also known as the normal distribution, is a statistical model that describes the distribution of continuous random variables in nature. The normal distribution is defined by its bell-shaped curve, and its two most important characteristics are the mean (μ) and the standard deviation (σ). The mean is the average value of the distribution, and the standard deviation measures the "width" of the distribution around the mean.

It is important to know that a normally distributed variable X ranges continuously over −∞ < X < +∞ (it is a continuous variable), and that the total area under the model's curve is 1.
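These two properties are easy to verify numerically. The standalone sketch below (not part of the article's pipeline) evaluates the standard normal density on a wide grid and sums it; since almost all of the mass lies within a few standard deviations of the mean, the finite grid captures essentially the whole area:

```python
import numpy as np

def normal_pdf(x, mean, sd):
    # Density of the normal distribution with the given mean and standard deviation
    return (1 / (sd * np.sqrt(2 * np.pi))) * np.exp(-0.5 * ((x - mean) / sd) ** 2)

# The variable extends over -inf < X < +inf, but almost all of the mass
# lies within a few standard deviations, so a wide finite grid suffices
x = np.linspace(-10, 10, 100_001)
dx = x[1] - x[0]
area = np.sum(normal_pdf(x, 0.0, 1.0)) * dx
print(area)  # very close to 1
```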

Gaussian Naive Bayes for Multi-Class Classification

Import the necessary libraries:

  from random import random
  from random import randint
  import pandas as pd
  import numpy as np
  import seaborn as sns
  import matplotlib.pyplot as plt
  import statistics
  from sklearn.model_selection import train_test_split
  from sklearn.preprocessing import StandardScaler
  from sklearn.naive_bayes import GaussianNB
  from sklearn.metrics import confusion_matrix
  from mlxtend.plotting import plot_decision_regions

Now create a dataset in which the predictor variables are normally distributed.

  #Creating values for FeNO with 3 classes:
  FeNO_0 = np.random.normal(20, 19, 200)
  FeNO_1 = np.random.normal(40, 20, 200)
  FeNO_2 = np.random.normal(60, 20, 200)
  #Creating values for FEV1 with 3 classes:
  FEV1_0 = np.random.normal(4.65, 1, 200)
  FEV1_1 = np.random.normal(3.75, 1.2, 200)
  FEV1_2 = np.random.normal(2.85, 1.2, 200)
  #Creating values for Broncho Dilation with 3 classes:
  BD_0 = np.random.normal(150, 49, 200)
  BD_1 = np.random.normal(201, 50, 200)
  BD_2 = np.random.normal(251, 50, 200)
  #Creating labels variable with three classes: (2)disease (1)possible disease (0)no disease:
  not_asthma = np.zeros((200,), dtype=int)
  poss_asthma = np.ones((200,), dtype=int)
  asthma = np.full((200,), 2, dtype=int)
  #Concatenate classes into one variable:
  FeNO = np.concatenate([FeNO_0, FeNO_1, FeNO_2])
  FEV1 = np.concatenate([FEV1_0, FEV1_1, FEV1_2])
  BD = np.concatenate([BD_0, BD_1, BD_2])
  dx = np.concatenate([not_asthma, poss_asthma, asthma])
  #Create DataFrame:
  df = pd.DataFrame()
  #Add variables to DataFrame:
  df['FeNO'] = FeNO.tolist()
  df['FEV1'] = FEV1.tolist()
  df['BD'] = BD.tolist()
  df['dx'] = dx.tolist()
  #Check database:
  df

Our df has 600 rows and 4 columns. Now we can visually inspect the distributions of the variables:

  fig, axs = plt.subplots(2, 3, figsize=(14, 7))
  sns.kdeplot(df['FEV1'], shade=True, color="b", ax=axs[0, 0])
  sns.kdeplot(df['FeNO'], shade=True, color="b", ax=axs[0, 1])
  sns.kdeplot(df['BD'], shade=True, color="b", ax=axs[0, 2])
  sns.distplot(a=df["FEV1"], hist=True, kde=True, rug=False, ax=axs[1, 0])
  sns.distplot(a=df["FeNO"], hist=True, kde=True, rug=False, ax=axs[1, 1])
  sns.distplot(a=df["BD"], hist=True, kde=True, rug=False, ax=axs[1, 2])
  plt.show()

On visual inspection, the data appear close to a Gaussian distribution. We can double-check with Q-Q plots:

  from statsmodels.graphics.gofplots import qqplot
  from matplotlib import pyplot
  #q-q plot:
  fig, axs = pyplot.subplots(1, 3, figsize=(15, 5))
  qqplot(df['FEV1'], line='s', ax=axs[0])
  qqplot(df['FeNO'], line='s', ax=axs[1])
  qqplot(df['BD'], line='s', ax=axs[2])
  pyplot.show()

Although not perfectly normal, the distributions are very close. Next, let's explore the dataset and the correlations between variables:

  #Exploring dataset:
  sns.pairplot(df, kind="scatter", hue="dx")
  plt.show()

We can use box plots to examine the distributions of the three groups and see which features best separate the classes:

  # plotting both distributions on the same figure
  fig, axs = plt.subplots(2, 3, figsize=(14, 7))
  fig = sns.kdeplot(df['FEV1'], hue=df['dx'], shade=True, color="r", ax=axs[0, 0])
  fig = sns.kdeplot(df['FeNO'], hue=df['dx'], shade=True, color="r", ax=axs[0, 1])
  fig = sns.kdeplot(df['BD'], hue=df['dx'], shade=True, color="r", ax=axs[0, 2])
  sns.boxplot(x=df["dx"], y=df["FEV1"], palette='magma', ax=axs[1, 0])
  sns.boxplot(x=df["dx"], y=df["FeNO"], palette='magma', ax=axs[1, 1])
  sns.boxplot(x=df["dx"], y=df["BD"], palette='magma', ax=axs[1, 2])
  plt.show()

Implementing Naive Bayes Classification by Hand

Writing the code ourselves is not about reinventing the wheel, but about understanding the algorithm better. Before performing Bayesian classification, we first need to understand the normal distribution.

The mathematical formula of the normal distribution defines the probability density of an observation within a population:

f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²))

We can create a function to compute this probability density:

  def normal_dist(x, mean, sd):
      # Density of a normal distribution with the given mean and standard deviation
      prob_density = (1 / (sd * np.sqrt(2 * np.pi))) * np.exp(-0.5 * ((x - mean) / sd) ** 2)
      return prob_density

Knowing the normal distribution formula, we can compute the probability of a sample under each of the three groups (classes). First, we need the mean and standard deviation of every predictor feature in every group:

  #Group 0:
  group_0 = df[df['dx'] == 0]
  print('Mean FEV1 group 0: ', statistics.mean(group_0['FEV1']))
  print('SD FEV1 group 0: ', statistics.stdev(group_0['FEV1']))
  print('Mean FeNO group 0: ', statistics.mean(group_0['FeNO']))
  print('SD FeNO group 0: ', statistics.stdev(group_0['FeNO']))
  print('Mean BD group 0: ', statistics.mean(group_0['BD']))
  print('SD BD group 0: ', statistics.stdev(group_0['BD']))
  #Group 1:
  group_1 = df[df['dx'] == 1]
  print('Mean FEV1 group 1: ', statistics.mean(group_1['FEV1']))
  print('SD FEV1 group 1: ', statistics.stdev(group_1['FEV1']))
  print('Mean FeNO group 1: ', statistics.mean(group_1['FeNO']))
  print('SD FeNO group 1: ', statistics.stdev(group_1['FeNO']))
  print('Mean BD group 1: ', statistics.mean(group_1['BD']))
  print('SD BD group 1: ', statistics.stdev(group_1['BD']))
  #Group 2:
  group_2 = df[df['dx'] == 2]
  print('Mean FEV1 group 2: ', statistics.mean(group_2['FEV1']))
  print('SD FEV1 group 2: ', statistics.stdev(group_2['FEV1']))
  print('Mean FeNO group 2: ', statistics.mean(group_2['FeNO']))
  print('SD FeNO group 2: ', statistics.stdev(group_2['FeNO']))
  print('Mean BD group 2: ', statistics.mean(group_2['BD']))
  print('SD BD group 2: ', statistics.stdev(group_2['BD']))
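The same per-group statistics can be computed far more compactly with a pandas groupby. The sketch below builds a small synthetic stand-in for the article's df (same column names) so it runs on its own; note that both `statistics.stdev` and pandas' `std` use the sample standard deviation (n−1 denominator), so the results match the prints above:

```python
import numpy as np
import pandas as pd

# Small synthetic stand-in for the article's df (same column names),
# so that this sketch is runnable on its own
rng = np.random.default_rng(1)
df = pd.DataFrame({
    'FEV1': rng.normal(4, 1, 60),
    'FeNO': rng.normal(40, 20, 60),
    'BD': rng.normal(200, 50, 60),
    'dx': np.repeat([0, 1, 2], 20),
})

# Mean and sample standard deviation of every feature in every group, in one call
summary = df.groupby('dx')[['FEV1', 'FeNO', 'BD']].agg(['mean', 'std'])
print(summary)
```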

Now, test with a random sample: FEV1 = 2.75, FeNO = 27, BD = 125.

  #Probability for:
  #FEV1 = 2.75
  #FeNO = 27
  #BD = 125
  #We have the same number of observations, so the general probability is: 0.33
  Prob_geral = round(0.333, 3)
  #Prob FEV1:
  Prob_FEV1_0 = round(normal_dist(2.75, 4.70, 1.08), 10)
  print('Prob FEV1 0: ', Prob_FEV1_0)
  Prob_FEV1_1 = round(normal_dist(2.75, 3.70, 1.13), 10)
  print('Prob FEV1 1: ', Prob_FEV1_1)
  Prob_FEV1_2 = round(normal_dist(2.75, 3.01, 1.22), 10)
  print('Prob FEV1 2: ', Prob_FEV1_2)
  #Prob FeNO:
  Prob_FeNO_0 = round(normal_dist(27, 19.71, 19.29), 10)
  print('Prob FeNO 0: ', Prob_FeNO_0)
  Prob_FeNO_1 = round(normal_dist(27, 42.34, 19.85), 10)
  print('Prob FeNO 1: ', Prob_FeNO_1)
  Prob_FeNO_2 = round(normal_dist(27, 61.78, 21.39), 10)
  print('Prob FeNO 2: ', Prob_FeNO_2)
  #Prob BD:
  Prob_BD_0 = round(normal_dist(125, 152.59, 50.33), 10)
  print('Prob BD 0: ', Prob_BD_0)
  Prob_BD_1 = round(normal_dist(125, 199.14, 50.81), 10)
  print('Prob BD 1: ', Prob_BD_1)
  Prob_BD_2 = round(normal_dist(125, 256.13, 47.04), 10)
  print('Prob BD 2: ', Prob_BD_2)
  #Compute probability:
  Prob_group_0 = Prob_geral*Prob_FEV1_0*Prob_FeNO_0*Prob_BD_0
  print('Prob group 0: ', Prob_group_0)
  Prob_group_1 = Prob_geral*Prob_FEV1_1*Prob_FeNO_1*Prob_BD_1
  print('Prob group 1: ', Prob_group_1)
  Prob_group_2 = Prob_geral*Prob_FEV1_2*Prob_FeNO_2*Prob_BD_2
  print('Prob group 2: ', Prob_group_2)

As we can see, this sample has the highest probability of belonging to group 2. That is the manual workflow of Naive Bayes, but for this mature algorithm we can use the more efficient implementation from Scikit-Learn.
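Note that the three products computed above are joint scores (prior × likelihoods), not yet probabilities. Bayes' rule turns them into posterior probabilities that sum to 1 by dividing each by their sum (the evidence). The sketch below uses made-up joint scores for illustration, not the exact values printed above:

```python
# Hypothetical unnormalized joint scores for the three groups
# (prior * product of likelihoods); substitute the Prob_group_* values
joint_scores = [2.0e-7, 1.5e-6, 4.0e-6]

# Bayes' rule: posterior = joint score / evidence,
# where the evidence is the sum of the joint scores over all classes
evidence = sum(joint_scores)
posteriors = [s / evidence for s in joint_scores]
print(posteriors)                         # now sums to 1
print(posteriors.index(max(posteriors)))  # predicted class: 2
```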

A Classifier Example with Scikit-Learn

Scikit-Learn's GaussianNB gives us a much more efficient approach, so next we run a complete classification example with GaussianNB. First create the X and y variables and perform the train/test split:

  #Creating X and y:
  X = df.drop('dx', axis=1)
  y = df['dx']
  #Data split into train and test:
  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)

Before feeding the data into the model, we also need to standardize it with StandardScaler:

  sc = StandardScaler()
  X_train = sc.fit_transform(X_train)
  X_test = sc.transform(X_test)

Now build and evaluate the model:

  #Build the model:
  classifier = GaussianNB()
  classifier.fit(X_train, y_train)
  #Evaluate the model:
  print("training set score: %f" % classifier.score(X_train, y_train))
  print("test set score: %f" % classifier.score(X_test, y_test))
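GaussianNB also exposes the per-class posterior probabilities through `predict_proba`, which mirrors the manual calculation from the previous section. A minimal self-contained sketch on synthetic data (not the article's df):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Two well-separated synthetic classes in two dimensions
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = GaussianNB().fit(X, y)
# Per-class posterior probabilities for a point near the class-1 mean
proba = clf.predict_proba([[5.0, 5.0]])[0]
print(proba)  # strongly favors class 1
```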

Next, visualize the results with a confusion matrix:

  # Predicting the Test set results
  y_pred = classifier.predict(X_test)
  #Confusion Matrix:
  cm = confusion_matrix(y_test, y_pred)
  print(cm)

The confusion matrix shows that our model predicts class 0 best, but the error rates for classes 1 and 2 are high. To investigate, we build decision-boundary plots for pairs of variables:

  df.to_csv('data.csv', index=False)
  data = pd.read_csv('data.csv')

  def gaussian_nb_a(data):
      x = data[['BD', 'FeNO']].values
      y = data['dx'].astype(int).values
      Gauss_nb = GaussianNB()
      Gauss_nb.fit(x, y)
      print(Gauss_nb.score(x, y))
      #Plot decision region:
      plot_decision_regions(x, y, clf=Gauss_nb, legend=1)
      #Adding axes annotations:
      plt.xlabel('BD')
      plt.ylabel('FeNO')
      plt.title('Gaussian Naive Bayes')
      plt.show()

  def gaussian_nb_b(data):
      x = data[['BD', 'FEV1']].values
      y = data['dx'].astype(int).values
      Gauss_nb = GaussianNB()
      Gauss_nb.fit(x, y)
      print(Gauss_nb.score(x, y))
      #Plot decision region:
      plot_decision_regions(x, y, clf=Gauss_nb, legend=1)
      #Adding axes annotations:
      plt.xlabel('BD')
      plt.ylabel('FEV1')
      plt.title('Gaussian Naive Bayes')
      plt.show()

  def gaussian_nb_c(data):
      x = data[['FEV1', 'FeNO']].values
      y = data['dx'].astype(int).values
      Gauss_nb = GaussianNB()
      Gauss_nb.fit(x, y)
      print(Gauss_nb.score(x, y))
      #Plot decision region:
      plot_decision_regions(x, y, clf=Gauss_nb, legend=1)
      #Adding axes annotations:
      plt.xlabel('FEV1')
      plt.ylabel('FeNO')
      plt.title('Gaussian Naive Bayes')
      plt.show()

  gaussian_nb_a(data)
  gaussian_nb_b(data)
  gaussian_nb_c(data)

The decision boundaries let us see why samples are misclassified: many points fall outside their class's decision region. With real data we would need to analyze the specific causes, but since this is synthetic test data, no further analysis is needed.
