Leaner Models, Better Performance: Feature Selection Techniques for Linear Regression

In this article we explore a range of feature selection methods and techniques for reducing the number of features while keeping the model's score acceptable. With less noise and redundant information, the model runs faster and is less complex.

We start with a baseline model that uses all the features. We then apply various feature selection techniques to decide which features to keep and which to drop, without significantly sacrificing the score (the R² score). The methods used are:

  • Correlation matrix
  • Checking the variance inflation factor (VIF)
  • Lasso as a feature selection method
  • Select K-Best (f_regression and mutual_info_regression)
  • Recursive feature elimination (RFE)
  • Sequential forward/backward feature selection

The Dataset

We start from the auto-mpg dataset, which gives us seven predictor features, with the "mpg" (miles per gallon) column as our target variable.

    import pandas as pd

    pd.set_option('display.max_colwidth', None)  # Show full content of each column

    url = "https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data"
    column_names = ["mpg", "cylinders", "displacement", "horsepower", "weight",
                    "acceleration", "model year", "origin", "car name"]
    # sep=r'\s+' replaces the deprecated delim_whitespace=True
    df = pd.read_csv(url, names=column_names, sep=r'\s+', na_values='?')

    # Drop rows with missing values and the free-text car name column
    df = df.dropna()
    df = df.drop(columns='car name')
    print(df.shape)
    df.head()

The dataset still needs some preprocessing; let's deal with the outliers first.

    # Function to count outliers in each column using the 1.5 * IQR rule
    def count_outliers(df):
        outlier_counts = {}
        for col in df.columns:
            if df[col].dtype != 'object':  # Exclude non-numeric columns
                Q1 = df[col].quantile(0.25)
                Q3 = df[col].quantile(0.75)
                IQR = Q3 - Q1
                lower_bound = Q1 - 1.5 * IQR
                upper_bound = Q3 + 1.5 * IQR
                lower_bound_outliers = df[df[col] < lower_bound]
                upper_bound_outliers = df[df[col] > upper_bound]
                total_outliers = len(lower_bound_outliers) + len(upper_bound_outliers)
                outlier_counts[col] = total_outliers
        return outlier_counts

    count_outliers(df)

The result:

    {'mpg': 0,
     'cylinders': 0,
     'displacement': 0,
     'horsepower': 10,
     'weight': 0,
     'acceleration': 11,
     'model year': 0,
     'origin': 0}

"horsepower" and "acceleration" each have a handful of outliers.

    import numpy as np

    def replace_outliers_with_mean(df):
        for col in df.columns:
            if df[col].dtype != 'object':  # Exclude non-numeric columns
                Q1 = df[col].quantile(0.25)
                Q3 = df[col].quantile(0.75)
                IQR = Q3 - Q1
                lower_bound = Q1 - 1.5 * IQR
                upper_bound = Q3 + 1.5 * IQR
                # Identify outliers
                outlier_mask = (df[col] < lower_bound) | (df[col] > upper_bound)
                # Replace outliers with the column mean; df.loc avoids the
                # chained assignment that previously needed a warnings filter
                df.loc[outlier_mask, col] = df[col].mean()
        return df

    df = replace_outliers_with_mean(df)
    count_outliers(df)  # run multiple times according to desired result (zero outliers)

The outliers are now gone:

    {'mpg': 0,
     'cylinders': 0,
     'displacement': 0,
     'horsepower': 0,
     'weight': 0,
     'acceleration': 0,
     'model year': 0,
     'origin': 0}

Now that the dataset is clean, we can move on to the feature selection methods.

Examining the Correlation Matrix

By inspecting the correlation matrix we can see which features correlate strongly with the target variable (miles per gallon) and are therefore useful for prediction. It also helps us spot features that are highly correlated with each other; removing some of them can avoid multicollinearity and improve the model's performance and accuracy.

    # Correlation matrix heatmap
    import matplotlib.pyplot as plt
    import seaborn as sns

    plt.figure(figsize=(8, 5))
    sns.heatmap(df.corr(), annot=True, fmt='.3', cmap='RdBu_r')
    plt.title('Features Heatmap')
    plt.show()

The correlation matrix shows that cylinders, displacement, horsepower, and weight are strongly negatively correlated with our target variable (mpg), while model year and origin show a slight positive correlation.

This helps us identify the features with the greatest influence on the target variable and make better-informed decisions during feature selection.
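As a quick, self-contained illustration of correlation-based screening, features can be ranked by the absolute value of their correlation with the target. The helper name and the tiny synthetic frame below are illustrative stand-ins, not part of the article's pipeline:

```python
import pandas as pd

# Hypothetical helper: rank features by absolute correlation with a target column
def rank_by_target_correlation(df: pd.DataFrame, target: str) -> pd.Series:
    corr = df.corr(numeric_only=True)[target].drop(target)
    # Sort by |correlation| descending, but keep the signed values
    return corr.reindex(corr.abs().sort_values(ascending=False).index)

# Tiny synthetic stand-in for the auto-mpg data
demo = pd.DataFrame({
    "mpg":    [18, 15, 30, 31, 14],
    "weight": [3500, 3700, 2100, 2000, 4300],
    "origin": [1, 1, 3, 2, 1],
})
print(rank_by_target_correlation(demo, "mpg"))
```

On the real dataset, the same ranking reproduces the heatmap's ordering: weight near the top with a negative sign, origin lower with a positive sign.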

Let's first build a baseline model with all the features:

    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LinearRegression

    y = df['mpg']
    # Select predictor variables
    X_base = df.drop(columns=['mpg'])

    # Linear regression helper
    def train_and_evaluate_linear_regression(X, y, test_size=0.3, random_state=42):
        # Split the dataset into training and testing sets
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=random_state)
        # Normalize the features
        scaler = StandardScaler()
        X_train = scaler.fit_transform(X_train)
        X_test = scaler.transform(X_test)
        # Initialize and fit the Linear Regression model
        lr = LinearRegression()
        lr.fit(X_train, y_train)
        # Evaluate the model
        train_score = lr.score(X_train, y_train)
        test_score = lr.score(X_test, y_test)
        return train_score, test_score

Train it:

    train_and_evaluate_linear_regression(X_base, y)

The result:

    (0.8451296595927265, 0.8233345996149848)

This baseline model, using all seven selected features, achieves a training score of 0.845 and a test score of 0.823. Now let's see whether we can reduce the number of features while keeping, or even improving, this score.

Variance Inflation Factor (VIF)

The VIF measures how strongly a given feature correlates with the other features in the dataset. A high VIF value indicates strong multicollinearity, meaning the feature may be redundant. By analyzing VIF values we can identify, and consider removing, redundant features that contribute little to the model's predictive power, thereby improving its performance and accuracy.

    from statsmodels.stats.outliers_influence import variance_inflation_factor

    # VIF score function
    def standardize_and_calculate_vif(df):
        # Standardize the features
        scaler = StandardScaler()
        df_standardized = pd.DataFrame(scaler.fit_transform(df), columns=df.columns)
        # Calculate VIF
        vif = pd.DataFrame()
        vif['features'] = df_standardized.columns
        vif['VIF_Values'] = [variance_inflation_factor(df_standardized.values, i) for i in range(df_standardized.shape[1])]
        # Sort by VIF_Values in descending order
        vif = vif.sort_values(by='VIF_Values', ascending=False).reset_index(drop=True)
        return vif

    standardize_and_calculate_vif(df.drop(columns='mpg'))

Features with high VIF values are the usual candidates for removal. Dropping them reduces model complexity and improves generalization, making the model leaner and more effective without sacrificing performance.
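A common mechanical way to act on these values is to drop the highest-VIF feature, recompute, and repeat until every remaining VIF falls below a chosen threshold (10 is a frequently cited cutoff). A minimal sketch of that loop, computing the VIF directly as 1 / (1 − R²) on a synthetic frame; the helper names and the threshold are illustrative, not from the article:

```python
import numpy as np
import pandas as pd

def vif(X: pd.DataFrame, col: str) -> float:
    # Regress `col` on the remaining columns (with intercept) and
    # return 1 / (1 - R^2), the definition of the VIF
    others = X.drop(columns=col)
    A = np.column_stack([np.ones(len(X)), others.values])
    coef, *_ = np.linalg.lstsq(A, X[col].values, rcond=None)
    resid = X[col].values - A @ coef
    r2 = 1 - resid.var() / X[col].values.var()
    return 1.0 / max(1 - r2, 1e-12)

def drop_high_vif(X: pd.DataFrame, threshold: float = 10.0) -> pd.DataFrame:
    # Iteratively drop the worst feature until all VIFs are below threshold
    X = X.copy()
    while X.shape[1] > 1:
        vifs = {c: vif(X, c) for c in X.columns}
        worst = max(vifs, key=vifs.get)
        if vifs[worst] < threshold:
            break
        X = X.drop(columns=worst)
    return X

# Toy example: 'b' is almost a copy of 'a', so one of the pair gets dropped
rng = np.random.default_rng(0)
a = rng.normal(size=200)
demo = pd.DataFrame({"a": a,
                     "b": a + rng.normal(scale=0.01, size=200),
                     "c": rng.normal(size=200)})
print(list(drop_high_vif(demo).columns))
```

The same loop applied to the auto-mpg predictors leads to the manual choice made below: cylinders and displacement, the two highest-VIF features, are removed.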

    # Selected features according to VIF values
    X_vif = df[[
        'model year',
        'origin',
        'acceleration',
        'horsepower',
        'weight',
        # 'cylinders',     # removed
        # 'displacement',  # removed
    ]]

Now call the training function we wrote earlier:

    train_and_evaluate_linear_regression(X_vif, y)

The result:

    (0.8431221864763683, 0.8256739410002708)

The training score is nearly unchanged, while the test score improved slightly, suggesting that removing these features made the model more robust (better generalization).

Lasso as a Feature Selection Method

Lasso regression is usually used for regularization, to prevent overfitting, where a model scores well on the training data but performs poorly on unseen test data. Lasso can also serve as a feature selection technique: by shrinking coefficients toward zero, it helps identify the most important predictors. This not only reduces the number of features in the model but also focuses attention on the features that materially affect the target variable.

    from sklearn.linear_model import Lasso
    import matplotlib.pyplot as plt

    y = df['mpg']
    # Select predictor variables
    X = df.drop(columns=['mpg'])
    # Split the dataset into training and testing sets
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
    # Initialize and fit the Lasso model
    lasso = Lasso(alpha=0.1)
    lasso.fit(X_train, y_train)
    # Get the coefficients of the features
    coefficients = lasso.coef_
    # Plot the coefficients
    plt.figure(figsize=(7, 3))
    plt.bar(X.columns, coefficients)
    plt.xlabel('Features')
    plt.ylabel('Coefficient Value')
    plt.title('Feature Coefficients using Lasso Regression')
    plt.xticks(rotation=20)
    plt.tight_layout()
    plt.show()

We remove 'weight' and 'displacement', the features with the smallest coefficients:

    # Selected features according to Lasso coefficient values
    X_lasso = df[[
        'model year',
        'origin',
        'acceleration',
        'horsepower',
        # 'weight',        # removed
        'cylinders',
        # 'displacement',  # removed
    ]]

Train:

    train_and_evaluate_linear_regression(X_lasso, y)

The result:

    (0.7881176469410339, 0.7675541084603061)

The result this time is noticeably worse. One reason is that Lasso does not account for multicollinearity; another is that the coefficients above were fit on unscaled features, so their magnitudes are not directly comparable across features with different units.
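A hedged sketch of a scale-then-select pipeline that addresses the comparability issue, using scikit-learn's SelectFromModel to keep the features with non-zero Lasso coefficients. This is shown on synthetic make_regression data rather than the auto-mpg frame, and is an alternative workflow rather than the article's exact method:

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data with 7 features, only 3 of them informative
X, y = make_regression(n_samples=300, n_features=7, n_informative=3,
                       noise=5.0, random_state=42)

# Scale first so the L1 penalty treats all coefficients on a comparable
# footing, then keep only the features with non-negligible coefficients
selector = make_pipeline(
    StandardScaler(),
    SelectFromModel(Lasso(alpha=0.1)),
)
X_selected = selector.fit_transform(X, y)
print(X_selected.shape)
```

With scaling inside the pipeline, the selection no longer depends on each feature's units, which is the main pitfall of reading raw coefficient bars off the plot above.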

Select K-Best

We will use Select K-Best with two scoring functions:

1. f_regression

When selecting features with f_regression, the method measures the degree of association between each feature and the target variable, using the F-statistic to quantify the strength of that relationship. It is particularly well suited to continuous features and targets, and effectively identifies the features most useful for predicting the target. Selecting the K features with the highest F-statistics helps build a model that is both compact and effective.

    from sklearn.feature_selection import SelectKBest, f_regression

    # All features
    X = df[[
        'model year',
        'origin',
        'acceleration',
        'horsepower',
        'weight',
        'cylinders',
        'displacement',
    ]]

    # K-best scoring function
    def K_best_score_list(score_func):
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
        # Normalize
        scaler = StandardScaler()
        X_train = scaler.fit_transform(X_train)
        X_test = scaler.transform(X_test)
        selector = SelectKBest(score_func, k='all')
        selector.fit(X_train, y_train)
        feature_scores = pd.DataFrame({'Feature': X.columns,
                                       'Score': selector.scores_,
                                       'p-Value': selector.pvalues_})
        feature_scores = feature_scores.sort_values(by='Score', ascending=False)
        return feature_scores

Run it:

    K_best_score_list(f_regression)

A high score indicates that the feature is strongly related to the target variable.

    # Evaluate each number of features N by its R² score; the features
    # themselves are selected according to the scoring function
    def evaluate_features(X, y, score_func):
        # Split the data into training and testing sets
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
        # Normalize
        scaler = StandardScaler()
        X_train = scaler.fit_transform(X_train)
        X_test = scaler.transform(X_test)
        r2_score_list = []
        selected_features_list = []
        for k in range(1, len(X.columns) + 1):
            selector = SelectKBest(score_func, k=k)
            x_train_kbest = selector.fit_transform(X_train, y_train)
            x_test_kbest = selector.transform(X_test)
            lr = LinearRegression()
            lr.fit(x_train_kbest, y_train)
            # R² on the test data as the performance metric
            r2_score_list.append(lr.score(x_test_kbest, y_test))
            # Get the selected feature names
            selected_feature_mask = selector.get_support()
            selected_features_list.append(X.columns[selected_feature_mask].tolist())
        result_df = pd.DataFrame({'k': np.arange(1, len(X.columns) + 1),
                                  'r2_score_test_data': r2_score_list,
                                  'selected_features': selected_features_list})
        return result_df

Evaluate the R² score for each number of f_regression-selected features:

    evaluate_features(X, y, f_regression)

2. mutual_info_regression

When selecting features with mutual information regression (mutual_info_regression), the method estimates how much information each feature shares with the target variable. A high mutual information score means the feature is more closely related to the target and therefore important for prediction. Selecting the K features with the highest mutual information scores ensures the model includes the most influential features, improving its predictive power and accuracy.

    from sklearn.feature_selection import mutual_info_regression

    K_best_score_list(mutual_info_regression)

    evaluate_features(X, y, mutual_info_regression)

Recursive Feature Elimination (RFE)

Recursive feature elimination (RFE) iteratively removes the less important features from the model and evaluates the effect on model performance. It typically relies on model coefficients or feature importances to decide which feature to remove at each iteration. The process continues until the desired number of features remains, ensuring that only the most relevant predictors stay in the final model. This helps streamline the model's structure while preserving its efficiency and accuracy.

    from sklearn.feature_selection import RFE

    # All features
    X = df[[
        'cylinders',
        'weight',
        'model year',
        'displacement',
        'acceleration',
        'horsepower',
        'origin',
    ]]

    def evaluate_rfe_features(X, y):
        # Split the data into training and testing sets
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
        # Normalize
        scaler = StandardScaler()
        X_train = scaler.fit_transform(X_train)
        X_test = scaler.transform(X_test)
        r2_score_list = []
        selected_features_list = []
        for k in range(1, len(X.columns) + 1):
            lr = LinearRegression()
            rfe = RFE(estimator=lr, n_features_to_select=k)
            x_train_rfe = rfe.fit_transform(X_train, y_train)
            x_test_rfe = rfe.transform(X_test)
            lr.fit(x_train_rfe, y_train)
            # R² on the test data as the performance metric
            r2_score_list.append(lr.score(x_test_rfe, y_test))
            # Get the selected feature names
            selected_feature_mask = rfe.get_support()
            selected_features_list.append(X.columns[selected_feature_mask].tolist())
        result_df = pd.DataFrame({'k': np.arange(1, len(X.columns) + 1),
                                  'r2_score': r2_score_list,
                                  'selected_features': selected_features_list})
        return result_df

Test it:

    evaluate_rfe_features(X, y)

Sequential Forward and Backward Selection

  • Sequential forward selection (SFS): starts from an empty feature set and adds one feature at a time, at each step choosing the feature that most improves model performance.
  • Sequential backward selection (SBS): starts from the model with all features and removes one feature per step, each time choosing the one whose removal hurts performance the least, until a stopping criterion is met.

Both methods iteratively refine the feature set toward the best model performance. Sequential forward selection suits building a model up from a few features, while sequential backward selection suits simplifying down from a full-feature model. Either way, they help determine which features matter most for predicting the target, keeping the model both lean and effective.

    from mlxtend.feature_selection import SequentialFeatureSelector as SFS
    from sklearn.metrics import r2_score

    def feature_selection_with_sfs_sbs(X, y, test_size=0.3, random_state=42,
                                       forward=True, floating=False, scoring='r2', cv=5):
        # List of feature names
        feature_names = X.columns.tolist()
        # Split the dataset into training and testing sets
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=random_state)
        # Standardize the data (recommended for models like linear regression)
        scaler = StandardScaler()
        X_train_scaled = scaler.fit_transform(X_train)
        X_test_scaled = scaler.transform(X_test)
        # Initialize lists to store results
        selected_features = []
        r2_scores = []
        # Iterate from 1 to the total number of features
        for k in range(1, X.shape[1] + 1):
            # Initialize the Sequential Feature Selector
            sfs = SFS(LinearRegression(),
                      k_features=k,
                      forward=forward,
                      floating=floating,
                      scoring=scoring,  # Use the specified scoring for evaluation
                      cv=cv)
            # Fit the Sequential Feature Selector to the training data
            sfs.fit(X_train_scaled, y_train)
            # Keep only the selected features
            X_train_selected = sfs.transform(X_train_scaled)
            X_test_selected = sfs.transform(X_test_scaled)
            # Train a new model using only the selected features
            model = LinearRegression()
            model.fit(X_train_selected, y_train)
            # Evaluate the model on the test set using the R² score
            y_pred = model.predict(X_test_selected)
            r2 = r2_score(y_test, y_pred)
            # Store results
            selected_features.append([feature_names[i] for i in sfs.k_feature_idx_])
            r2_scores.append(r2)
        # Create a DataFrame to store the results
        results_df = pd.DataFrame({
            'Number of Features': list(range(1, X.shape[1] + 1)),
            'Selected Features': selected_features,
            'R-squared Score': r2_scores,
        })
        return results_df

Sequential forward selection:

    feature_selection_with_sfs_sbs(X, y,
                                   forward=True,
                                   scoring='r2',
                                   cv=0)

Sequential backward selection:

    feature_selection_with_sfs_sbs(X, y,
                                   forward=False,
                                   scoring='r2',
                                   cv=0)

Forward and backward sequential feature selection both identify the optimal feature subset.

Summary

Recursive feature elimination (RFE), sequential forward selection (SFS), and sequential backward selection (SBS) all indicate that 'weight', 'model year', and 'horsepower' are the most important features. Using just these three features we obtain a solid R² score of 0.823, matching the R² score of 0.823 from the baseline model with all seven features. (These R² scores are computed on test data never seen during training.)

This shows that careful feature selection lets us simplify the model without losing performance, improving both efficiency and interpretability. Reducing the number of features also cuts the computational cost of training and prediction, so we gain efficiency while maintaining prediction quality.
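To make that point concrete in a self-contained way, here is a small synthetic experiment (not the auto-mpg data) in which only three of seven features carry signal; a model restricted to those three matches the full model's test R², mirroring the article's conclusion:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 7 features, but only the first 3 drive the target
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 7))
y = 3 * X[:, 0] - 2 * X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Full model vs. a model restricted to the three informative features
full = LinearRegression().fit(X_tr, y_tr).score(X_te, y_te)
reduced = LinearRegression().fit(X_tr[:, :3], y_tr).score(X_te[:, :3], y_te)
print(f"full: {full:.3f}  reduced: {reduced:.3f}")
```

The two test scores come out nearly identical, just as the three-feature auto-mpg model matched the seven-feature baseline above.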

Author: kaiku
