So far we have learnt how to create a classifier and use a range of preprocessing methods and hyperparameter tuning techniques to improve our prediction accuracy. In this lab, we shall look at some ensemble techniques that combine multiple classifiers to achieve better results.
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import sklearn
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
df = pd.read_csv('titanic.csv')
df.info()
df.head()
missing_count = df.isnull().sum()
missing_count[missing_count > 0]
df.dropna(subset=["Age"], axis=0, inplace=True)
df.reset_index(drop=True, inplace=True)
df['Embarked'] = df['Embarked'].fillna(df['Embarked'].mode()[0])
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df.Sex = le.fit_transform(df.Sex)
le = LabelEncoder()
df.Embarked = le.fit_transform(df.Embarked)
sns.heatmap(df.corr())
cols_to_drop = [
'PassengerId',
'Name',
'Ticket',
'Cabin'
]
df = df.drop(cols_to_drop, axis=1)
df.head()
df.isnull().any().any()
X = df.drop('Survived',axis=1)
y = df['Survived']
print (X.shape, y.shape)
Split the dataset into train and validation sets.
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=1)
Create a classifier and train it.
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier().fit(X_train,y_train)
Generate predictions on the validation data and print the accuracy of the model on it.
from sklearn.metrics import accuracy_score
y_pred = clf.predict(X_val)
accuracy = accuracy_score(y_val,y_pred)
print(accuracy)
How were the results? We will now use some ensemble techniques to try to improve the accuracy.
Use the BaggingClassifier from sklearn as a model, and let the base estimator be the model you previously used. Generate the new accuracy.
#BaggingClassifier
from sklearn.ensemble import BaggingClassifier
# Note: in scikit-learn >= 1.2 this parameter is named `estimator` rather than `base_estimator`.
bag_clf = BaggingClassifier(base_estimator=DecisionTreeClassifier()).fit(X_train,y_train)
y_pred_bag = bag_clf.predict(X_val)
bag_acc = accuracy_score(y_val,y_pred_bag)
print(bag_acc)
The RandomForest algorithm uses bagging on decision trees. Use the RandomForestClassifier from sklearn and print its accuracy.
#RandomForestClassifier
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=100).fit(X_train,y_train)
y_pred_rf = rf_clf.predict(X_val)
rf_acc = accuracy_score(y_val,y_pred_rf)
print(rf_acc)
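As an optional check (not part of the original lab), a fitted random forest exposes per-feature importances; the short sketch below assumes the rf_clf fitted above and the columns of X.
# Optional: inspect which features the random forest relied on most.
importances = pd.Series(rf_clf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances)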
Use adaboost classifier to generate predictions on the validation data and print the accuracy.
#AdaBoostClassifier
from sklearn.ensemble import AdaBoostClassifier
ab_clf = AdaBoostClassifier().fit(X_train,y_train)
y_pred_ab = ab_clf.predict(X_val)
ab_acc = accuracy_score(y_val,y_pred_ab)
print(ab_acc)
Using gradient boosted decision trees from sklearn generate predictions and print the accuracy.
#GradientBoostingClassifier
from sklearn.ensemble import GradientBoostingClassifier
gb_clf = GradientBoostingClassifier().fit(X_train,y_train)
y_pred_gb = gb_clf.predict(X_val)
gb_acc = accuracy_score(y_val,y_pred_gb)
print(gb_acc)
Using the xgboost classifier, generate predictions on the validation data and print the new accuracy.
You can use the following commands to install xgboost:
conda install -c conda-forge xgboost    (Linux and macOS)
conda install -c anaconda py-xgboost    (all platforms)
#XGBClassifier
from xgboost import XGBClassifier
xgb_clf = XGBClassifier().fit(X_train,y_train)
y_pred_xgb = xgb_clf.predict(X_val)
xgb_acc = accuracy_score(y_val,y_pred_xgb)
print(xgb_acc)
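As an optional summary (not in the original lab), the scores computed so far can be collected for a side-by-side comparison; the variable names below are the ones defined in the cells above.
# Optional: compare the single decision tree against the ensembles tried so far.
scores = pd.Series({'DecisionTree': accuracy, 'Bagging': bag_acc, 'RandomForest': rf_acc,
                    'AdaBoost': ab_acc, 'GradientBoosting': gb_acc, 'XGBoost': xgb_acc})
print(scores.sort_values(ascending=False))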
This technique is known as stacking. We'll split the training dataset into two parts, A and B. The base models will be trained on A, and their predictions on B will be used to train a meta-model.
n = len(X_train)
X_A = X_train.iloc[:n//2]
y_A = y_train.iloc[:n//2]
X_B = X_train.iloc[n//2:]
y_B = y_train.iloc[n//2:]
Train the base models on dataset A and generate predictions on dataset B
clf_1 = DecisionTreeClassifier().fit(X_A, y_A)
y_pred_1 = clf_1.predict(X_B)
clf_2 = RandomForestClassifier(n_estimators=100).fit(X_A, y_A)
y_pred_2 = clf_2.predict(X_B)
clf_3 = GradientBoostingClassifier().fit(X_A, y_A)
y_pred_3 = clf_3.predict(X_B)
Create a new dataset C with predictions of base models on B
X_C = pd.DataFrame({'RandomForest': y_pred_2, 'DecisionTrees': y_pred_1, 'GradientBoost': y_pred_3})
y_C = y_B
X_C.head()
Combine predictions made by base models on validation set to create a dataset D
X_D = pd.DataFrame({'RandomForest': clf_2.predict(X_val), 'DecisionTrees': clf_1.predict(X_val), 'GradientBoost': clf_3.predict(X_val)})
y_D = y_val
Train a meta model on C and print its accuracy on D.
from xgboost import XGBClassifier
meta_clf = XGBClassifier().fit(X_C,y_C)
y_pred_meta = meta_clf.predict(X_D)
meta_acc = accuracy_score(y_D,y_pred_meta)
print(meta_acc)
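The manual A/B split above is one way to build a stacked model. scikit-learn (0.22+) also provides a StackingClassifier that performs this split internally via cross-validation; the sketch below is a rough equivalent of the pipeline above, with the estimator names and cv=5 chosen for illustration.
#StackingClassifier (alternative to the manual stacking above)
from sklearn.ensemble import StackingClassifier
stack_clf = StackingClassifier(
    estimators=[('dt', DecisionTreeClassifier()),
                ('rf', RandomForestClassifier(n_estimators=100)),
                ('gb', GradientBoostingClassifier())],
    final_estimator=XGBClassifier(),  # meta-model, as above
    cv=5  # out-of-fold predictions take the place of the manual A/B split
).fit(X_train, y_train)
print(accuracy_score(y_val, stack_clf.predict(X_val)))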
Instead of just using one classifier, you can gather predictions from different classifiers, and let them 'vote' for the most appropriate label. This can be done by using sklearn's VotingClassifier.
Use a list of different classifiers to instantiate a VotingClassifier. Create two such classifiers, one with hard voting and one with soft voting.
from sklearn.ensemble import VotingClassifier
estimators = [('rf', RandomForestClassifier()), ('bag', BaggingClassifier()), ('xgb', XGBClassifier())]
soft_voter = VotingClassifier(estimators=estimators, voting='soft').fit(X_train,y_train)
hard_voter = VotingClassifier(estimators=estimators, voting='hard').fit(X_train,y_train)
The voting classifiers were fitted above; now generate their accuracies on the validation data.
soft_acc = accuracy_score(y_val,soft_voter.predict(X_val))
hard_acc = accuracy_score(y_val,hard_voter.predict(X_val))
print("Acc of soft voting classifier:{}".format(soft_acc))
print("Acc of hard voting classifier:{}".format(hard_acc))
Apply hyperparameter tuning on the voting classifier by trying different weights for the estimators.
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
parameters = {'weights':[[1,1,1],[1,1,2],[1,2,1],[2,1,1],[1,2,2],[2,1,2],[2,2,1]]} #Dictionary of parameters
scorer = make_scorer(accuracy_score) #Initialize the scorer using make_scorer
grid_obj = GridSearchCV(VotingClassifier(estimators=estimators, voting='soft'),parameters,scoring=scorer) #Initialize a GridSearchCV object with above parameters,scorer and classifier
grid_fit = grid_obj.fit(X_train,y_train) #Fit the gridsearch object with X_train,y_train
best_clf_sv = grid_fit.best_estimator_ #Get the best estimator. For this, check documentation of GridSearchCV object
unoptimized_predictions = (VotingClassifier(estimators=estimators, voting='soft').fit(X_train, y_train)).predict(X_val) #Using the unoptimized classifiers, generate predictions
optimized_predictions = best_clf_sv.predict(X_val) #Same, but use the best estimator
acc_unop = accuracy_score(y_val, unoptimized_predictions)*100 #Calculate accuracy for unoptimized model
acc_op = accuracy_score(y_val, optimized_predictions)*100 #Calculate accuracy for optimized model
print("Accuracy score on unoptimized model:{}".format(acc_unop))
print("Accuracy score on optimized model:{}".format(acc_op))