So far, we have learnt how to create a classifier and use a range of preprocessing methods and hyperparameter tuning techniques to improve our prediction accuracy. In this lab, we will look at some ensemble techniques, which combine multiple classifiers to achieve better results.
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import sklearn
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
df = pd.read_csv('titanic.csv')
df.info()
df.head()
missing_count = df.isnull().sum()
missing_count[missing_count > 0]
df.dropna(subset=["Age"], axis=0, inplace=True)
df.reset_index(drop=True, inplace=True)
# Fill the missing Embarked values with the most frequent port
df.Embarked = df.Embarked.fillna(df.Embarked.mode().loc[0])
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df.Sex = le.fit_transform(df.Sex)
le = LabelEncoder()
df.Embarked = le.fit_transform(df.Embarked)
# numeric_only=True is needed on pandas >= 2.0, where non-numeric columns
# are no longer dropped silently when computing correlations
sns.heatmap(df.corr(numeric_only=True))
cols_to_drop = [
'PassengerId',
'Name',
'Ticket',
'Cabin'
]
df = df.drop(cols_to_drop, axis=1)
df.head()
df.isnull().any().any()
X = df.drop('Survived',axis=1)
y = df['Survived']
print(X.shape, y.shape)
Split the dataset into train and validation sets.
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=1)
Create a classifier and train it.
clf = None
Generate predictions on the validation data and print the accuracy of the model on it.
y_pred = None
accuracy = None
print(accuracy)
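If you get stuck, here is one possible solution, assuming a simple DecisionTreeClassifier as the base model (any sklearn classifier works here):
# Possible solution: a single decision tree (the choice of model is an assumption)
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

clf = DecisionTreeClassifier(random_state=1)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_val)
accuracy = accuracy_score(y_val, y_pred)
print(accuracy)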
How were the results? We will now use some ensemble techniques to try to improve the accuracy.
Use the BaggingClassifier from sklearn as a model, and let the base estimator be the model you previously used. Generate the new accuracy.
#BaggingClassifier
from sklearn.ensemble import BaggingClassifier
bag_clf = None
y_pred_bag = None
bag_acc = None
print(bag_acc)
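For reference, a possible solution that reuses the decision tree from above as the base estimator:
# Possible solution (the estimator= argument was named base_estimator= before scikit-learn 1.2)
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

bag_clf = BaggingClassifier(estimator=DecisionTreeClassifier(random_state=1),
                            n_estimators=100, random_state=1)
bag_clf.fit(X_train, y_train)
y_pred_bag = bag_clf.predict(X_val)
bag_acc = accuracy_score(y_val, y_pred_bag)
print(bag_acc)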
The random forest algorithm applies bagging to decision trees, additionally sampling a random subset of features at each split. Use the RandomForestClassifier from sklearn and print its accuracy.
#RandomForestClassifier
from sklearn.ensemble import RandomForestClassifier
rf_clf = None
y_pred_rf = None
rf_acc = None
print(rf_acc)
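A possible solution (n_estimators=100 is an arbitrary choice):
# Possible solution: random forest with 100 trees
from sklearn.metrics import accuracy_score

rf_clf = RandomForestClassifier(n_estimators=100, random_state=1)
rf_clf.fit(X_train, y_train)
y_pred_rf = rf_clf.predict(X_val)
rf_acc = accuracy_score(y_val, y_pred_rf)
print(rf_acc)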
Use the AdaBoostClassifier to generate predictions on the validation data and print the accuracy.
#AdaBoostClassifier
from sklearn.ensemble import AdaBoostClassifier
ab_clf = None
y_pred_ab = None
ab_acc = None
print(ab_acc)
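A possible solution (hyperparameters are left close to their defaults):
# Possible solution: AdaBoost with 100 weak learners
from sklearn.metrics import accuracy_score

ab_clf = AdaBoostClassifier(n_estimators=100, random_state=1)
ab_clf.fit(X_train, y_train)
y_pred_ab = ab_clf.predict(X_val)
ab_acc = accuracy_score(y_val, y_pred_ab)
print(ab_acc)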
Using gradient boosted decision trees from sklearn, generate predictions and print the accuracy.
#GradientBoostingClassifier
from sklearn.ensemble import GradientBoostingClassifier
gb_clf = None
y_pred_gb = None
gb_acc = None
print(gb_acc)
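A possible solution:
# Possible solution: gradient boosted trees with default hyperparameters
from sklearn.metrics import accuracy_score

gb_clf = GradientBoostingClassifier(random_state=1)
gb_clf.fit(X_train, y_train)
y_pred_gb = gb_clf.predict(X_val)
gb_acc = accuracy_score(y_val, y_pred_gb)
print(gb_acc)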
Using the xgboost classifier, generate predictions on the validation data and print the new accuracy.
You can use one of the following commands to install xgboost:
conda install -c conda-forge xgboost (Linux and OSX)
conda install -c anaconda py-xgboost (all platforms)
#XGBClassifier
from xgboost import XGBClassifier
xgb_clf = None
y_pred_xgb = None
xgb_acc = None
print(xgb_acc)
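A possible solution (the xgboost sklearn wrapper follows the usual fit/predict API):
# Possible solution: XGBoost with default hyperparameters
from sklearn.metrics import accuracy_score

xgb_clf = XGBClassifier(random_state=1)
xgb_clf.fit(X_train, y_train)
y_pred_xgb = xgb_clf.predict(X_val)
xgb_acc = accuracy_score(y_val, y_pred_xgb)
print(xgb_acc)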
We will now try stacking: a few base models are first used to predict the output, and a meta-model is then trained on the outputs of these base models.
We'll split the training dataset equally into two parts, A and B. The base models will be trained on A, and their predictions on B will be used to train the meta-model.
X_A = None
y_A = None
X_B = None
y_B = None
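One way to do the split is to reuse train_test_split with test_size=0.5:
# Possible solution: a 50/50 split of the training data into A and B
X_A, X_B, y_A, y_B = train_test_split(X_train, y_train, test_size=0.5, random_state=1)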
Train the base models on dataset A and generate predictions on dataset B.
clf_1 = None
y_pred_1 = None
clf_2 = None
y_pred_2 = None
clf_3 = None
y_pred_3 = None
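A possible solution; the three base models below are an arbitrary choice:
# Possible solution: three base models trained on A, predicting on B
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier

clf_1 = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_A, y_A)
y_pred_1 = clf_1.predict(X_B)
clf_2 = AdaBoostClassifier(n_estimators=100, random_state=1).fit(X_A, y_A)
y_pred_2 = clf_2.predict(X_B)
clf_3 = GradientBoostingClassifier(random_state=1).fit(X_A, y_A)
y_pred_3 = clf_3.predict(X_B)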
Create a new dataset C from the predictions of the base models on B.
X_C = None
y_C = None
X_C.head()
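A possible solution; the column names are arbitrary:
# Possible solution: stack the base-model predictions column-wise
X_C = pd.DataFrame({'rf': y_pred_1, 'ab': y_pred_2, 'gb': y_pred_3})
y_C = y_B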
Combine the predictions made by the base models on the validation set to create a dataset D.
X_D = None
y_D = None
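A possible solution, mirroring the construction of C:
# Possible solution: base-model predictions on the validation set
X_D = pd.DataFrame({'rf': clf_1.predict(X_val),
                    'ab': clf_2.predict(X_val),
                    'gb': clf_3.predict(X_val)})
y_D = y_val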
Train a meta-model on C and print its accuracy on D.
meta_clf = None
y_pred_meta = None
meta_acc = None
print(meta_acc)
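A possible solution, assuming logistic regression as the meta-model (the choice is arbitrary):
# Possible solution: logistic regression as the meta-model
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

meta_clf = LogisticRegression()
meta_clf.fit(X_C, y_C)
y_pred_meta = meta_clf.predict(X_D)
meta_acc = accuracy_score(y_D, y_pred_meta)
print(meta_acc)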
Instead of just using one classifier, you can gather predictions from different classifiers, and let them 'vote' for the most appropriate label. This can be done by using sklearn's VotingClassifier.
Use a list of different classifiers to instantiate a VotingClassifier. Create two such classifiers: one with hard voting and one with soft voting. (Note that soft voting requires every estimator to support predict_proba.)
from sklearn.ensemble import VotingClassifier
estimators = None
soft_voter = None
hard_voter = None
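A possible solution; the three estimators below are an arbitrary choice, but all of them implement predict_proba, which soft voting requires:
# Possible solution: estimators that all support predict_proba
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

estimators = [('lr', LogisticRegression(max_iter=1000)),
              ('rf', RandomForestClassifier(n_estimators=100, random_state=1)),
              ('dt', DecisionTreeClassifier(random_state=1))]
soft_voter = VotingClassifier(estimators=estimators, voting='soft')
hard_voter = VotingClassifier(estimators=estimators, voting='hard')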
Fit the voting classifiers, and generate their accuracies on the validation data.
soft_acc = None
hard_acc = None
print("Acc of soft voting classifier:{}".format(soft_acc))
print("Acc of hard voting classifier:{}".format(hard_acc))
Apply hyperparameter tuning on the voting classifier by trying different weights for the estimators.
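For example, a small grid search over candidate weight vectors for the soft voter (the weight grid below is an arbitrary assumption):
# Possible solution: grid search over the estimator weights of the soft voter
from itertools import product
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score

param_grid = {'weights': [list(w) for w in product([1, 2, 3], repeat=3)]}
grid = GridSearchCV(VotingClassifier(estimators=estimators, voting='soft'),
                    param_grid, cv=5, scoring='accuracy')
grid.fit(X_train, y_train)
print(grid.best_params_)
print(accuracy_score(y_val, grid.predict(X_val)))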