Ensemble Methods

So far we have learnt how to create a classifier and how to use a range of preprocessing methods and hyperparameter tuning techniques to improve our prediction accuracy. In this lab, we shall look at some ensemble techniques that combine multiple classifiers to achieve better results:

  • Bagging
  • Boosting
  • Stacking
In [1]:
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import sklearn
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns

Loading Dataset

In [2]:
df = pd.read_csv('titanic.csv')
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId    891 non-null int64
Survived       891 non-null int64
Pclass         891 non-null int64
Name           891 non-null object
Sex            891 non-null object
Age            714 non-null float64
SibSp          891 non-null int64
Parch          891 non-null int64
Ticket         891 non-null object
Fare           891 non-null float64
Cabin          204 non-null object
Embarked       889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.6+ KB
In [3]:
df.head()
Out[3]:
   PassengerId  Survived  Pclass                                                Name     Sex   Age  SibSp  Parch            Ticket     Fare Cabin Embarked
0            1         0       3                             Braund, Mr. Owen Harris    male  22.0      1      0         A/5 21171   7.2500   NaN        S
1            2         1       1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0      1      0          PC 17599  71.2833   C85        C
2            3         1       3                              Heikkinen, Miss. Laina  female  26.0      0      0  STON/O2. 3101282   7.9250   NaN        S
3            4         1       1        Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0      1      0            113803  53.1000  C123        S
4            5         0       3                            Allen, Mr. William Henry    male  35.0      0      0            373450   8.0500   NaN        S
In [4]:
missing_count = df.isnull().sum()
missing_count[missing_count > 0]
Out[4]:
Age         177
Cabin       687
Embarked      2
dtype: int64
In [5]:
df.dropna(subset=["Age"], axis=0, inplace=True)
df.reset_index(drop=True, inplace=True)
In [6]:
df.Embarked.fillna(value=df.Embarked.mode().loc[0], inplace=True)
In [7]:
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
df.Sex = le.fit_transform(df.Sex)

le = LabelEncoder()
df.Embarked = le.fit_transform(df.Embarked)
In [8]:
sns.heatmap(df.corr())
Out[8]:
[figure: correlation heatmap of the dataframe's numeric features]
In [9]:
cols_to_drop = [
    'PassengerId',
    'Name',
    'Ticket',
    'Cabin'
]

df = df.drop(cols_to_drop, axis=1)
df.head()
Out[9]:
   Survived  Pclass  Sex   Age  SibSp  Parch     Fare  Embarked
0         0       3    1  22.0      1      0   7.2500         2
1         1       1    0  38.0      1      0  71.2833         0
2         1       3    0  26.0      0      0   7.9250         2
3         1       1    0  35.0      1      0  53.1000         2
4         0       3    1  35.0      0      0   8.0500         2
In [10]:
df.isnull().any().any()
Out[10]:
False
In [11]:
X = df.drop('Survived',axis=1)
y = df['Survived']

print (X.shape, y.shape)
(714, 7) (714,)

Splitting the dataset

Split the dataset into train and validation sets.

In [12]:
from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

Model Creation and Evaluation

Create a classifier and train it.

In [ ]:
clf = None
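
A possible starting point is a single decision tree (just a sketch; any sklearn classifier would do here):

from sklearn.tree import DecisionTreeClassifier

# Baseline model: a single decision tree
clf = DecisionTreeClassifier(random_state=1)
clf.fit(X_train, y_train)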

Generate predictions on the validation data and print the accuracy of the model on it.

In [ ]:
y_pred = None
accuracy = None

print(accuracy)
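
A minimal sketch, assuming clf is the fitted classifier from the previous step:

from sklearn.metrics import accuracy_score

# Score the model on the held-out validation set
y_pred = clf.predict(X_val)
accuracy = accuracy_score(y_val, y_pred)

print(accuracy)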

How were the results? We will now try some additional techniques to improve the accuracy.

Bagging

Use the BaggingClassifier from sklearn as a model, and let the base estimator be the model you previously used. Generate the new accuracy.

In [ ]:
#BaggingClassifier
from sklearn.ensemble import BaggingClassifier

bag_clf = None
y_pred_bag = None
bag_acc = None

print(bag_acc)
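
One way this could look, using a decision tree as the base estimator (note: in scikit-learn >= 1.2 the parameter is called estimator rather than base_estimator):

from sklearn.tree import DecisionTreeClassifier

# Bagging: fit many trees on bootstrap samples and aggregate their votes
bag_clf = BaggingClassifier(
    base_estimator=DecisionTreeClassifier(random_state=1),
    n_estimators=100,
    random_state=1,
)
bag_clf.fit(X_train, y_train)
y_pred_bag = bag_clf.predict(X_val)
bag_acc = accuracy_score(y_val, y_pred_bag)

print(bag_acc)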

The random forest algorithm applies bagging to decision trees, additionally choosing a random subset of features at each split. Use the RandomForestClassifier from sklearn and print its accuracy.

In [ ]:
#RandomForestClassifier
from sklearn.ensemble import RandomForestClassifier

rf_clf = None
y_pred_rf = None
rf_acc = None

print(rf_acc)
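
A possible sketch (n_estimators=100 is an illustrative choice, not a tuned value):

# Random forest: bagged trees plus random feature selection at each split
rf_clf = RandomForestClassifier(n_estimators=100, random_state=1)
rf_clf.fit(X_train, y_train)
y_pred_rf = rf_clf.predict(X_val)
rf_acc = accuracy_score(y_val, y_pred_rf)

print(rf_acc)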

Boosting

Weight-based

Use the AdaBoostClassifier to generate predictions on the validation data and print the accuracy. AdaBoost re-weights the training samples after each round so that subsequent learners focus on the examples that were misclassified.

In [ ]:
#AdaBoostClassifier
from sklearn.ensemble import AdaBoostClassifier

ab_clf = None
y_pred_ab = None
ab_acc = None

print(ab_acc)
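
A minimal sketch with near-default settings:

# AdaBoost: each round up-weights the samples earlier learners got wrong
ab_clf = AdaBoostClassifier(n_estimators=100, random_state=1)
ab_clf.fit(X_train, y_train)
y_pred_ab = ab_clf.predict(X_val)
ab_acc = accuracy_score(y_val, y_pred_ab)

print(ab_acc)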

Residual-based

Using gradient boosted decision trees from sklearn, generate predictions and print the accuracy. Gradient boosting fits each new tree to the residual errors of the current ensemble.

In [ ]:
#GradientBoostingClassifier
from sklearn.ensemble import GradientBoostingClassifier

gb_clf = None
y_pred_gb = None
gb_acc = None

print(gb_acc)
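
A minimal sketch, mirroring the previous cells:

# Gradient boosting: each new tree corrects the residuals of the ensemble
gb_clf = GradientBoostingClassifier(n_estimators=100, random_state=1)
gb_clf.fit(X_train, y_train)
y_pred_gb = gb_clf.predict(X_val)
gb_acc = accuracy_score(y_val, y_pred_gb)

print(gb_acc)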

Using the xgboost classifier, generate predictions on the validation data and print the new accuracy.

You can use the following commands to install xgboost.

conda install -c conda-forge xgboost (Linux and OSX)

conda install -c anaconda py-xgboost (All)

In [ ]:
#XGBClassifier
from xgboost import XGBClassifier

xgb_clf = None
y_pred_xgb = None
xgb_acc = None

print(xgb_acc)
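
A minimal sketch; XGBClassifier exposes many tuning knobs, but the defaults are a reasonable starting point:

# XGBoost via its scikit-learn-compatible interface
xgb_clf = XGBClassifier(n_estimators=100, random_state=1)
xgb_clf.fit(X_train, y_train)
y_pred_xgb = xgb_clf.predict(X_val)
xgb_acc = accuracy_score(y_val, y_pred_xgb)

print(xgb_acc)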

Stacking

A few base models are first used to predict the output, and a meta model is then trained on the outputs of these base models.

We'll split the training dataset equally into two parts, A and B. The base models will be trained on A, and their predictions on B will be used to train the meta model.

In [ ]:
X_A = None
y_A = None
X_B = None
y_B = None
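
One way to make the equal split is to reuse train_test_split with test_size=0.5:

# Split the training data 50/50 into parts A and B
X_A, X_B, y_A, y_B = train_test_split(X_train, y_train, test_size=0.5, random_state=1)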

Train the base models on dataset A and generate predictions on dataset B.

In [ ]:
clf_1 = None
y_pred_1 = None
clf_2 = None
y_pred_2 = None
clf_3 = None
y_pred_3 = None
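
A sketch with three of the models used above as base learners (the particular mix is arbitrary; RandomForestClassifier and AdaBoostClassifier were imported earlier):

from sklearn.tree import DecisionTreeClassifier

clf_1 = DecisionTreeClassifier(random_state=1).fit(X_A, y_A)
y_pred_1 = clf_1.predict(X_B)

clf_2 = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_A, y_A)
y_pred_2 = clf_2.predict(X_B)

clf_3 = AdaBoostClassifier(random_state=1).fit(X_A, y_A)
y_pred_3 = clf_3.predict(X_B)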

Create a new dataset C from the predictions of the base models on B.

In [ ]:
X_C = None
y_C = None

X_C.head()
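
A sketch: stack the three prediction vectors into a new feature matrix (the column names p1..p3 are arbitrary):

# Each base model's predictions on B become one feature of C
X_C = pd.DataFrame({'p1': y_pred_1, 'p2': y_pred_2, 'p3': y_pred_3})
y_C = y_B

X_C.head()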

Combine the predictions made by the base models on the validation set to create a dataset D.

In [ ]:
X_D = None
y_D = None
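
The same construction, this time with the base models predicting on the validation set:

X_D = pd.DataFrame({
    'p1': clf_1.predict(X_val),
    'p2': clf_2.predict(X_val),
    'p3': clf_3.predict(X_val),
})
y_D = y_val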

Train a meta model on C and print its accuracy on D.

In [ ]:
meta_clf = None
y_pred_meta = None
meta_acc = None

print(meta_acc)
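
A sketch with logistic regression as the meta model (any classifier could play this role):

from sklearn.linear_model import LogisticRegression

# The meta model learns how to combine the base models' predictions
meta_clf = LogisticRegression()
meta_clf.fit(X_C, y_C)
y_pred_meta = meta_clf.predict(X_D)
meta_acc = accuracy_score(y_D, y_pred_meta)

print(meta_acc)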

Majority Voting Techniques

Instead of just using one classifier, you can gather predictions from several different classifiers and let them 'vote' for the most appropriate label. This can be done with sklearn's VotingClassifier. With hard voting, the label predicted by the majority of classifiers wins; with soft voting, the predicted class probabilities are averaged and the highest-probability label is chosen.

Use a list of different classifiers to instantiate a VotingClassifier. Create two such classifiers: one with hard voting and one with soft voting.

In [ ]:
from sklearn.ensemble import VotingClassifier

estimators = None

soft_voter = None
hard_voter = None
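
One possible setup; soft voting requires estimators that implement predict_proba, which all three below do:

estimators = [
    ('rf', RandomForestClassifier(n_estimators=100, random_state=1)),
    ('ab', AdaBoostClassifier(random_state=1)),
    ('gb', GradientBoostingClassifier(random_state=1)),
]

soft_voter = VotingClassifier(estimators=estimators, voting='soft')
hard_voter = VotingClassifier(estimators=estimators, voting='hard')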

Fit the voting classifiers, and generate their accuracies on the validation data.

In [ ]:
soft_acc = None
hard_acc = None

print("Acc of soft voting classifier:{}".format(soft_acc))
print("Acc of hard voting classifier:{}".format(hard_acc))

Apply hyperparameter tuning to the voting classifier by trying different weights for the estimators; one possible approach is sketched below.

In [ ]:
 
In [ ]:
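
One possible approach is a grid search over the weights parameter of the voting classifier; the weight combinations below are illustrative, not exhaustive:

from sklearn.model_selection import GridSearchCV

# Each weight vector gives one estimator's votes more or less influence
param_grid = {'weights': [[1, 1, 1], [2, 1, 1], [1, 2, 1], [1, 1, 2]]}

grid = GridSearchCV(soft_voter, param_grid, cv=5, scoring='accuracy')
grid.fit(X_train, y_train)

print(grid.best_params_, grid.best_score_)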