Adapted from IBM Cognitive Class Series
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
pd.set_option('display.max_columns', 100)
filename = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/auto.csv"
Create a Python list "headers" containing the column names:
headers = ["symboling","normalized-losses","make","fuel-type","aspiration", "num-of-doors","body-style",
"drive-wheels","engine-location","wheel-base", "length","width","height","curb-weight","engine-type",
"num-of-cylinders", "engine-size","fuel-system","bore","stroke","compression-ratio","horsepower",
"peak-rpm","city-mpg","highway-mpg","price"]
Use the Pandas method read_csv() to load the data from the web address. Set the parameter "names" equal to the Python list "headers".
df = pd.read_csv(filename, names = headers)
Use the method head() to display the first 10 rows of the dataframe.
df.head(n=10)
Use the info() method to see basic information about the dataset.
df.info()
Data Wrangling is the process of converting data from the initial format to a format that may be better for analysis.
As we can see, several question marks appeared in the dataframe; those are missing values which may hinder our further analysis. So, how do we identify all those missing values and deal with them?
Steps for working with missing data: identify the missing values, deal with them (drop or replace), and correct the data format.
In the car dataset, missing data is marked with the question mark "?". We replace "?" with NaN (Not a Number), Python's default missing value marker, for reasons of computational speed and convenience. Here we use the method:
.replace(A, B, inplace=True) to replace A with B
# replace "?" with NaN
df.replace("?", np.nan, inplace = True)
df.head(5)
The missing values are now converted to Python's default marker, NaN. We use Pandas' built-in methods to identify them. There are two methods to detect missing data: ".isnull()" and ".notnull()". The output is a boolean value indicating whether the value is missing.
df.isnull().head(5)
"True" stands for missing value, while "False" stands for not missing value.
missing_count = df.isnull().sum()
missing_count[missing_count > 0]
Based on the summary above, each column has 205 rows of data, and seven columns contain missing data:
How to deal with missing data?
Whole columns should be dropped only if most entries in the column are empty. In our dataset, none of the columns are empty enough to drop entirely. We have some freedom in choosing how to replace missing data; however, some methods may seem more reasonable than others. We will apply the following methods to the affected columns:
Replace by mean: "normalized-losses", "stroke", "bore", "horsepower", "peak-rpm"
Replace by frequency: "num-of-doors" (replace with the most common value)
Drop the whole row: "price" (rows without a price cannot be used to predict price)
# Calculate mean for column normalized-losses
# df["normalized-losses"].mean()
In Pandas, we use
.dtypes to check the data type of each column
.astype() to change the data type
Let's list the data types and the number of unique values for each column.
df_dtype_nunique = pd.concat([df.dtypes, df.nunique()],axis=1)
df_dtype_nunique.columns = ["dtype","unique"]
df_dtype_nunique
As we can see above, some columns are not of the correct data type. Numerical variables should have type 'float' or 'int', and variables with strings such as categories should have type 'object'. For example, 'bore' and 'stroke' variables are numerical values that describe the engines, so we should expect them to be of the type 'float' or 'int'; however, they are shown as type 'object'. We have to convert data types into a proper format for each column using the "astype()" method.
df.head()
numerical_features = ["normalized-losses","stroke","bore","horsepower","peak-rpm","price"]
df[numerical_features] = df[numerical_features].astype("float")
df.dtypes
Let's drop all rows that do not have price data:
# simply drop whole row with NaN in "price" column
df.dropna(subset=["price"], axis=0, inplace=True)
# reset index, because we dropped rows
df.reset_index(drop=True, inplace=True)
avg_norm_loss = df["normalized-losses"].mean()
print("Average of normalized-losses:", avg_norm_loss)
df["normalized-losses"].fillna(value=avg_norm_loss, inplace=True)
# OR
# df["normalized-losses"].replace(np.nan, avg_norm_loss, inplace=True)
# Replace NaN in the remaining numerical columns (stroke, bore, horsepower, peak-rpm) with the column mean
df.fillna(value=df.mean(numeric_only=True), inplace=True)
To see which values are present in a particular column, we can use the ".value_counts()" method:
df['num-of-doors'].value_counts()
We can see that four doors is the most common type. We can also use the ".idxmax()" method to find the most common type automatically:
df['num-of-doors'].value_counts().idxmax()
The replacement procedure is very similar to what we have seen previously.
#replace the missing 'num-of-doors' values by the most frequent
df["num-of-doors"].replace(np.nan, "four", inplace=True)
df.head()
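Equivalently, instead of hard-coding "four", the result of ".idxmax()" could be reused directly. A small sketch (a no-op if run after the cell above, since the missing values are already filled):
# Alternative: replace missing values with whatever category is most frequent
most_common_doors = df['num-of-doors'].value_counts().idxmax()
df['num-of-doors'].replace(np.nan, most_common_doors, inplace=True)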
df.isnull().any().any()
Good! We now have a dataset with no missing values.
Let's first take a look at the variables by utilizing a description method.
The describe function automatically computes basic statistics for all continuous variables. Any NaN values are automatically skipped in these statistics.
This will show:
the count of that variable
the mean
the standard deviation (std)
the minimum value
the IQR (interquartile range: 25%, 50% and 75%)
the maximum value
We can apply the method "describe" as follows:
df.describe()
The default setting of "describe" skips variables of type object. We can apply the method "describe" on the variables of type 'object' as follows:
df.describe(include='object')
The "groupby" method groups data by different categories. The data is grouped based on one or several variables and analysis is performed on the individual groups.
For example, let's group by the variable "drive-wheels". We see that there are 3 different categories of drive wheels.
df['drive-wheels'].unique()
If we want to know, on average, which type of drive wheel is most valuable, we can group "drive-wheels" and then average them.
We can select the columns 'drive-wheels', 'body-style' and 'price', then assign them to the variable "df_group".
df_group = df[['drive-wheels','body-style','price']]
We can then calculate the average price for each of the different categories of data.
# Use groupby to calculate average price for each category of drive-wheels
grouped_test1 = df_group.groupby(['drive-wheels'], as_index=False).mean(numeric_only=True)
grouped_test1
From our data, it seems rear-wheel drive vehicles are, on average, the most expensive, while 4-wheel and front-wheel are approximately the same in price.
You can also group by multiple variables. For example, let's group by both 'drive-wheels' and 'body-style'. This groups the dataframe by the unique combinations of 'drive-wheels' and 'body-style'. We can store the results in the variable 'grouped_test2'.
# Use groupby to calculate average price for each unique combination of category of drive-wheels
grouped_test2 = df_group.groupby(['drive-wheels','body-style'],as_index=False).mean()
grouped_test2
This grouped data is much easier to visualize when it is made into a pivot table. A pivot table is like an Excel spreadsheet, with one variable along the rows and another along the columns. We can convert the dataframe to a pivot table using the method "pivot":
In this case, we will leave the drive-wheel variable as the rows of the table, and pivot body-style to become the columns of the table:
grouped_pivot = grouped_test2.pivot(index='drive-wheels',columns='body-style')
grouped_pivot
Often, we won't have data for some of the pivot cells. We can fill these missing cells with the value 0, but any other value could potentially be used as well. It should be mentioned that missing data is quite a complex subject and is an entire course on its own.
grouped_pivot = grouped_pivot.fillna(0) #fill missing values with 0
grouped_pivot
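One way to visualize the pivot table is a heatmap, with drive-wheels along the rows and body-style along the columns. A small sketch (not part of the original flow; the colormap choice is arbitrary):
# Heatmap of average price by drive-wheels (rows) and body-style (columns)
sns.heatmap(grouped_pivot, annot=True, fmt=".0f", cmap="RdBu_r")
plt.show()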
When visualizing individual variables, it is important to first understand what type of variable you are dealing with. This will help us find the right visualization method for that variable.
# List the data types for each column
print(df.dtypes)
Continuous numerical variables are variables that may contain any value within some range. Continuous numerical variables can have the type "int64" or "float64". A great way to visualize these variables is by using scatterplots with fitted lines.
In order to start understanding the (linear) relationship between an individual variable and the price, we can use "regplot", which plots the scatterplot plus the fitted regression line for the data.
Let's see several examples of different linear relationships:
Positive linear relationship
Let's look at the scatterplot of "engine-size" and "price".
# Engine size as potential predictor variable of price
sns.regplot(x="engine-size", y="price", data=df)
As the engine-size goes up, the price goes up: this indicates a positive direct correlation between these two variables. Engine size seems like a pretty good predictor of price since the regression line is almost a perfect diagonal line.
We can examine the correlation between 'engine-size' and 'price' and see it's approximately 0.86
df[["engine-size", "price"]].corr()
Highway mpg is a potential predictor variable of price
sns.regplot(x="highway-mpg", y="price", data=df)
As the highway-mpg goes up, the price goes down: this indicates an inverse/negative relationship between these two variables. Highway mpg could potentially be a predictor of price.
We can examine the correlation between 'highway-mpg' and 'price' and see it's approximately -0.7
df[['highway-mpg', 'price']].corr()
Weak Linear Relationship
Let's see if "peak-rpm" is a good predictor variable of "price".
sns.regplot(x="peak-rpm", y="price", data=df)
Peak rpm does not seem like a good predictor of the price at all, since the regression line is close to horizontal. Also, the data points are very scattered and far from the fitted line, showing lots of variability. Therefore it is not a reliable predictor of price.
We can examine the correlation between 'peak-rpm' and 'price' and see it's approximately -0.1
df[['peak-rpm','price']].corr()
# Find the correlation between x="stroke", y="price"
df[["stroke","price"]].corr()
# Given the correlation results between "price" and "stroke" do you expect a linear relationship?
# Verify your results using the function "regplot()".
sns.regplot(x="stroke", y="price", data=df)
These are variables that describe a 'characteristic' of a data unit, and are selected from a small group of categories. The categorical variables can have the type "object" or "int64". A good way to visualize categorical variables is by using boxplots.
Let's look at the relationship between "body-style" and "price".
sns.boxplot(x="body-style", y="price", data=df)
We see that the distributions of price between the different body-style categories have a significant overlap, so body-style would not be a good predictor of price. Let's examine "engine-location" and "price":
sns.boxplot(x="engine-location", y="price", data=df)
Here we see that the distributions of price between the two engine-location categories, front and rear, are distinct enough to take engine-location as a potentially good predictor of price.
Let's examine "drive-wheels" and "price".
# drive-wheels
sns.boxplot(x="drive-wheels", y="price", data=df)
Here we see that the distribution of price between the different drive-wheels categories differs; as such drive-wheels could potentially be a predictor of price.
Correlation: a measure of the extent of interdependence between variables.
Causation: the relationship between cause and effect between two variables.
It is important to know the difference between these two, and that correlation does not imply causation. Determining correlation is much simpler than determining causation, as causation may require independent experimentation.
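For example, a quick sketch of computing a correlation coefficient together with its p-value (assuming scipy is available; 'horsepower' is chosen purely as an illustration):
from scipy import stats
# Pearson correlation coefficient and p-value between horsepower and price
pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price'])
print("Pearson coefficient:", pearson_coef, " p-value:", p_value)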
We can calculate the correlation between variables of type "int64" or "float64" using the method "corr":
df.corr(numeric_only=True)
The diagonal elements are always one.
# Compute the correlation matrix
corr = df.corr(numeric_only=True)
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(12, 9))
# Generate a custom diverging colormap
# cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
plt.show()
We now have a better idea of what our data looks like and which variables are important to take into account when predicting the car price. We have narrowed it down to the following variables:
Continuous numerical variables: "length", "width", "curb-weight", "engine-size", "horsepower", "city-mpg", "highway-mpg", "wheel-base", "bore"
Categorical variables: "drive-wheels", "engine-location"
As we now move into building machine learning models to automate our analysis, feeding the model with variables that meaningfully affect our target variable will improve our model's prediction performance.
X = df[["length","width","curb-weight","engine-size","horsepower","city-mpg","highway-mpg","wheel-base","bore","drive-wheels","engine-location"]].copy()
y = df["price"].copy()
X.head()
numerical_features = ["length","width","curb-weight","engine-size","horsepower","city-mpg","highway-mpg","wheel-base","bore"]
categorical_features = ["drive-wheels","engine-location"]
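The next cells scale the numerical features to the [0, 1] range with min-max scaling, x' = (x - min) / (max - min), and one-hot encode the two categorical features with get_dummies.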
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X[numerical_features])
X_scaled
X_encoded = pd.get_dummies(X[categorical_features])
X_encoded.head()
X_new = np.concatenate([X_scaled,X_encoded.values],axis=1)
X_new
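Note that np.concatenate returns a plain array and drops the feature names. If named columns are preferred, a small sketch (X_new_df is a hypothetical name used only for illustration):
# Optional: keep feature names by assembling a DataFrame instead of a bare array
X_new_df = pd.DataFrame(X_new, columns=numerical_features + list(X_encoded.columns))
X_new_df.head()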
from sklearn.model_selection import train_test_split
X_train,X_val,y_train,y_val = train_test_split(X_new,y,test_size=0.33,random_state=42)
from sklearn.linear_model import LinearRegression
reg_lr = LinearRegression().fit(X_train,y_train)
from sklearn.metrics import mean_absolute_error
y_pred_lr = reg_lr.predict(X_val)
mae_lr = mean_absolute_error(y_val, y_pred_lr)
print("Mean Absolute Error of Linear Regression: {}".format(mae_lr))