Data Description

The dataset contains transactions made by credit cards in September 2013 by European cardholders.

This dataset presents transactions that occurred over two days, with 492 frauds out of 284,807 transactions.

The dataset is highly imbalanced: the positive class (frauds) accounts for 0.172% of all transactions.

It contains only numerical input variables, which are the result of a PCA transformation.

Unfortunately, due to confidentiality issues, we cannot provide the original features and more background information about the data.

Features V1, V2, ... V28 are the principal components obtained with PCA; the only features that have not been transformed with PCA are 'Time' and 'Amount'.

Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. Feature 'Amount' is the transaction amount; this feature can be used for example-dependent cost-sensitive learning.

Feature 'Class' is the response variable and it takes value 1 in case of fraud and 0 otherwise.

Business Problem

Task: Detect fraudulent transactions.
Metric: Recall
Question: How many frauds are correctly classified?
Method used: Vaex, which works for big data (~1 billion rows)

In this notebook I use the big-data analysis tool called Vaex (pronounced 'vex'). The name is an acronym of 'visualization and exploration'; the library was originally created to visualize data from the Gaia space telescope, and was later extended with a dataframe API and some machine learning models.

Vaex uses memory mapping and is stunningly fast. One thing I like about Vaex is that it records its state (every action applied to the dataframe), and we can apply the same state to the TEST data during machine learning modelling.
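
A minimal sketch of that state-transfer pattern, using Vaex's state_get/state_set; the file names and the derived column here are hypothetical:

```python
import numpy as np
import vaex

df_train = vaex.open('train.hdf5')                   # hypothetical file
df_train['log_amount'] = np.log1p(df_train.Amount)   # a recorded transformation

state = df_train.state_get()       # capture everything applied to df_train so far
df_test = vaex.open('test.hdf5')                     # hypothetical file
df_test.state_set(state)           # replay the same pipeline; log_amount now exists here too
```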

Like Spark, Vaex dataframes are immutable. In Vaex we create new virtual columns using expressions; these expressions can involve mathematical operations such as np.sqrt.
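
For example, using Vaex's small built-in demo dataset:

```python
import numpy as np
import vaex

df = vaex.example()                    # small built-in demo dataset
df['r'] = np.sqrt(df.x**2 + df.y**2)   # a virtual column: stored as an expression, not as data
print(df[['x', 'y', 'r']].head(3))
```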

Much as TensorFlow builds computation graphs, Vaex combines memory mapping (mmap) with lazy operations, meaning an expression is evaluated only when necessary.
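
A small illustration of this laziness:

```python
import vaex

df = vaex.example()
expr = df.x + df.y     # builds an expression; nothing is computed yet
print(df.mean(expr))   # the aggregation triggers the actual evaluation
```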

Now, without further ado, let's do some data science with it.

Imports
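
The original import cell is not shown, so the following is an assumed set of imports for the workflow described above:

```python
import numpy as np
import matplotlib.pyplot as plt
import vaex
import vaex.ml
```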

Useful Scripts

Load the data
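
A sketch of loading the data; 'creditcard.csv' is the usual filename for this Kaggle dataset, but it is an assumption here. With convert=True, Vaex writes an HDF5 copy once so that later opens are memory-mapped:

```python
import vaex

# convert=True writes an HDF5 copy next to the CSV, so later opens are memory-mapped
df = vaex.open('creditcard.csv', convert=True)
print(len(df), df.column_names)
```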

Data Processing

Create virtual columns

Categorize features

EDA

Correlations
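
Vaex can compute pairwise correlations without materializing the data; a sketch using the column names from the data description (df is the dataframe loaded earlier):

```python
# df is the dataframe loaded earlier; correlations are computed out-of-core
for col in ['V1', 'V2', 'V3', 'Amount']:
    print(col, df.correlation(col, 'Class'))
```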

Scatter plots
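
One simple approach is to evaluate the needed expressions to numpy arrays with .values and plot them with matplotlib; workable here because this dataset fits comfortably in memory:

```python
import matplotlib.pyplot as plt

# .values evaluates an expression to a numpy array
plt.scatter(df.V1.values, df.V2.values, c=df.Class.values, s=2, alpha=0.5, cmap='coolwarm')
plt.xlabel('V1')
plt.ylabel('V2')
plt.show()
```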

Barplots

Modelling

Train Test Split
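
A sketch using vaex-ml's train/test split accessor; since the split is positional, shuffling first is a common precaution (the 80/20 ratio and the shuffle are assumptions about the original workflow):

```python
# shuffle first, because vaex splits by position (ratio and random_state are assumptions)
df = df.sample(frac=1, random_state=42)
df_train, df_test = df.ml.train_test_split(test_size=0.2)
```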

Modelling LightGBM using Vaex
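
A minimal sketch of training LightGBM through vaex.ml; the feature list and hyperparameters below are assumptions:

```python
from vaex.ml.lightgbm import LightGBMModel

features = df_train.get_column_names(regex='V.*') + ['Amount']  # assumed feature set
params = {'objective': 'binary', 'learning_rate': 0.1}          # assumed hyperparameters

booster = LightGBMModel(features=features, target='Class',
                        params=params, num_boost_round=100)
booster.fit(df_train)
df_train = booster.transform(df_train)  # adds a virtual 'lightgbm_prediction' column
```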

Predictions
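
Because the fitted model becomes part of the dataframe state (as a virtual prediction column), predictions on the test set follow from the state transfer described earlier; the 0.5 threshold is an assumption:

```python
# the fitted model travels with the dataframe state, so the test set
# gets the prediction column through the same state transfer shown earlier
df_test.state_set(df_train.state_get())
y_prob = df_test.lightgbm_prediction.values   # predicted fraud probabilities
y_pred = (y_prob > 0.5).astype(int)           # assumed decision threshold
```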

Model Performances
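
Recall, the metric chosen above, can then be computed with scikit-learn, using y_pred from the previous sketch:

```python
from sklearn.metrics import confusion_matrix, recall_score

y_true = df_test.Class.values
print('Recall:', recall_score(y_true, y_pred))   # the business metric
print(confusion_matrix(y_true, y_pred))
```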