Data Analysis With PySpark DataFrame

Install PySpark

!pip install pyspark

In [1]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
In [2]:
import pyspark
from pyspark.rdd import RDD
from pyspark.sql import Row, DataFrame, SparkSession
from pyspark.sql import functions
from pyspark.sql.functions import (
    lit, desc, col, size, array_contains, isnan, udf, hour,
    array_min, array_max, countDistinct, mean, split,
    regexp_extract, when)
from pyspark.sql.types import *

from pyspark.ml import Pipeline

PySpark Example

For this exercise, I will use the purchases data. Let us take a look at it using the Unix "head" command; we can run shell commands in a Jupyter notebook by putting "!" in front of them.

In [3]:
!head -1 purchases.csv
12-29	11:06	Fort Wayne	Sporting Goods	199.82	Cash

First, we need to create a Spark session by calling SparkSession.builder. This step is necessary before doing anything else.

In [4]:
from pyspark.sql import SparkSession
from pyspark.sql.types import *

# create a session as the entry point to the Spark API
spark = SparkSession \
    .builder \
    .appName("Purchase") \
    .getOrCreate()
In [5]:
# define the schema for the file we want to read
purchaseSchema = StructType([
    StructField("Date", DateType(), True),
    StructField("Time", StringType(), True),
    StructField("City", StringType(), True),
    StructField("Item", StringType(), True),
    StructField("Total", FloatType(), True),
    StructField("Payment", StringType(), True),
])    

PySpark read CSV

In [6]:
# read csv file with our defined schema into Spark DataFrame, and use "tab" delimiter
purchaseDataframe = spark.read.csv(
    "purchases.csv", 
    header=True, schema=purchaseSchema, sep="\t")
#show 3 rows of our DataFrame
purchaseDataframe.show(3)
+----------+-----+---------+----------------+------+--------+
|      Date| Time|     City|            Item| Total| Payment|
+----------+-----+---------+----------------+------+--------+
|2012-12-29|11:06| New York|            Baby|290.14|Discover|
|2012-12-29|11:06|San Diego|            DVDs|150.97|Discover|
|2012-12-29|11:06|  Chicago|Women's Clothing|427.42|    Amex|
+----------+-----+---------+----------------+------+--------+
only showing top 3 rows
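
As an alternative to defining the schema by hand, Spark can infer the column types from the data. A minimal sketch (inference makes an extra pass over the file, so it is slower on large data):

# let Spark infer column types instead of using purchaseSchema
inferredDataframe = spark.read.csv(
    "purchases.csv",
    header=True, inferSchema=True, sep="\t")
inferredDataframe.printSchema()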

PySpark count number of rows

In [7]:
#count number of rows of our dataFrame
num_rows = purchaseDataframe.count()
print("number of rows: ", num_rows)
number of rows:  31273

PySpark print schema

In [8]:
#show our dataFrame schema
purchaseDataframe.printSchema()
root
 |-- Date: date (nullable = true)
 |-- Time: string (nullable = true)
 |-- City: string (nullable = true)
 |-- Item: string (nullable = true)
 |-- Total: float (nullable = true)
 |-- Payment: string (nullable = true)
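
If you need the schema programmatically rather than printed, "dtypes" returns the (column, type) pairs as a plain Python list:

# list of (column name, type name) tuples
print(purchaseDataframe.dtypes)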

PySpark data stats

In [9]:
# show statistics for the column we want
purchaseDataframe.describe('Total').show()
+-------+------------------+
|summary|             Total|
+-------+------------------+
|  count|             31273|
|   mean|249.23653885721387|
| stddev|144.33006767009587|
|    min|               0.0|
|    max|            499.98|
+-------+------------------+
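
When only specific statistics are needed, ".agg()" with the functions module imported earlier avoids computing the full describe() summary. A minimal sketch:

# compute only the aggregates we need
purchaseDataframe.agg(
    functions.min("Total").alias("minTotal"),
    functions.max("Total").alias("maxTotal"),
    functions.avg("Total").alias("avgTotal")).show()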

PySpark Distinct

Find the number of unique values in a column, for example the number of unique city names.

In [10]:
purchaseDataframe.select('City').distinct().count()
Out[10]:
103
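
The same count can be expressed as an aggregate with countDistinct(), which was imported above and can also cover several columns at once:

# distinct city count as an aggregate expression
purchaseDataframe.select(countDistinct("City").alias("numCities")).show()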

Creating a new DataFrame from a subset of an existing DataFrame

In [11]:
# create a new DataFrame from the "City" and "Total" columns
newDataframe = purchaseDataframe.select(purchaseDataframe['City'],
                                        purchaseDataframe['Total'])

# top 5 rows
newDataframe.show(5)

print('=========================')
# schema of dataframe
newDataframe.printSchema() 
+--------------+------+
|          City| Total|
+--------------+------+
|      New York|290.14|
|     San Diego|150.97|
|       Chicago|427.42|
|       Atlanta|108.53|
|St. Petersburg|288.25|
+--------------+------+
only showing top 5 rows

=========================
root
 |-- City: string (nullable = true)
 |-- Total: float (nullable = true)
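
".select()" also accepts derived expressions, so the new DataFrame can include computed columns. A sketch rounding the total to whole dollars (the column name "TotalRounded" is just for illustration):

# derive a rounded column alongside the originals
purchaseDataframe.select(
    col("City"), col("Total"),
    functions.round(col("Total")).alias("TotalRounded")).show(5)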

PySpark Filtering a DataFrame

In [12]:
# keep only rows whose "Total" value is greater than 300
purchaseDataframe.filter(purchaseDataframe['Total'] > 300).show(5)
+----------+-----+-------+----------------+------+-------+
|      Date| Time|   City|            Item| Total|Payment|
+----------+-----+-------+----------------+------+-------+
|2012-12-29|11:06|Chicago|Women's Clothing|427.42|   Amex|
|2012-12-29|11:06|Memphis|         Cameras| 407.8|   Visa|
|2012-12-29|11:06|Houston|            Toys|317.65|   Amex|
|2012-12-29|11:06|Memphis|    Pet Supplies|331.05|   Amex|
|2012-12-29|11:07|Lubbock|    Pet Supplies|421.28|   Cash|
+----------+-----+-------+----------------+------+-------+
only showing top 5 rows
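
Multiple conditions can be combined with "&" and "|"; each condition must be wrapped in parentheses. For example, filtering on both total and payment type:

# rows with Total > 300 that were paid in cash
purchaseDataframe.filter(
    (col("Total") > 300) & (col("Payment") == "Cash")).show(5)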

PySpark Sorting a DataFrame by a certain column

In [13]:
# sort the DataFrame by the "City" column
# (note: .show() returns None, so keep the sorted DataFrame separate)
sortedByCity = purchaseDataframe.orderBy('City')
sortedByCity.show(10)
+----------+-----+-----------+-----------------+------+----------+
|      Date| Time|       City|             Item| Total|   Payment|
+----------+-----+-----------+-----------------+------+----------+
|2012-12-29|11:35|Albuquerque|            Music|191.12|  Discover|
|2012-12-29|12:03|Albuquerque|             Toys|192.16|      Amex|
|2012-12-29|11:15|Albuquerque|            Music|135.52|      Amex|
|2012-12-29|11:48|Albuquerque|             Toys|311.15|      Cash|
|2012-12-29|11:17|Albuquerque|              CDs|454.33|MasterCard|
|2012-12-29|11:31|Albuquerque|      Video Games| 245.6|      Amex|
|2012-12-29|11:39|Albuquerque|            Music|364.49|  Discover|
|2012-12-29|11:23|Albuquerque|Health and Beauty|318.91|      Cash|
|2012-12-29|11:41|Albuquerque|           Crafts|253.45|      Amex|
|2012-12-29|11:17|Albuquerque|   Sporting Goods|456.92|      Amex|
+----------+-----+-----------+-----------------+------+----------+
only showing top 10 rows
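
To sort in descending order, pass desc() (imported above) to orderBy, for example to see the largest purchases first:

# sort by Total, largest first
purchaseDataframe.orderBy(desc("Total")).show(5)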

PySpark groupBy

Calculating the number of transactions in each city:

In [14]:
numTransactionEachCity = purchaseDataframe.groupBy("City").count()
numTransactionEachCity.show(5)
+---------------+-----+
|           City|count|
+---------------+-----+
|North Las Vegas|  273|
|        Phoenix|  328|
|          Omaha|  334|
|      Anchorage|  312|
|        Anaheim|  308|
+---------------+-----+
only showing top 5 rows
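
groupBy() supports other aggregates per group as well. A sketch computing the average purchase total per city and sorting by it:

# average purchase total per city, highest first
avgTotalEachCity = purchaseDataframe.groupBy("City") \
    .agg(functions.avg("Total").alias("avgTotal")) \
    .orderBy(desc("avgTotal"))
avgTotalEachCity.show(5)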

Indexing and Accessing a PySpark DataFrame

Since a Spark DataFrame is distributed across a cluster, we cannot access it by [row, column] position the way we can with a pandas DataFrame. An alternative in PySpark is to add a new "index" column with monotonically_increasing_id() and then use the ".filter()" function on that column. Note that the generated IDs are guaranteed to be increasing and unique, but they are only consecutive within a single partition.

In [15]:
#import monotonically_increasing_id
from pyspark.sql.functions import monotonically_increasing_id

newPurchasedDataframe = purchaseDataframe.withColumn(
    "index", monotonically_increasing_id())
newPurchasedDataframe.show(10)

row2Till4 = newPurchasedDataframe.filter((newPurchasedDataframe['index']>=2) &
                                         (newPurchasedDataframe['index']<=4))
row2Till4.show()
+----------+-----+---------------+----------------+------+--------+-----+
|      Date| Time|           City|            Item| Total| Payment|index|
+----------+-----+---------------+----------------+------+--------+-----+
|2012-12-29|11:06|       New York|            Baby|290.14|Discover|    0|
|2012-12-29|11:06|      San Diego|            DVDs|150.97|Discover|    1|
|2012-12-29|11:06|        Chicago|Women's Clothing|427.42|    Amex|    2|
|2012-12-29|11:06|        Atlanta|            Toys|108.53|    Visa|    3|
|2012-12-29|11:06| St. Petersburg|            Toys|288.25|Discover|    4|
|2012-12-29|11:06|      Henderson|           Books|186.31|Discover|    5|
|2012-12-29|11:06|North Las Vegas|       Computers| 60.47|Discover|    6|
|2012-12-29|11:06|          Boise|            Toys|232.99|Discover|    7|
|2012-12-29|11:06|        Lincoln|  Men's Clothing|190.04|Discover|    8|
|2012-12-29|11:06|    New Orleans|    Pet Supplies|219.07|    Amex|    9|
+----------+-----+---------------+----------------+------+--------+-----+
only showing top 10 rows

+----------+-----+--------------+----------------+------+--------+-----+
|      Date| Time|          City|            Item| Total| Payment|index|
+----------+-----+--------------+----------------+------+--------+-----+
|2012-12-29|11:06|       Chicago|Women's Clothing|427.42|    Amex|    2|
|2012-12-29|11:06|       Atlanta|            Toys|108.53|    Visa|    3|
|2012-12-29|11:06|St. Petersburg|            Toys|288.25|Discover|    4|
+----------+-----+--------------+----------------+------+--------+-----+

Then, to access a value by row and column, combine the index filter with the ".select()" function we used above.

In [16]:
#particular column value

dataRow2ColumnTotal = newPurchasedDataframe.filter(newPurchasedDataframe['index']==2).select('Total')
dataRow2ColumnTotal.show()
+------+
| Total|
+------+
|427.42|
+------+
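
To pull the value itself into Python rather than just display it, collect the row on the driver; first() returns a Row whose fields can be accessed by name:

# fetch the scalar value on the driver
totalValue = dataRow2ColumnTotal.first()["Total"]
print(totalValue)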

In [17]:
# look for rows where "City" is null (there are none in this data)
purchaseDataframe.filter(purchaseDataframe.City.isNull()).show()
+----+----+----+----+-----+-------+
|Date|Time|City|Item|Total|Payment|
+----+----+----+----+-----+-------+
+----+----+----+----+-----+-------+
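
To check every column at once, a common idiom counts the nulls per column by wrapping a when() expression in count() (both imported earlier):

# count null values in each column
purchaseDataframe.select([
    functions.count(when(col(c).isNull(), c)).alias(c)
    for c in purchaseDataframe.columns]).show()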

Handle duplicated data with PySpark

The snippet below shows how to drop duplicate rows and how to count duplicate rows in PySpark.

In [18]:
# count the number of original data rows
n1 = purchaseDataframe.count()
print("number of original data rows: ", n1)
#count the number of data rows after deleting duplicated data
n2 = purchaseDataframe.dropDuplicates().count()
print("number of data rows after deleting duplicated data: ", n2)
n3 = n1 - n2
print("number of duplicate rows: ", n3)
number of original data rows:  31273
number of data rows after deleting duplicated data:  31273
number of duplicate rows:  0
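
dropDuplicates() also accepts a subset of columns, so rows can be deduplicated on selected fields only. A sketch treating two rows as duplicates when they share City and Item:

# deduplicate on selected columns only
n4 = purchaseDataframe.dropDuplicates(["City", "Item"]).count()
print("rows unique by (City, Item): ", n4)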

Handle missing data with PySpark

Delete a row if at least one of its columns has missing data.

In [19]:
# how="any" drops a row if any of its columns is null;
# use how="all" to drop only rows where every column is null
PurchaseNoMissingValue = purchaseDataframe.dropDuplicates().dropna(how="any")
numberOfMissingValueAny = n1 - PurchaseNoMissingValue.count()
print("number of rows with missing data: ", numberOfMissingValueAny)
number of rows with missing data:  0
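
Instead of dropping rows, missing values can be replaced with fillna(). A sketch substituting per-column defaults (the replacement values here are arbitrary):

# replace nulls with per-column defaults
purchaseFilled = purchaseDataframe.fillna({"Total": 0.0, "City": "Unknown"})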
In [20]:
# the original DataFrame is unchanged: transformations return new DataFrames
purchaseDataframe.show(5)
+----------+-----+--------------+----------------+------+--------+
|      Date| Time|          City|            Item| Total| Payment|
+----------+-----+--------------+----------------+------+--------+
|2012-12-29|11:06|      New York|            Baby|290.14|Discover|
|2012-12-29|11:06|     San Diego|            DVDs|150.97|Discover|
|2012-12-29|11:06|       Chicago|Women's Clothing|427.42|    Amex|
|2012-12-29|11:06|       Atlanta|            Toys|108.53|    Visa|
|2012-12-29|11:06|St. Petersburg|            Toys|288.25|Discover|
+----------+-----+--------------+----------------+------+--------+
only showing top 5 rows

PySpark calculate column mean

In [21]:
# groupBy() with no columns aggregates over the whole DataFrame
meanTotal = purchaseDataframe.groupBy().avg("Total").take(1)[0][0]
print('Mean total:', meanTotal)
Mean total: 249.23653885721387
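
An equivalent, slightly more direct way is to aggregate over the whole DataFrame with functions.avg():

# same mean via an explicit aggregate expression
meanTotal2 = purchaseDataframe.agg(functions.avg("Total")).first()[0]
print("Mean total:", meanTotal2)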