PySpark concat_ws()

The split() function converts a string column into an array column by splitting the string on a delimiter pattern. concat_ws() is the inverse of split(): it builds a string column from an array of strings, joining the elements with the provided delimiter.
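Conceptually, the pair behaves much like Python's own str.split() and str.join(). This plain-Python sketch (no Spark involved, just an analogy) illustrates the round trip:

```python
# Plain-Python analogy for split()/concat_ws(); not the Spark API itself
full_name = "Sam A Smith"
name_parts = full_name.split(" ")   # like split(col, ' ')      -> ['Sam', 'A', 'Smith']
rebuilt = " ".join(name_parts)      # like concat_ws(' ', col)  -> 'Sam A Smith'
```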

PySpark functions and methods used in this notebook:
split(), concat_ws(), DataFrame.createOrReplaceTempView(), DataFrame.drop(), spark.sql()

First we load the required libraries

In [1]:
from pyspark.sql import SparkSession
from pyspark.sql.functions import (col, concat_ws, split)
In [3]:
# initializing spark session instance
spark = SparkSession.builder.appName('pyspark concat snippets').getOrCreate()

Then we load our initial records

In [4]:
columns = ["Full_Name","Salary"]
data = [("Sam A Smith", 1000), ("Alex Wesley Jones", 120000), ("Steve Paul Jobs", 5000)]
In [5]:
# converting data to rdds
rdd = spark.sparkContext.parallelize(data)
In [6]:
# Then creating a dataframe from our rdd variable
dfFromRDD2 = spark.createDataFrame(rdd).toDF(*columns)
In [7]:
# visualizing current data before manipulation
dfFromRDD2.show()
+-----------------+------+
|        Full_Name|Salary|
+-----------------+------+
|      Sam A Smith|  1000|
|Alex Wesley Jones|120000|
|  Steve Paul Jobs|  5000|
+-----------------+------+

1) Here we split the Full_Name column, which contains the first, middle, and last names, and add a new column called Name_Parts

In [8]:
# here we add a new column called 'Name_Parts' and use space ' ' as the delimiter string
modified_dfFromRDD2 = dfFromRDD2.withColumn("Name_Parts", split(col('Full_Name'), ' '))
In [9]:
# visualizing the modified dataframe 
modified_dfFromRDD2.show()
+-----------------+------+--------------------+
|        Full_Name|Salary|          Name_Parts|
+-----------------+------+--------------------+
|      Sam A Smith|  1000|     [Sam, A, Smith]|
|Alex Wesley Jones|120000|[Alex, Wesley, Jo...|
|  Steve Paul Jobs|  5000| [Steve, Paul, Jobs]|
+-----------------+------+--------------------+
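One caveat worth flagging: the delimiter passed to split() is treated as a regular expression, not a literal string, so splitting on characters like '.' or '|' requires escaping. The plain-Python sketch below uses the re module to illustrate the same pitfall (the example string is just for illustration):

```python
import re

dotted = "Sam.A.Smith"
# An unescaped '.' matches every character, leaving only empty strings
bad = re.split(r".", dotted)
# Escaping the dot gives the intended parts; in PySpark you would
# similarly write split(col('Full_Name'), '\\.')
good = re.split(r"\.", dotted)
```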

2) We can also use a SQL query to split the Full_Name column. For this, we need to use createOrReplaceTempView() to create a temporary view from the DataFrame. This view remains accessible as long as the SparkSession is active.

In [10]:
# Below we use the SQL query to select the required columns. This includes the new column we create
# by splitting the Full_Name column. 
dfFromRDD2.createOrReplaceTempView("SalaryData")
modified_dfFromRDD3 = spark.sql("select Full_Name, Salary, SPLIT(Full_Name,' ') as Name_Parts from SalaryData")
In [11]:
# visualizing the modified dataframe after executing the SQL query.
# As you can see, it is exactly the same as the previous output.
modified_dfFromRDD3.show(truncate=False)
+-----------------+------+---------------------+
|Full_Name        |Salary|Name_Parts           |
+-----------------+------+---------------------+
|Sam A Smith      |1000  |[Sam, A, Smith]      |
|Alex Wesley Jones|120000|[Alex, Wesley, Jones]|
|Steve Paul Jobs  |5000  |[Steve, Paul, Jobs]  |
+-----------------+------+---------------------+
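Since Spark 3.0, split() also accepts an optional limit argument that caps the number of elements in the resulting array. Python's re.split has a similar maxsplit parameter, though it counts splits rather than elements (maxsplit = limit - 1). A plain-Python sketch of the idea:

```python
import re

full_name = "Alex Wesley Jones"
# Similar in spirit to split(Full_Name, ' ', 2) in Spark 3.0+, which keeps
# at most 2 elements; here maxsplit=1 performs a single split
first, rest = re.split(r" ", full_name, maxsplit=1)
```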

Now we will use the above DataFrame to demonstrate the concat_ws() function, but first we drop the Full_Name column. We will then recreate it using the concatenation operation.

In [12]:
# Removing the Full_Name column using the drop function
modified_dfFromRDD4 = modified_dfFromRDD3.drop('Full_Name')
In [13]:
# visualizing the modified data frame
modified_dfFromRDD4.show()
+------+--------------------+
|Salary|          Name_Parts|
+------+--------------------+
|  1000|     [Sam, A, Smith]|
|120000|[Alex, Wesley, Jo...|
|  5000| [Steve, Paul, Jobs]|
+------+--------------------+

1) Here we concatenate the string elements of the Name_Parts column (first, middle, and last name) and add a new column called Full_Name

In [13]:
# here we add a new column called 'Full_Name' and use space ' ' as the delimiter string to concatenate the Name_Parts
modified_dfFromRDD5 = modified_dfFromRDD4.withColumn("Full_Name", concat_ws(' ', col('Name_Parts')))
In [14]:
# visualizing the modified dataframe. 
# The Full_Name column is the same as the one in the original data frame we started with above.
modified_dfFromRDD5.show()
+------+--------------------+-----------------+
|Salary|          Name_Parts|        Full_Name|
+------+--------------------+-----------------+
|  1000|     [Sam, A, Smith]|      Sam A Smith|
|120000|[Alex, Wesley, Jo...|Alex Wesley Jones|
|  5000| [Steve, Paul, Jobs]|  Steve Paul Jobs|
+------+--------------------+-----------------+
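A useful property of concat_ws() (unlike concat(), which returns null if any input is null) is that it skips null inputs, so a row with a missing middle name would still produce a clean string. A plain-Python sketch of that behavior, where None stands in for SQL NULL:

```python
def concat_ws_like(sep, parts):
    # Mimics CONCAT_WS semantics: null (None) elements are silently skipped
    return sep.join(p for p in parts if p is not None)

full = concat_ws_like(" ", ["Sam", None, "Smith"])  # no stray double space
```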

2) We can also use a SQL query to concatenate the Name_Parts column, as we did for split() above. For this, we again use createOrReplaceTempView() to create a temporary view from the DataFrame, and then execute the concatenation query against that view.

In [14]:
# Below we use the SQL query to select the required columns. This includes the new column we create
# by concatenating the Name_Parts column.
modified_dfFromRDD4.createOrReplaceTempView("SalaryData2")
modified_dfFromRDD6 = spark.sql("select Salary, Name_Parts, CONCAT_WS(' ', Name_Parts) as Full_Name from SalaryData2")
In [15]:
# visualizing the modified dataframe after executing the SQL query.
# As you can see, it is exactly the same as the previous output.
modified_dfFromRDD6.show(truncate=False)
+------+---------------------+-----------------+
|Salary|Name_Parts           |Full_Name        |
+------+---------------------+-----------------+
|1000  |[Sam, A, Smith]      |Sam A Smith      |
|120000|[Alex, Wesley, Jones]|Alex Wesley Jones|
|5000  |[Steve, Paul, Jobs]  |Steve Paul Jobs  |
+------+---------------------+-----------------+

In [16]:
spark.stop()