Running Spark Applications
This lesson explains SparkSession and SparkContext and demonstrates running a Spark application.
In previous lessons, when we fired up the spark-shell, we interacted with an object of type SparkSession, represented by the variable spark. Starting with Spark 2.0, SparkSession is the single, unified entry point for manipulating data with Spark. There is a one-to-one correspondence between a Spark application and a SparkSession: each Spark application is associated with exactly one SparkSession. SparkSession exposes another field, SparkContext, which represents the connection to the Spark cluster. The SparkContext can create RDDs, accumulators, and broadcast variables, and run code on the cluster.
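To make this concrete, here is a minimal sketch of a standalone Spark application in Scala. The application name, the local master setting, and the small RDD computation are illustrative choices, not part of the lesson; the point is that the program builds a SparkSession, reaches the SparkContext through it, and uses that context to create an RDD, an accumulator, and a broadcast variable.

```scala
import org.apache.spark.sql.SparkSession

object SimpleApp {
  def main(args: Array[String]): Unit = {
    // Build (or reuse) the application's single SparkSession.
    val spark = SparkSession.builder()
      .appName("SimpleApp")
      .master("local[*]") // local mode for illustration; typically set by the launcher on a cluster
      .getOrCreate()

    // The SparkContext is available as a field of the SparkSession.
    val sc = spark.sparkContext

    // The SparkContext can create RDDs, accumulators, and broadcast variables.
    val rdd = sc.parallelize(1 to 100)
    val sum = sc.longAccumulator("sum")
    val factor = sc.broadcast(2)

    // Run a simple job on the cluster: double each element and accumulate the total.
    rdd.foreach(x => sum.add(x * factor.value))
    println(s"Sum of doubled values: ${sum.value}")

    spark.stop()
  }
}
```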
The illustration below shows how Spark interacts with and runs jobs on a Hadoop cluster.
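An application like the sketch above is typically packaged as a JAR and launched on a Hadoop cluster with spark-submit, for example `spark-submit --master yarn --class SimpleApp simple-app.jar`, where the class and JAR names are placeholders for your own build artifacts.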