Columns

Explore how to work with individual columns in Spark DataFrames, including accessing them, creating new columns with expressions, and sorting data efficiently. Understand the use of the col() method and withColumn() for manipulating structured data in big data projects.

Spark allows us to manipulate individual DataFrame columns using relational or computational expressions. Conceptually, columns represent a type of field and are similar to columns in pandas, R DataFrames, or relational tables. Columns are represented by the type Column in Spark’s supported languages. Let’s see some examples of working with columns next.
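As a minimal sketch of this idea (assuming the `movies` DataFrame used throughout this lesson), a `Column` is an expression rather than data: it describes a computation applied to every row, and combining columns with operators or SQL strings yields new `Column` objects:

```scala
import org.apache.spark.sql.functions.{col, expr}

// A Column describes a computation over every row; nothing is
// evaluated until the column is used in an action.
val rating  = movies.col("hitFlop")   // org.apache.spark.sql.Column
val doubled = rating * 2              // arithmetic produces a new Column
val isHit   = expr("hitFlop > 5")     // same idea via a SQL expression string

movies.select(col("title"), doubled.alias("hitFlopX2")).show(5)
```

Here `alias()` simply renames the computed column in the output; the original DataFrame is unchanged, since Column expressions are immutable descriptions of work to be done.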

Listing all columns

We’ll assume we have already read the file BollywoodMovieDetail.csv in the DataFrame variable movies as shown in the previous lesson. We can list the columns as follows:

scala> movies.columns
res2: Array[String] = Array(imdbId, title, releaseYear, releaseDate, genre, writers, actors, directors, sequel, hitFlop)
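Because `columns` returns a plain `Array[String]`, ordinary Scala collection operations apply to it. For example (again assuming `movies` is already loaded):

```scala
// columns is just an Array[String], so standard collection
// methods work on it directly.
val cols = movies.columns
println(cols.length)                   // 10 columns in this dataset
val hasGenre = cols.contains("genre")  // check that a column exists
```

This is handy for defensive code that validates a schema before running transformations on it.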

Accessing a single column

We can access a particular column of a DataFrame by using the col() method, a standard built-in function in org.apache.spark.sql.functions that returns a Column.
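As a sketch, the Scala API offers several equivalent ways to refer to the same column; which one to use is largely a matter of style (the `$` interpolator requires `spark.implicits._` to be in scope):

```scala
import org.apache.spark.sql.functions.col

// Three equivalent ways to refer to the title column in Scala.
movies.select(col("title")).show(5)      // the col() built-in function
movies.select(movies("title")).show(5)   // the DataFrame's apply method

import spark.implicits._                 // enables the $"..." syntax
movies.select($"title").show(5)
```

All three resolve to the same Column expression, so they produce identical output.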