Requirements of Spark
Let's learn about the requirements to build a Spark framework.
Let's understand the functional and non-functional requirements of Spark.
Functional requirements
The functional requirements of Spark are listed below:
Data processing: The system needs to process a large working dataset efficiently and also be able to do it repeatedly for iterative or interactive queries.
Latency and throughput: Our system should achieve low latency and high throughput for tasks like iterative data processing (where we use the same data repeatedly) and ad hoc queries on the same dataset. For example, we expect our system to query many terabytes of data in a few seconds. Usually, the first run is slower than the others because our system needs to load the data from disk, which involves I/O operations. The sketch after these requirements shows what such repeated querying looks like from the programmer's side.
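As a rough illustration, here is a minimal Spark (Scala) sketch of repeated ad hoc queries over a cached dataset. The dataset path and column names (`status`, `latency_ms`) are made up for the example.

```scala
import org.apache.spark.sql.SparkSession

object IterativeQuerySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("IterativeQuerySketch")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical dataset: the path and columns are placeholders.
    val events = spark.read.parquet("hdfs:///data/events")

    // Keep the working set in memory so repeated queries avoid disk I/O.
    events.cache()

    // The first action loads the data from storage (slower); later ad hoc
    // queries on the same dataset are served from memory (faster).
    println(events.filter("status = 'error'").count())
    println(events.filter("latency_ms > 500").count())

    spark.stop()
  }
}
```

The first `count()` pays the cost of loading data from storage; the later queries reuse the in-memory copy, which is what keeps their latency low.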
Non-functional requirements
Following are the non-functional requirements of Spark.
Fault tolerance: If a data partition is lost, it should be recovered effectively. (For parallel processing, data is divided into chunks called data partitions.)
Data locality: The system should do computations on the worker where the data resides.
Persistent in-memory data: The data that needs processing should be loaded only once and should persist in memory for fast access later. Our system should enable the programmer to explicitly tell the system which data should be kept in memory.
Memory management: There should be a fallback strategy to manage workloads bigger than the available cluster memory, where the working set changes gradually by swapping unused data out to disk so that new data can be loaded. A brief sketch of both points follows these requirements.
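A minimal sketch, assuming a hypothetical log dataset, of how a programmer might mark data for in-memory persistence while still allowing a spill-to-disk fallback when memory runs short:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object PersistenceSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("PersistenceSketch")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical dataset path used for illustration only.
    val logs = spark.read.textFile("hdfs:///data/logs")

    // The programmer explicitly marks this data to be kept in memory;
    // partitions that do not fit are spilled to disk instead of failing.
    logs.persist(StorageLevel.MEMORY_AND_DISK)

    // Repeated actions reuse the persisted partitions.
    println(logs.filter(_.contains("ERROR")).count())
    println(logs.filter(_.contains("WARN")).count())

    // Release memory when the working set changes.
    logs.unpersist()
    spark.stop()
  }
}
```

Here, `StorageLevel.MEMORY_AND_DISK` expresses the fallback described above: partitions that do not fit in memory are written to disk rather than failing the job, and `unpersist()` frees memory so new data can be loaded.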
For example, consider an application that runs the same computation over a dataset for several iterations on both Hadoop and Spark.
In the first iteration, both Hadoop and Spark read data from Hadoop Distributed File System (HDFS), so there isn’t much difference. Spark is slightly better due to some specific optimizations.
In the following iterations, Spark achieves about two times better performance than Hadoop. If we had an I/O-intensive application, Spark could give 20 to 40 times better performance than Hadoop (because Spark would perform the operations in memory instead of through expensive I/O).
Estimations
In our design, we’ll have to specify the number of workers required to achieve a task based on the input size. There can be a lot of latency and throughput variance in the tasks performed by Spark. The reasons for such variance include heterogeneous commodity servers with varying computational capacity, varying server load, potentially failing memory, disks, and network interfaces, and other unforeseen failures during processing. For coarse-grained estimates, we can ignore such sources of variance. We’ll make the following assumptions:
Assumptions
- The input size is 25.6 TB (a made-up number to simplify our math).
- Data will be distributed into multiple partitions of 256 MB each.
- Let’s suppose we have to perform three functions: `filter`, `Map`, and `Reduce`.
- Every operation gets its own set of workers, and no worker performs more than one operation.
- All the workers are identical in their memory and computation resources.
- The usable memory of each worker will be 25.6 GB (an assumed number to simplify our calculations).
- A single machine failure happens for every 100 machines working in parallel.
Tasks estimation
- We can estimate the number of partitions on which a `filter` operation will be applied using the following formula:
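A plausible form of this estimate, based on the assumed input size of 25.6 TB and partition size of 256 MB from the list above:

$$\text{Number of partitions} = \frac{\text{Total input size}}{\text{Partition size}} = \frac{25.6\ \text{TB}}{256\ \text{MB}} = 100{,}000$$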