
What is Big Data? Characteristics, types, and technologies

13 min read
Dec 15, 2020

Big Data is a modern analytics trend that allows companies to make more data-driven decisions than ever before. When analyzed, the insights provided by these large amounts of data lead to real commercial opportunities, be it in marketing, product development, or pricing.

Companies of all sizes and sectors are joining the movement, hiring data scientists and Big Data solution architects to keep up. With the Big Data market expected to nearly double by 2025 and user data generation rising, now is the best time to become a Big Data specialist.

Today, we’ll get you started on your Big Data journey and cover the fundamental concepts, uses, and tools essential for any aspiring data scientist.

Here’s what we’ll go through today:

  • What is Big Data?
  • Big Data History
  • What is Big Data used for?
  • How does Big Data work?
  • Big Data terminology
  • Big Data technologies
  • What to learn next

Master Big Data with our hands-on course today.

Introduction to Big Data and Hadoop

This course offers a rich, interactive experience for learning the fundamentals of Big Data. Throughout the course, you will have plenty of opportunities to get your hands dirty with functioning Hadoop clusters. You will start by learning about the rise of Big Data and the different types of data: structured, unstructured, and semi-structured. You will then dive into the core technologies of Big Data, such as YARN (Yet Another Resource Negotiator), MapReduce, HDFS (Hadoop Distributed File System), and Spark. By the end of this course, you will have the foundations in place to start working with Big Data, a massively growing field.

10hrs · Beginner · 48 Playgrounds · 19 Quizzes

What is Big Data?#

Big data refers to large collections of data that are so complex and expansive that they cannot be interpreted by humans or by traditional data management systems. When properly analyzed using modern tools, these huge volumes of data give businesses the information they need to make informed decisions.

New software developments have recently made it possible to use and track big data sets. Much of this user information would seem meaningless and unconnected to the human eye. However, big data analytics tools can track the relationships between hundreds of types and sources of data to produce useful business intelligence.

All big data sets have three defining properties, known as the 3 V’s:

  • Volume: Big data sets must include millions of unstructured, low-density data points. Companies that use big data can keep anything from dozens of terabytes to hundreds of petabytes of user data. The advent of cloud computing means companies now have access to zettabytes of data! All data is saved regardless of apparent importance. Big data specialists argue that sometimes the answers to business questions can lie in unexpected data.

  • Velocity: Velocity refers to the fast generation and application of big data. Big data is received, analyzed, and interpreted in quick succession to provide the most up-to-date findings. Many big data platforms even record and interpret data in real-time.

  • Variety: Big data sets contain different types of data within the same unstructured database. Traditional data management systems use structured relational databases that contain specific data types with set relationships to other data types. Big data analytics programs draw on many different types of unstructured data and look for correlations across all of it. Big data approaches therefore often lead to a more complete picture of how each factor is related.

Correlation vs. Causation

Big data analysis only finds correlations between factors, not causation. In other words, it can find if two things are related, but it cannot determine if one causes the other.

It’s up to data analysts to decide which data relationships are actionable and which are just coincidental correlations.
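
To see what this means in practice, here is a minimal sketch in Java that computes the Pearson correlation coefficient between two hypothetical series. The numbers are invented for illustration: ice cream sales and drowning incidents correlate strongly, yet neither causes the other; both rise in summer.

import java.util.Arrays;

public class Correlation {
    // Pearson correlation coefficient between two equal-length series.
    static double pearson(double[] x, double[] y) {
        double meanX = Arrays.stream(x).average().orElse(0);
        double meanY = Arrays.stream(y).average().orElse(0);
        double cov = 0, varX = 0, varY = 0;
        for (int i = 0; i < x.length; i++) {
            cov  += (x[i] - meanX) * (y[i] - meanY);
            varX += (x[i] - meanX) * (x[i] - meanX);
            varY += (y[i] - meanY) * (y[i] - meanY);
        }
        return cov / Math.sqrt(varX * varY);
    }

    public static void main(String[] args) {
        // Hypothetical monthly figures: ice cream sales and drowning incidents.
        double[] iceCreamSales = {20, 35, 50, 80, 95, 60};
        double[] drownings     = { 2,  3,  5,  8, 10,  6};
        // Prints a coefficient close to 1 (a strong correlation), but the
        // analysis says nothing about whether one factor causes the other.
        System.out.printf("r = %.2f%n", pearson(iceCreamSales, drownings));
    }
}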


Big Data History#

The concept of Big Data has been around since the 1960s and ’70s, but at the time, organizations lacked the means to gather and store that much data.

Practical big data only took off around 2005, as developers at organizations like YouTube and Facebook realized the amount of data they generated in their day-to-day operations.

Around the same time, new advanced frameworks and storage systems like Hadoop and NoSQL databases allowed data scientists to store and analyze bigger datasets than ever before. Open-source frameworks like Apache Hadoop and Apache Spark provided the perfect platform for big data to grow.

Big data has continued to advance, and more companies recognize the advantages of predictive analytics. Modern big data approaches leverage the Internet of Things (IoT) and cloud computing to record more data from across the world, and machine learning to build more accurate models.

While it’s hard to predict what the next advancement in big data will be, it’s clear that big data will continue to become more scaled and effective.



What is Big Data used for?#

Big data applications are helpful across the business world, not just in tech. Here are some use cases of Big Data:

  • Product Decision Making: Big data is used by companies like Netflix and Amazon to develop products based on upcoming product trends. They can use combined data from past product performance to anticipate what products consumers will want before they want them. They can also use pricing data to determine the optimal price to sell the most to their target customers.

  • Testing: Big data can analyze millions of bug reports, hardware specifications, sensor readings, and past changes to recognize failure points in a system before they occur. This helps maintenance teams prevent problems and costly system downtime.

  • Marketing: Marketers compile big data from previous marketing campaigns to optimize future advertising campaigns. Combining data from retailers and online advertising, big data can help fine-tune strategies by finding subtle preferences for ads with certain image types, colors, or word choices.

  • Healthcare: Medical professionals use big data to find drug side effects and catch early indications of illness. For example, imagine there is a new condition that affects people quickly and without warning, but many of the affected patients reported a headache at their last annual checkup. This relationship would be flagged as a clear correlation by big data analysis but might be missed by the human eye due to differences in time and location.

  • Customer Experience: Big data is used by product teams after a launch to assess the customer experience and product reception. Big data systems can analyze large data sets from social media mentions, online reviews, and feedback on product videos to get a better indication of what problems customers are having and how well the product is received.

  • Machine learning: Big data has become an important part of machine learning and artificial intelligence technologies, as it offers a huge reservoir of data to draw from. ML engineers use big data sets as varied training data to build more accurate and resilient predictive systems.



How does Big Data work?#

Big data alone won’t provide the business intelligence that many companies are searching for. You’ll need to process the data before it can provide actionable insights.

This process involves 3 major stages:


1. Data flow intake

The first stage has data flowing into the system in huge quantities. This data is of many types and will not be organized into any usable schema. The collection at this stage is called a data lake, because all the data is lumped together and impossible to differentiate.

Your company’s system must have the data processing power and storage capacity to handle this much data. On-premises storage is the most secure but can become overworked depending on the volume.

Cloud computing and distributed storage are often the secret to effective flow intake. They allow you to divide storage among multiple databases on the system.


2. Data analysis

Next, you’ll need a system that automatically cleans and organizes data. Data at this scale and frequency is too large to organize by hand.

Popular strategies include setting criteria that throw out any faulty data, or building in-memory analytics that continually add new data to an ongoing analysis. Essentially, this stage is like taking a pile of documents and sorting it until everything is filed in a structured way.
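
For instance, a criteria-based cleaning step might look like this minimal Java sketch. The record fields and thresholds here are hypothetical, chosen only to show the pattern of discarding faulty records before analysis:

import java.util.List;

public class RideCleaner {
    // Hypothetical raw record from the intake stage.
    record RideEvent(String userId, double waitSeconds, boolean cancelled) {}

    // Keep only records that satisfy basic sanity criteria;
    // everything else is thrown out before analysis.
    static List<RideEvent> clean(List<RideEvent> raw) {
        return raw.stream()
                .filter(e -> e.userId() != null && !e.userId().isBlank())
                .filter(e -> e.waitSeconds() >= 0 && e.waitSeconds() < 3600)
                .toList();
    }
}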

At this stage, you’ll have the raw findings, but not yet know what to do with them. For example, a ride-share service may find that over 50% of users will cancel a ride if the incoming driver is stopped for more than one minute.


3. Data-driven decision making

At the final stage, you’ll interpret the raw findings to form a concrete plan. Your job as a data scientist will be to look at all the findings and create an evidence-supported proposal for how to improve the business.

In the ride-share example, you might decide that the service should send drivers on routes that keep them moving, even if the trip takes slightly longer, to reduce customer frustration. On the other hand, you could decide to offer the user an incentive to wait until the driver arrives.

Either of these options is valid because your big data analysis cannot determine which aspect of this interaction needs to change to increase customer satisfaction.


Big Data terminology#


Structured Data:

This data has pre-defined organizational properties that make it easy to search and analyze. The data is backed by a model that dictates the size of each field: its type, length, and restrictions on what values it can take. An example of structured data is “units produced per day”, as each entry has defined fields for the product type and the number produced.
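
As a small illustration of what a pre-defined schema means, the “units produced per day” example could be modeled in Java like this; the field names and the validation rule are assumptions made for the example:

public class ProductionRecord {
    // Every record has the same fixed fields, each with a set type.
    private final java.time.LocalDate day;  // the production date
    private final String productType;       // e.g. "widget"
    private final int unitsProduced;        // restricted to non-negative values

    public ProductionRecord(java.time.LocalDate day, String productType, int unitsProduced) {
        if (unitsProduced < 0) {
            throw new IllegalArgumentException("unitsProduced must be non-negative");
        }
        this.day = day;
        this.productType = productType;
        this.unitsProduced = unitsProduced;
    }
}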


Unstructured Data:

This is the opposite of structured data. It doesn’t have any pre-defined organizational property or conceptual definition. Unstructured data makes up the majority of big data. Some examples of unstructured data are social media posts, phone call transcripts, or videos.


Database:

An organized collection of stored data that can contain either structured or unstructured data. Databases are designed to maximize the efficiency of data retrieval. They come in two types: relational and non-relational.


Database management system:

Usually, when referring to databases such as MySQL and PostgreSQL, we are really talking about a database management system (DBMS). A DBMS is software for creating, maintaining, and deleting multiple individual databases. It provides peripheral services and interfaces for the end user to interact with those databases.


Relational Database (SQL):

Relational databases consist of structured data stored as rows in tables. The columns of a table follow a defined schema that describes the type and size of the data that a table column can hold. Think of a schema as a blueprint of each record or row in the table. Relational databases must have structured data and the data must have some logical relationship to each other.

For example, a Reddit-like forum would use a relational database as the data’s logical structure is that users have a list of following forums, forums have a list of posts, and posts have a list of posted comments. Popular implementations include Oracle, DB2, Microsoft SQL Server, PostgreSQL, and MySQL.
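
As a sketch of how those relationships might be encoded, here is the forum schema expressed as SQL issued through JDBC. The connection URL, table names, and column names are all hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ForumSchema {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; point these at your own database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/forum", "user", "password");
             Statement st = conn.createStatement()) {
            // Foreign keys encode the logical relationships: forums contain
            // posts, posts contain comments, and users author both.
            st.execute("CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT NOT NULL)");
            st.execute("CREATE TABLE forums (id SERIAL PRIMARY KEY, title TEXT NOT NULL)");
            st.execute("CREATE TABLE posts (id SERIAL PRIMARY KEY, "
                    + "forum_id INT REFERENCES forums(id), "
                    + "author_id INT REFERENCES users(id), body TEXT)");
            st.execute("CREATE TABLE comments (id SERIAL PRIMARY KEY, "
                    + "post_id INT REFERENCES posts(id), "
                    + "author_id INT REFERENCES users(id), body TEXT)");
        }
    }
}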


Non-relational Database:

Non-relational databases have no rigid schema and contain unstructured data. Data within has no logical relationship to other data in the database and is organized differently based on the needs of the company. Some common types include key-value stores (Redis, Amazon DynamoDB), column stores (HBase, Cassandra), document stores (MongoDB, Couchbase), graph databases (Neo4j), and search engines (Solr, Elasticsearch, Splunk). The majority of big data is stored in non-relational databases, as they can contain multiple types of data.
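
To make the key-value idea concrete, here is a minimal in-memory stand-in using a plain Java map. Production stores like Redis expose essentially the same put/get contract, just over the network and at far larger scale; the keys and values below are invented for the example:

import java.util.HashMap;
import java.util.Map;

public class KeyValueDemo {
    public static void main(String[] args) {
        // A key-value store imposes no schema on its values:
        // any string of data can live under any key.
        Map<String, String> store = new HashMap<>();
        store.put("session:42", "{\"user\":\"ada\",\"cart\":[\"book\",\"lamp\"]}");
        store.put("page:home", "<html>...</html>");
        System.out.println(store.get("session:42"));
    }
}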


Data Lake:

A repository of data stored in raw form. Like water in a lake, all the data is intermixed, and no collection of data can be used until it is separated from the lake. Data in the data lake doesn’t need to have a defined purpose yet; it is stored in case a use is discovered later.


Data Warehouse:

A repository for filtered and structured data with a predefined purpose. Essentially, this is the structured equivalent of a data lake.



Big Data technologies#

Finally, we’ll explore the top tools used by modern data scientists as they create Big Data solutions.


Hadoop#

Hadoop is a reliable, distributed, and scalable data processing platform for storing and analyzing vast amounts of data. Hadoop allows you to connect many computers into a network used to easily store and compute huge datasets.

The lure of Hadoop is its ability to run on cheap commodity hardware, while its competitors may need expensive hardware to do the same job. It’s also open-source. Hadoop makes Big Data solutions affordable for everyday businesses and has made Big Data approachable to those outside of the tech industry.

Hadoop is sometimes used as a blanket term referring to all tools in the Apache data science ecosystem.


MapReduce#

MapReduce is a programming model used across a cluster of computers to process and generate Big Data sets with a parallel, distributed algorithm. It can be implemented on Hadoop and other similar platforms.

A MapReduce program contains a map procedure that filters and sorts data into a usable form. Once the data is mapped, it’s passed to a reduce procedure that summarizes the trends of the data. Multiple computers in a system can perform this process at the same time to quickly process data from the raw data lake to usable findings.
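
Before looking at Hadoop’s version below, here is the same map-then-reduce idea as a single-machine sketch using plain Java streams: each word is mapped to a count of one, then the counts are reduced by summing per key. The input string is invented for the example:

import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class MapReduceSketch {
    public static void main(String[] args) {
        String input = "honda pilot honda civic toyota camry";
        // Map: emit one count per word. Reduce: sum the counts per word.
        Map<String, Long> counts = Arrays.stream(input.split(" "))
                .collect(Collectors.groupingBy(word -> word, Collectors.counting()));
        System.out.println(counts); // e.g. {honda=2, civic=1, pilot=1, toyota=1, camry=1}
    }
}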

The MapReduce programming model has the following characteristics:

  • Distributed: MapReduce is a distributed framework consisting of clusters of commodity hardware that run map or reduce tasks.

  • Parallel: The map and reduce tasks always work in parallel.

  • Fault-tolerant: If any task fails, it is rescheduled on a different node.

  • Scalable: It can scale arbitrarily. As the problem becomes bigger, more machines can be added to solve the problem in a reasonable amount of time; the framework can scale horizontally rather than vertically.


Mapper Class in Java#

Let’s see how we can implement MapReduce in Java.

First, we’ll use the Mapper class provided by the Hadoop package (org.apache.hadoop.mapreduce) to create the map operation. This class maps input key/value pairs to a set of intermediate key/value pairs. Conceptually, a mapper performs parsing, projection (selecting fields of interest from the input), and filtering (removing non-interesting or malformed records).

As an example, we’ll create a mapper that takes a list of cars and emits each car’s brand with a count of 1; a list containing a Honda Pilot and a Honda Civic would produce (honda, 1), (honda, 1).

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CarMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // We can ignore the key (the byte offset of the line) and only work with the value.
    // Each input line describes one car, e.g. "Honda Pilot"; the brand is the first word.
    String brand = value.toString().split(" ")[0];
    // Emit (brand, 1): one physical count of that brand.
    context.write(new Text(brand.toLowerCase()), new IntWritable(1));
  }
}

The most important part of this code is the call to context.write. Here, we output key/value pairs that get sorted and aggregated by reducers later on.

Don’t confuse the key and value we write with the key and value passed in to the map(...) method. The key we write is the name of the car brand. Since each occurrence of the key denotes one physical count of that brand of car, we output 1 as the value. The output key type must be both serializable and comparable (Hadoop’s WritableComparable), while the value type only needs to be serializable (Writable).


Reducer Class in Java#

Next, we’ll implement the reduce operation using the Reducer class provided by Hadoop. The reducer takes the output of the mapper and returns the total number of cars of each brand.

The reduce work is split among one or more reducer nodes for faster processing. All values for the same key (brand) are processed by the same node.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class CarReducer extends Reducer<Text, IntWritable, Text, LongWritable> {
  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    // Sum every count emitted by the mappers for this brand.
    long sum = 0;
    for (IntWritable occurrence : values) {
      sum += occurrence.get();
    }
    // Emit (brand, total count).
    context.write(key, new LongWritable(sum));
  }
}

The for loop iterates through every value received for the same key and accumulates the total count in the sum variable.

Mapper and Reducer are the backbone of many Hadoop solutions. You can expand these basic forms to handle huge volumes of data or reduce them to highly specific summaries.
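
To actually run the job, the Mapper and Reducer are wired together by a driver class that configures and submits the work. Here is a minimal sketch; the job name is arbitrary and the input/output paths are assumed to arrive as command-line arguments:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CarCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "car brand count");
        job.setJarByClass(CarCountDriver.class);
        job.setMapperClass(CarMapper.class);
        job.setReducerClass(CarReducer.class);
        // The mapper and reducer emit different value types, so declare both.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}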


What to learn next#

With this introduction to Big Data, you’re prepared to start practicing with common data science tools and advanced analytical concepts.

Some next steps to look at are:

  • Explore the Hadoop Distributed File System (HDFS)

  • Build a model using Apache Spark

  • Generate findings using MapReduce

  • Familiarize yourself with different input/output formats

To help you master these skills and continue your Big Data journey, Educative has created the course Introduction to Big Data and Hadoop. This course will give you hands-on practice with Hadoop, Spark, and MapReduce, tools used by data scientists every day.

By the end, you’ll have used your learning to complete a Big Data project from beginning to end that you can use on your resume.

Happy learning!




Written By:
Ryan Thelin
 