Design of a Web Crawler
Get an overview of the building blocks and components of the web crawler system, and learn how they interact in the design of a web crawler.
Design
This lesson describes the building blocks and additional components involved in the design and workflow of the web crawling process, in line with its requirements.
Components
Here are the details of the building blocks and the components needed for our design:
- Scheduler: This is one of the key building blocks; it schedules URLs for crawling. It's composed of two units: a priority queue and a relational database.
  - A priority queue (URL frontier): The queue hosts URLs that are made ready for crawling based on the two properties associated with each entry (a minimal frontier sketch follows this list):
    - Priority: As a requirement, we need to assign variable priorities to URLs depending on their content. This attribute defines the precedence of a URL while it's in the URL frontier.
    - Updates frequency: For recrawling purposes, we need to define the recrawl frequency for each URL. This attribute ensures a defined number of placements in the URL frontier for each URL.
  - Relational database: It stores all the URLs along with the two associated parameters mentioned above (a schema sketch also follows this list). The database is populated by new requests from the following two input streams:
    - The user's added URLs, which include seed and runtime-added URLs.
    - The crawler's extracted URLs.
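To make the two frontier attributes concrete, here is a minimal Python sketch of a priority queue where each entry carries a priority (its precedence in the frontier) and an updates frequency (its recrawl interval). The class and parameter names are illustrative assumptions, not part of any specific crawler implementation:

```python
import heapq

class URLFrontier:
    """Minimal URL frontier sketch: a priority queue where each entry
    carries a priority and an updates (recrawl) frequency."""

    def __init__(self):
        self._heap = []   # entries: (priority, seq, url, recrawl_interval_s)
        self._seq = 0     # tie-breaker so equal-priority URLs pop FIFO

    def enqueue(self, url, priority, recrawl_interval_s):
        # Lower numbers mean higher precedence with Python's heapq.
        heapq.heappush(self._heap, (priority, self._seq, url, recrawl_interval_s))
        self._seq += 1

    def dequeue(self):
        # Pop the highest-precedence URL made ready for crawling.
        priority, _, url, interval = heapq.heappop(self._heap)
        return url, priority, interval

# Usage: seed the frontier, crawl, then re-enqueue for recrawling.
frontier = URLFrontier()
frontier.enqueue("https://example.com", priority=1, recrawl_interval_s=3600)
frontier.enqueue("https://example.org/news", priority=0, recrawl_interval_s=600)

url, prio, interval = frontier.dequeue()   # pops the news URL first
print(url)
# After crawling, the updates frequency drives the URL's next placement:
frontier.enqueue(url, priority=prio, recrawl_interval_s=interval)
```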
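The scheduler's database can likewise be sketched as a single relation that stores each URL alongside its two scheduling parameters and records which input stream it came from. The table and column names below are assumptions for illustration, and SQLite stands in for a full relational database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the scheduler's RDBMS
conn.execute("""
    CREATE TABLE urls (
        url               TEXT PRIMARY KEY,
        priority          INTEGER NOT NULL,  -- precedence in the URL frontier
        updates_frequency INTEGER NOT NULL,  -- recrawl interval in seconds
        source            TEXT NOT NULL      -- 'user' (seed/runtime) or 'crawler'
    )
""")

# Input stream 1: the user's added URLs (seed and runtime-added).
conn.execute("INSERT OR IGNORE INTO urls VALUES (?, ?, ?, ?)",
             ("https://example.com", 1, 3600, "user"))

# Input stream 2: URLs the crawler extracts from fetched pages.
conn.execute("INSERT OR IGNORE INTO urls VALUES (?, ?, ?, ?)",
             ("https://example.com/about", 2, 86400, "crawler"))

# The scheduler reads URLs from here to populate the priority queue.
for row in conn.execute(
        "SELECT url, priority, updates_frequency FROM urls ORDER BY priority"):
    print(row)
```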