
Design the Uber backend: System design walkthrough

Educative
Aug 29, 2024
8 min read
Contents
Uber backend: problem overview
Constraints
Uber backend: System design
DriverLocationHT
Broadcasting driver locations
Notification service
Use case: request ride
Advanced issues and considerations
Fault tolerance and replication
Ranking
Other issues to consider
What to learn next
Continue learning about Uber and system design

Designing a ride-sharing service like Uber or Lyft is a common system design interview question. These ride-sharing services match users with drivers. Users input a destination and send their current location, and nearby drivers are notified within seconds.

Today, we’ll discuss how to design Uber’s backend. Whether you’re preparing for a system design interview or more specifically, an Uber data science interview, we hope you enjoy this walkthrough.

Uber backend: problem overview

Uber aims to make transportation easy and reliable. The popular smartphone app handles high traffic and complex data and systems. Uber enables customers to book affordable rides with drivers in their personal cars. The customers and drivers communicate using the Uber app.

There are two kinds of users that our system should account for: Drivers and Customers.

What we know about our system requirements is:

  • Drivers must be able to frequently notify the service regarding their current location and availability
  • Passengers should be able to see all nearby drivers in real-time
  • Customers can request a ride using a destination and pickup time.
  • Nearby drivers should be notified when a customer needs to be picked up.
  • Once a ride is accepted, both the driver and customer must see the other’s current location for the entire duration of the trip.
  • Once the trip is complete, the driver marks the ride as finished and becomes available for another customer.

Similar services: Lyft, Didi, Via, Sidecar, etc.

Difficulty level: Hard

Helpful prerequisite: Designing Yelp

Constraints

Constraints (i.e. capacity estimations and system limitations) are some of the most important considerations to make before designing Uber’s backend system. Constraints will generally differ depending on time of day and location.

We’ll design our build with the following constraints and estimations:

  • 300 million customers and one million drivers in the system
  • One million active customers and 500 thousand active drivers per day
  • One million rides per day
  • All active drivers notify their current location every three seconds
  • System contacts drivers in real time when customer puts in a ride request

Uber backend: System design

For our Uber solution, we will be referencing our answer to another popular system design interview question: Designing Yelp. Please take a minute to check our Yelp solution if you’re not familiar with it.

We would need to make modifications to align with our Uber system and its requirements. For instance, our QuadTree must be adapted for frequent updates.

A few issues arise if we use the Dynamic Grid solution from our Yelp problem:

  • We need to update data structures to reflect active drivers’ reported locations every three seconds. To update a driver to a new location, we must find the right grid based on the driver’s previous location.
  • If the new position doesn’t belong to the current grid, we remove the driver from the current grid and reinsert them to the right grid. If the new grid reaches a maximum limit, we have to repartition it.
  • We need a quick mechanism to propagate the current location of nearby drivers to customers in the area. Our system needs to notify both the driver and customer on the car’s location throughout the ride’s duration.

In these cases, a QuadTree alone is not ideal. To serve only fresh data, the tree would have to be modified on every driver update (every three seconds per active driver), and we can't guarantee it can absorb writes that quickly.

We could keep the most recent driver position in a hash table and update our QuadTree less frequently. We want to guarantee that a driver’s current location is reflected in the QuadTree within 15 seconds. We maintain a hash table that will store the current driver location. We can call it DriverLocationHT.

DriverLocationHT

We need to store DriverID in the hash table, along with the driver’s current and previous location. This means we’ll need 35 bytes to store each record:

  1. DriverID (3 bytes for 1 million drivers)
  2. Old latitude (8 bytes)
  3. Old longitude (8 bytes)
  4. New latitude (8 bytes)
  5. New longitude (8 bytes)

This totals 35 bytes.
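As a rough illustration of this layout, here is how such a 35-byte record could be packed in Python. The field order and the `struct` format string are assumptions for the sketch, not Uber's actual wire format:

```python
import struct

# One DriverLocationHT record: a 3-byte DriverID plus four 8-byte doubles
# (old latitude, old longitude, new latitude, new longitude).
# '=' uses standard sizes with no alignment padding, so the size is exact.
RECORD = struct.Struct("=3s4d")

def pack_record(driver_id: int, old_lat: float, old_lng: float,
                new_lat: float, new_lng: float) -> bytes:
    return RECORD.pack(driver_id.to_bytes(3, "big"),
                       old_lat, old_lng, new_lat, new_lng)

record = pack_record(123456, 37.7749, -122.4194, 37.7750, -122.4189)
print(RECORD.size)  # 35 bytes per record
```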

Since we assume one million drivers, we’ll require the following memory:

1 million * 35 bytes => 35 MB

Now let’s discuss bandwidth. If we get the DriverID and location, it will require 3 + 16 => 19 bytes. This information is received every three seconds from 500 thousand daily active drivers, so we receive 9.5 MB every three seconds.
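These back-of-the-envelope numbers are easy to verify in a few lines; the constants come straight from the estimates above:

```python
DRIVERS = 1_000_000
ACTIVE_DRIVERS = 500_000
RECORD_BYTES = 35      # DriverID + two (lat, lng) pairs
UPDATE_BYTES = 3 + 16  # DriverID + one (lat, lng) pair

# Total hash table memory across all drivers.
memory_mb = DRIVERS * RECORD_BYTES / 1_000_000

# Inbound location updates per three-second window.
ingress_mb = ACTIVE_DRIVERS * UPDATE_BYTES / 1_000_000

print(memory_mb)   # 35.0 MB for DriverLocationHT
print(ingress_mb)  # 9.5 MB received every three seconds
```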

To randomize distribution, we could distribute DriverLocationHT on multiple servers based on the DriverID. This will help with scalability, performance, and fault tolerance. We will refer to the machines holding this information as the Driver Location servers.

These servers will do two more things. Once the server receives an update on a driver’s location, it will broadcast that information to relevant customers. The server will also notify the respective QuadTree server to refresh the driver’s location.
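Partitioning DriverLocationHT by DriverID can be as simple as hashing the ID to pick a server. The server count and hash choice below are illustrative; a production system would more likely use consistent hashing so servers can be added or removed without remapping every driver:

```python
import hashlib

NUM_SERVERS = 16  # hypothetical number of Driver Location servers

def location_server(driver_id: str, num_servers: int = NUM_SERVERS) -> int:
    """Deterministically map a DriverID to one Driver Location server,
    so all updates for a given driver land on the same machine."""
    digest = hashlib.sha1(driver_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_servers

server = location_server("driver-8472")
```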

Broadcasting driver locations

We’ll need to broadcast driver locations to our customers. We can use a Push Model so that the server pushes positions to relevant users. We can use a Notification Service and build it on the publisher/subscriber model.

When a customer opens the Uber app, they’ll query the server to find nearby drivers. On the server side, we subscribe the customer to all updates from nearby drivers. Each update in a driver’s location in DriverLocationHT will be broadcast to all subscribed customers. This ensures that each driver’s current location is displayed.
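The publisher/subscriber flow above can be sketched as follows. `NotificationService`, its `push` callback, and the IDs are illustrative assumptions standing in for a real push transport, not Uber's actual API:

```python
from collections import defaultdict

class NotificationService:
    """Pub/sub sketch: customers subscribe to the drivers visible on their
    screen; every DriverLocationHT update is pushed to those subscribers."""

    def __init__(self, push):
        self.subscribers = defaultdict(set)  # driver_id -> {customer_id}
        self.push = push                     # transport (e.g. long poll, APNs)

    def subscribe(self, customer_id, driver_id):
        self.subscribers[driver_id].add(customer_id)

    def publish_location(self, driver_id, lat, lng):
        # Broadcast the fresh location to every customer watching this driver.
        for customer_id in self.subscribers[driver_id]:
            self.push(customer_id, (driver_id, lat, lng))

# Usage: collect pushes in a list instead of a real network channel.
sent = []
svc = NotificationService(push=lambda cid, msg: sent.append((cid, msg)))
svc.subscribe("cust-1", "drv-9")
svc.publish_location("drv-9", 37.77, -122.41)
```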

We’ve assumed one million active customers and 500 thousand active drivers per day. Let’s assume that five customers subscribe to one driver. We’ll store this information in a hash table for quick updates.

We need to store both driver and customer IDs. We need three bytes for DriverID and eight bytes for CustomerID, so we will need 21MB of memory.

(500,000 * 3) + (500,000 * 5 * 8) ~= 21 MB

Now for bandwidth. For every active driver, we have five subscribers. In total, this reaches:

5 * 500,000 => 2.5 million

We need to send DriverID (3 bytes) and their location (16 bytes) every second, which requires:

2.5 million * 19 bytes => 47.5 MB/s
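The subscriber-table and broadcast numbers check out the same way, again using the assumed field sizes from this section:

```python
ACTIVE_DRIVERS = 500_000
SUBSCRIBERS_PER_DRIVER = 5
DRIVER_ID, CUSTOMER_ID, LOCATION = 3, 8, 16  # bytes

# Subscription table: one DriverID per driver plus five CustomerIDs each.
table_mb = (ACTIVE_DRIVERS * DRIVER_ID
            + ACTIVE_DRIVERS * SUBSCRIBERS_PER_DRIVER * CUSTOMER_ID) / 1_000_000

# Outbound broadcasts: DriverID + location to every subscription, per second.
subscriptions = ACTIVE_DRIVERS * SUBSCRIBERS_PER_DRIVER
egress_mb_per_s = subscriptions * (DRIVER_ID + LOCATION) / 1_000_000

print(table_mb)         # 21.5, i.e. ~21 MB of memory
print(egress_mb_per_s)  # 47.5 MB/s of broadcast bandwidth
```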



Notification service

To efficiently implement the Notification service, we can either use HTTP long polling or push notifications. Customers are subscribed to nearby drivers when they open the Uber app for the first time.

When new drivers enter their areas, we need to add new customer/driver subscriptions dynamically. To do so, we would have to track the area each customer is watching, which adds significant complexity.

Instead of pushing this information, we can design the system so customers pull the information from the server. Customers will send their current location so that the server can find nearby drivers from our QuadTree. The customer can then update their screen to reflect drivers’ current positions.

Customers should query the server at most every five seconds. This limits the number of round trips to the server.
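A minimal sketch of this client-side pull loop, with `fetch` and `render` as stand-ins for the real RPC to the QuadTree servers and the real UI code (the loop is bounded here so the sketch terminates):

```python
import time

POLL_INTERVAL_S = 5  # cap round trips to one every five seconds

def poll_nearby_drivers(fetch, render, polls, interval=POLL_INTERVAL_S):
    """Pull model: fetch() sends the customer's current location and
    returns nearby drivers; render() redraws their positions on screen."""
    for _ in range(polls):
        render(fetch())
        time.sleep(interval)
```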

To repartition, we can create a cushion so that each grid grows beyond the limit before we decide to partition it. Assume our grids can grow or shrink by an extra 10% before we partition them. This will decrease the load for a grid partition.
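The cushion check itself is tiny. The 10% figure comes from the text above; the grid limit of 500 drivers is an assumed value for the sketch:

```python
GRID_LIMIT = 500  # hypothetical maximum drivers per grid
CUSHION = 0.10    # allow 10% overflow before repartitioning

def should_partition(driver_count: int) -> bool:
    """Only split a grid once it exceeds the limit plus the cushion, so
    drivers hovering near a boundary don't trigger constant repartitioning."""
    return driver_count > GRID_LIMIT * (1 + CUSHION)
```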


Use case: request ride

We’ll summarize how this use case works below.

  • The customer enters a ride request.
  • One of the Aggregator servers accepts the request and asks the QuadTree servers to return nearby drivers.
  • The Aggregator server collects the results and sorts them by ratings.
  • The Aggregator server sends a notification to the top drivers simultaneously.
  • The first driver to accept will be assigned that ride. The other drivers will receive a cancellation.
  • If none of the contacted drivers respond, the Aggregator offers the ride to the next drivers on the list.
  • The customer is notified once a driver accepts a request.
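The dispatch steps above can be sketched as follows. `quadtree.nearby_drivers`, `notifier.offer_ride`, and `notifier.cancel` are hypothetical interfaces standing in for the real QuadTree and Notification services:

```python
def request_ride(quadtree, notifier, pickup, batch_size=3, max_rounds=2):
    """Aggregator-side dispatch: gather nearby drivers, sort by rating,
    offer the ride to a batch at a time, and assign the first acceptor."""
    candidates = sorted(quadtree.nearby_drivers(pickup),
                        key=lambda d: d["rating"], reverse=True)
    for start in range(0, min(len(candidates), batch_size * max_rounds),
                       batch_size):
        batch = candidates[start:start + batch_size]
        accepted = notifier.offer_ride(batch, pickup)  # first accept wins, else None
        if accepted is not None:
            # Everyone else in the batch receives a cancellation.
            notifier.cancel([d for d in batch if d is not accepted])
            return accepted
    return None  # no driver accepted; surface the failure to the customer
```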

Advanced issues and considerations

Fault tolerance and replication

We will need server replicas in case the Driver Location or Notification servers die. A secondary server can take control when a primary server dies. We can also store data in persistent storage like solid state drives (SSDs) to provide fast input and output. We can quickly use this persistent storage to recover data in the event that both primary and secondary servers die.

Ranking

Uber also provides a ranking system for drivers. A customer can rate a driver according to wait times, courtesy, and safety. Let’s say we want to rank search results by popularity or relevance as well as proximity.

To do this, we need to return top-rated drivers within a given radius. Assume we track drivers’ ratings in a database and QuadTree. An aggregated number will represent popularity in the system based on these ratings. The system will search for the top 10 drivers in a given radius, while we ask each partition of the QuadTree to return the top drivers with a specified rating. The aggregator server will determine the top 10 drivers among all drivers returned by different partitions.
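Merging per-partition results into a global top 10 is a classic top-K merge. Here is a sketch using a heap, with `(driver_id, rating)` tuples as an assumed candidate format:

```python
import heapq

def top_drivers(partition_results, k=10):
    """Aggregator-side merge: flatten each QuadTree partition's candidate
    list and return the k highest-rated drivers overall."""
    merged = (driver for part in partition_results for driver in part)
    return heapq.nlargest(k, merged, key=lambda d: d[1])
```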

Other issues to consider

  • Clients using slow or disconnecting networks
  • Clients that are disconnected during the duration of a ride
  • How to handle billing if a ride is disconnected
  • How to implement machine learning components in your system


What to learn next

Now you know how to design Uber’s backend. This is a common question asked in system design interviews at top tech companies. There are many more common system design questions that your interviewers may ask you, such as designing TinyUrl.

To help you become a systems design pro, Educative has curated the Grokking Modern System Design learning path. This path includes lessons on implementing microservices, using AWS architecture, and designing common systems for interviews. You’ll cover everything you need to know to design scalable systems for enterprise-level software.

Happy learning!

Continue learning about Uber and system design


  
