Comment API Design Evaluation and Latency Budget
Learn how we meet the non-functional requirements through our proposed design of the comment API.
Introduction
In the preceding lessons, we met the functional requirements that we set earlier for the comment service API. This lesson focuses on the non-functional requirements and how we meet them. We'll also explain some of the tradeoffs that arise while meeting different non-functional requirements.
Non-functional requirements
Let's discuss how we fulfill the non-functional requirements of the API for the commenting service.
Scalability
Our API should be stateless so that it can handle a large number of concurrent requests. Thankfully, HTTP gives us this ability. We don't want a stateful API because it would keep per-user session data on the server, which becomes a bottleneck to scalability.
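To make the idea concrete, the sketch below shows a stateless comment-creation request in Python. The base URL, endpoint path, and token handling here are illustrative assumptions rather than the exact contract of our API; the point is that every request carries the credentials and data it needs, so any server instance can handle it without remembering the client.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical values for illustration only.
API_BASE = "https://api.example.com/v1"
ACCESS_TOKEN = "<user-access-token>"  # obtained separately, e.g., via OAuth

def post_comment(post_id: str, text: str) -> dict:
    """Send a self-contained, stateless request to create a comment."""
    response = requests.post(
        f"{API_BASE}/posts/{post_id}/comments",
        json={"text": text},
        # The token travels with every request; no session lives on the server.
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()
```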
Furthermore, we consider a relational database the right choice because comment data is structured; that is, a comment has predefined attributes that can be stored in tabular form. Relational databases can also be scaled horizontally when required, providing adequate performance in most cases.
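As a rough illustration of why comment data fits a relational store, here is a minimal schema sketch (using SQLite for brevity). The table and column names are assumptions chosen for this example, not the service's actual schema.

```python
import sqlite3

# Illustrative comments table: each comment has a fixed set of attributes,
# and replies reference their parent comment.
SCHEMA = """
CREATE TABLE IF NOT EXISTS comments (
    comment_id  TEXT PRIMARY KEY,
    post_id     TEXT NOT NULL,
    user_id     TEXT NOT NULL,
    parent_id   TEXT,                                   -- NULL for top-level comments
    body        TEXT NOT NULL,
    created_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (parent_id) REFERENCES comments (comment_id)
);
CREATE INDEX IF NOT EXISTS idx_comments_post ON comments (post_id, created_at);
"""

connection = sqlite3.connect("comments.db")
connection.executescript(SCHEMA)
connection.commit()
```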
Comments are processed asynchronously, which helps prevent long queues of operations (requests) from building up in front of the back-end servers, as shown in the following figure.
In the asynchronous approach, the API gateway validates and authenticates the client's request. The request is then forwarded to the back-end servers, and the client is acknowledged at the same time. While the request executes, the client can work on other tasks; it's notified only if an error occurs during server-side processing.
In the synchronous approach, the request is forwarded to the back-end servers after successful validation. The back-end server processes the request while the client waits, blocked until the response is sent back.
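The sketch below mimics the asynchronous path in a few lines of Python: a gateway-side handler validates the request, enqueues it, and acknowledges the client right away, while a separate worker persists the comment later. The in-memory queue, function names, and response shape are illustrative assumptions standing in for a real message queue and back-end servers.

```python
import queue
import threading
import uuid

# In-memory stand-in for a durable message queue between gateway and back end.
request_queue: "queue.Queue[dict]" = queue.Queue()

def handle_comment_request(user_id: str, post_id: str, text: str) -> dict:
    """Gateway-side handler: validate, enqueue, and acknowledge immediately."""
    if not text.strip():
        return {"status": "rejected", "reason": "empty comment"}
    request_id = str(uuid.uuid4())
    request_queue.put({"id": request_id, "user": user_id,
                       "post": post_id, "text": text})
    # The client is acknowledged (e.g., HTTP 202) before the comment is
    # persisted, and would be notified later only if processing fails.
    return {"status": "accepted", "request_id": request_id}

def worker() -> None:
    """Back-end worker: processes queued requests asynchronously."""
    while True:
        req = request_queue.get()
        print(f"persisting comment {req['id']} on post {req['post']}")
        request_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
print(handle_comment_request("u42", "p7", "Nice post!"))  # returns at once
request_queue.join()  # wait for the worker to drain the queue before exiting
```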
Point to Ponder
What happens if the same user issues conflicting (concurrent) requests for the same operation, for example, deleting the same comment from both their computer and their hand-held device?
Availability
The API gateway acts as a bridge between the client and the back-end system, so if the API gateway fails, the back-end system can't receive and process requests. Therefore, the API gateway's availability is crucial.
We achieve high availability of our APIs by applying rate limiting, monitoring, and automatic recovery. Rate limiting helps allocate the request quota evenly among users; for example, a user can post only a certain number of comments per unit of time. Additionally, proper API monitoring and alerting mechanisms help analyze incoming and outgoing traffic to our API. Such mechanisms produce statistics that help catch any activity that could bring our system down.
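To make the rate-limiting idea concrete, here is a minimal per-user, fixed-window limiter sketch. The quota of five comments per minute is an assumption for illustration; a production gateway would typically keep these counters in a shared store (for example, Redis) so that all gateway instances see the same quota.

```python
import time
from collections import defaultdict

# Illustrative thresholds: at most MAX_COMMENTS comments per WINDOW_SECONDS.
MAX_COMMENTS = 5
WINDOW_SECONDS = 60

# user_id -> (window_start, count); in-memory for this sketch only.
_windows: dict = defaultdict(lambda: (0, 0))

def allow_comment(user_id: str, now: float = None) -> bool:
    """Return True if the user is still within their posting quota."""
    now = time.time() if now is None else now
    window_start = int(now // WINDOW_SECONDS)
    start, count = _windows[user_id]
    if start != window_start:            # a new window begins; reset the counter
        _windows[user_id] = (window_start, 1)
        return True
    if count < MAX_COMMENTS:
        _windows[user_id] = (start, count + 1)
        return True
    return False                         # quota exhausted; reject or delay

# Example: the sixth comment within the same minute is rejected.
print([allow_comment("u42", now=100.0) for _ in range(6)])
```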
Similarly, when an incident occurs, the time it takes to recover can have a huge impact on our system’s availability. It is crucial to have an