# Adding Benchmarks - Part II
Learn how to add benchmarks for comparing pure and impure implementations.
## Metrics Recap
To reiterate, we will be using the following metrics to benchmark our endpoints.
- AVG: The average response time in milliseconds
- MED: The median response time in milliseconds
- 90%: The response time in milliseconds within which 90 percent of all requests were handled
- 95%: The response time in milliseconds within which 95 percent of all requests were handled
- 99%: The response time in milliseconds within which 99 percent of all requests were handled
- MIN: The minimum response time in milliseconds
- MAX: The maximum response time in milliseconds
- ERR: The error rate in percent
- R/S: The number of requests per second that could be handled
- MEM: The maximum amount of memory used by the service during the benchmark in MB
- LD: The average system load on the service machine during the benchmark
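As a sketch of how the latency metrics above are derived, the following computes them from a list of measured response times using nearest-rank percentiles. The function name and the sample data are illustrative, not part of the course's benchmarking tooling:

```python
import statistics

def summarize(response_times_ms):
    """Compute the latency metrics used in the benchmark tables
    (AVG, MED, 90%/95%/99% percentiles, MIN, MAX)."""
    ordered = sorted(response_times_ms)

    def percentile(p):
        # Nearest-rank percentile: the response time that p percent
        # of all requests stayed at or below.
        index = max(0, round(p / 100 * len(ordered)) - 1)
        return ordered[index]

    return {
        "AVG": statistics.mean(ordered),
        "MED": statistics.median(ordered),
        "90%": percentile(90),
        "95%": percentile(95),
        "99%": percentile(99),
        "MIN": ordered[0],
        "MAX": ordered[-1],
    }

# Hypothetical measurements, in milliseconds:
print(summarize([10, 20, 30, 40, 50, 60, 70, 80, 90, 100]))
```

MEM and LD are not derived from response times; they come from monitoring the service machine itself during the run.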
## Update 100,000 products
Metric | Impure | Pure
---|---|---
AVG (ms) | 78 | 12
MED (ms) | 75 | 11
90% (ms) | 104 | 16
95% (ms) | 115 | 20
99% (ms) | 140 | 34
MIN (ms) | 42 | 5
MAX (ms) | 798 | 707
ERR (%) | 0 | 0
R/S | 125.66 | 765.26
MEM (MB) | 1176 | 1279
LD | 16 | 8
Updating existing products paints nearly the same picture as the "create products" benchmark. Interestingly, the impure service performs about 20% better on updates than on creates.

The other metrics are nearly identical to the first benchmark. The pure service uses slightly more memory (around 9%) but is roughly six times faster than the impure one, while causing only half the system load.
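The ratios quoted here follow directly from the table; a quick check with the table's figures:

```python
# Figures taken from the update benchmark table above.
impure = {"AVG": 78, "R/S": 125.66, "MEM": 1176, "LD": 16}
pure = {"AVG": 12, "R/S": 765.26, "MEM": 1279, "LD": 8}

print(f"Latency speedup: {impure['AVG'] / pure['AVG']:.1f}x")            # 6.5x
print(f"Throughput gain: {pure['R/S'] / impure['R/S']:.1f}x")            # 6.1x
print(f"Memory overhead: {(pure['MEM'] / impure['MEM'] - 1) * 100:.0f}%")  # 9%
print(f"Load ratio:      {pure['LD'] / impure['LD']:.1f}")               # 0.5
```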