...


Rolling Back Abort Failures


This lesson shows how to roll back the abort failures we injected into network requests.

Sending fake requests ourselves

We are going to forget about experiments for a moment and see what happens when we send requests to the application ourselves. We'll dispatch ten requests to repeater.acme.com, the same address used in the experiment. To be more precise, we'll pretend that we're sending requests to repeater.acme.com, while the "real" address will be the Istio Ingress Gateway host.

for i in {1..10}; do
curl -H "Host: repeater.acme.com" "http://$INGRESS_HOST?addr=http://go-demo-8"
echo ""
done

To make the output more readable, the loop prints an empty line after each request.

The output, in my case, is as follows.

fault filter abort
Version: 0.0.1; Release: unknown

fault filter abort
fault filter abort
Version: 0.0.1; Release: unknown

fault filter abort
fault filter abort
Version: 0.0.1; Release: unknown

Version: 0.0.1; Release: unknown

fault filter abort

We can see that some of the requests returned fault filter abort. Those are the requests that were aborted. Don't take the 50% figure literally, though: other requests are happening inside the cluster, so the number of failures in this output might not be exactly half. Think of it as approximately 50%.
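If we want to quantify the failure rate instead of eyeballing it, we can save the responses to a file and count the matches. A minimal sketch, using a hypothetical responses.txt populated with sample output like the one above:

```shell
# Hypothetical example: save responses like the ones above into a file.
# In practice you would redirect the curl loop's output here instead.
printf '%s\n' \
  "fault filter abort" \
  "Version: 0.0.1; Release: unknown" \
  "fault filter abort" \
  "fault filter abort" > responses.txt

# Count how many requests were aborted by the fault filter.
grep -c "fault filter abort" responses.txt
```

With ten real requests and a 50% abort rate, the count should hover around five, but it will vary from run to run.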

What matters is that some requests were aborted, and others were successful. That is very problematic for at least two reasons.

  1. First, the experiment showed that our application cannot handle aborted network requests. If a request is terminated (and that is inevitable), our app does not know how to deal with it.

  2. The second issue is that we did not roll back our change. Therefore, the injected faults are still present even after the chaos experiment has finished. We can see that by inspecting the Virtual Service.
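For reference, the fault injection that produces these aborts lives in an Istio VirtualService resource. A minimal sketch of what such a manifest might look like, assuming the host name used in this lesson (the resource name, destination host, and exact manifest in the experiment may differ):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: repeater            # hypothetical resource name
spec:
  hosts:
  - repeater.acme.com
  http:
  - fault:
      abort:
        percentage:
          value: 50         # abort roughly half of the requests
        httpStatus: 500     # respond with HTTP 500 to aborted requests
    route:
    - destination:
        host: repeater      # hypothetical destination service
```

As long as the fault section remains in the applied Virtual Service, Istio keeps aborting requests, which is exactly why failing to roll back the experiment is a problem.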

Checking Virtual Service

We’ll ...
