Upgrading the Cluster Manually: Exploring and Verifying the Output
In this lesson, we will first explore the sequence of events triggered by the rolling update and then verify that the Kubernetes version changed.
Exploring the Sequence of Events
The rolling update finished, and the output starts with the same information we got when we asked for a preview, so there is not much to comment on there.
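As a reminder, the preview and the actual update come from the same command; a minimal sketch, assuming kops is already configured with the cluster name and state store (both omitted here):

```shell
# Preview which nodes would be replaced; without --yes this is a dry run
# and no changes are made to the cluster.
kops rolling-update cluster

# Apply the rolling update for real; kops drains and replaces the nodes
# one by one, starting with the masters.
kops rolling-update cluster --yes
```

The `--yes` flag is what separates the earlier preview from the update whose output we are exploring now.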
I0225 23:03:03.993068 1 instancegroups.go:130] Draining the node: "ip-172-20-40-167...".
node "ip-172-20-40-167..." cordoned
node "ip-172-20-40-167..." cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: etcd-server-events-ip-172-20-40-167..., etcd-server-ip-172-20-40-167..., kube-apiserver-ip-172-20-40-167..., kube-controller-manager-ip-172-20-40-167..., kube-proxy-ip-172-20-40-167..., kube-scheduler-ip-172-20-40-167...
node "ip-172-20-40-167..." drained
Instead of destroying the first node right away, kops picked one of the masters and drained it. That way, the applications running on it could shut down gracefully. We can see that it drained the following Pods:
etcd-server-events
etcd-server
kube-apiserver
kube-controller-manager
kube-proxy
kube-scheduler
These are the Pods running on the master. The WARNING in the output appears because they are static Pods, not managed by a ReplicationController, ReplicaSet, Job, DaemonSet, or StatefulSet.
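Draining is not specific to kops; it is the same operation we could perform ourselves with kubectl. A minimal sketch, assuming a node name taken from the output of `kubectl get nodes` (the `<node-name>` placeholder is ours):

```shell
# List the nodes to find the one we want to drain (names are cluster-specific).
kubectl get nodes

# Cordon the node and evict its Pods. --force is required for Pods that are
# not managed by a controller, mirroring the WARNING in the kops output;
# --ignore-daemonsets skips DaemonSet-managed Pods, which cannot be evicted.
kubectl drain <node-name> --force --ignore-daemonsets
```

kops runs this drain step for us on each node before replacing it, which is why the output shows the node being cordoned and then drained.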