Kubernetes Node tainted, blocked Pods triggering a 503 (Service Unavailable).

LESSON LEARNED: Set up proper alerting and follow monitoring best practices so you can act rapidly when an issue or outage occurs. Robust tooling like Prometheus and Grafana must be properly configured and actively watched. Proper runbooks and on-call policies are a must.

“Wisdom is not a product of schooling but of the lifelong attempt to acquire it.”

Platform/Client

Context / Situation

Our infrastructure runs on Amazon Elastic Kubernetes Service (EKS), which is (mostly) managed by AWS and scales automatically with the current load. On it we deploy not only our customer-facing services but also internal self-hosted tooling.

One of those tools is Metabase, a key piece of our Business Intelligence and Data Analysis stack. Like the rest of the system, this platform requires high availability so that our Data Team can interpret and understand metrics (KPIs) in order to make the right business decisions.

Issue Summary

A single event made the Metabase service unavailable, triggering an HTTP 503 error:

Even though the Metabase pod seemed to be healthy (status: Running), there was NO WAY to access it. To make matters worse, there was neither a member of the infrastructure team available nor a runbook with instructions to solve the problem.
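
For context, these are the kinds of checks that expose the gap between a Running pod and an unreachable service. This is only a sketch: the metabase namespace, the app=metabase label, and the node name are assumed placeholders, not our actual values.

```
# Hypothetical names: we assume Metabase runs as Deployment "metabase"
# in namespace "metabase" with the label app=metabase; adjust as needed.

# The pod may show Running, but is it Ready, and on which node is it?
kubectl get pods -n metabase -o wide

# Does the Service have any endpoints? An empty list here is usually why
# the ingress/load balancer answers with a 503 while the pod looks fine.
kubectl get endpoints metabase -n metabase

# Recent events often reveal readiness-probe failures or eviction attempts.
kubectl describe pod -n metabase -l app=metabase

# Finally, inspect the node itself for taints and pressure conditions.
kubectl describe node <node-name> | grep -i -A 3 taints
```

An empty endpoints list behind the Service is the typical reason an ingress or load balancer starts answering with 503s while the pod itself still reports Running.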

Process

  • No alarm or notification was received about an error on a tool that is critical for the customer.
  • When trying to restart the Metabase pod, there was no clear, specific documentation about how to proceed (a minimal runbook sketch for these steps follows this list).
  • If restarting the pod did not solve the issue, there were no clear instructions on how to restart the node hosting the pod.
  • If restarting the node did not solve the issue, there was no further documentation either.
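
A minimal version of that missing runbook could look like the sketch below. Resource names (the metabase Deployment and namespace) are assumptions for illustration; the node placeholder must be replaced with the real node name.

```
# Step 1 - restart the Metabase pod (resource names are assumptions):
kubectl rollout restart deployment/metabase -n metabase
kubectl rollout status deployment/metabase -n metabase

# Step 2 - if that is not enough, find the node hosting the pod and take
# it out of rotation (the actual drain is shown under Resolution below):
kubectl get pods -n metabase -o wide     # the NODE column shows the host
kubectl cordon <node-name>               # stop new Pods from landing on it
```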

Impact

As mentioned above, this tooling is critical for understanding key business metrics, so as a result of the outage some decision making had to be postponed, impacting our Customers and our Finance area.

Resolution

In Kubernetes, in specific situations like this one, it is NECESSARY to take a node out of service; in other words, it is possible to DRAIN the node, which means that the containers running on it are gracefully terminated (and potentially rescheduled on another node).
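
As a rough sketch (the node name is a placeholder), draining and later re-enabling a node looks like this:

```
# Mark the node unschedulable and evict its Pods so they are recreated elsewhere.
# DaemonSet Pods cannot be evicted, hence the --ignore-daemonsets flag.
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# Once the node is healthy again (or has been replaced), allow scheduling on it:
kubectl uncordon <node-name>
```

On EKS, terminating the drained instance and letting the node group replace it is also a common way to get a clean node back.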

Draining the affected node solved the problem.

Lessons Learned

  • Improve the alerting system so that alerts fire when a tool is down, before a user/customer/client finds out (see the example rule after this list).
  • Invest in robust tooling like Prometheus and Grafana, and make sure it is properly configured.
  • Set up proper runbooks and protocols so that issues can be solved by following clear and specific instructions.
  • On-call policies are a must in order to act rapidly on a critical issue.
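
To make the first point concrete, an availability alert does not need to be complicated. The snippet below is only an illustrative sketch: it assumes Prometheus already scrapes kube-state-metrics, that Metabase runs in a namespace called metabase, and that the rules file is loaded through Prometheus' usual rule configuration.

```
# Illustrative only: a Prometheus alerting rule that fires when no Metabase
# pod has been Ready for 5 minutes. Assumes kube-state-metrics is scraped;
# load the file through your rule_files / PrometheusRule configuration.
cat <<'EOF' > metabase-availability.rules.yaml
groups:
  - name: metabase-availability
    rules:
      - alert: MetabaseUnavailable
        expr: sum(kube_pod_status_ready{namespace="metabase", condition="true"}) < 1
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "No ready Metabase pod for 5 minutes - users are likely seeing 503s."
EOF
```

Routed through Alertmanager to a paging or chat channel, a rule like this would likely have surfaced the outage before the Data Team ever saw the 503.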

Supporting Information