DevOps Startup Heroes, vol. 1
As a young startup, we have to leverage our creativity to build secure and resilient infrastructure while keeping cost and budget in check. Here's a classic example I recently encountered.
Our goal was to protect our internet-facing services with a proper WAF, as it is a mandatory security control for SOC 2.
The obvious (some would say naive) approach is to use a paid service. Unfortunately, that is an expensive option, and it is also difficult to manage and maintain as part of our GitOps operations.
Looking for a creative and cost-effective alternative, I searched for solutions that would satisfy five top considerations:
…and this is what I came up with.
A native AWS solution that can easily integrate with our entire EKS cluster ecosystem.
We found that the only AWS load balancer that supports these capabilities is the ALB (Application Load Balancer), and the only way to manage ALBs from EKS is the AWS Load Balancer Controller.
The problem is that the AWS Load Balancer Controller creates a separate ALB instance for each Ingress object we create in the cluster, resulting in one physical load balancer per application or service that we expose to the internet - and that can get very expensive and hard to maintain.
Determined to find an optimal solution, we searched for an ingress controller that provides a single point of connection to the cluster (one load balancer for all incoming traffic to the cluster), and came across one of the most common ingress controllers out there - Nginx ingress controller.
The Nginx ingress controller, however, can only create and manage NLB-type load balancers, and unlike ALBs, NLBs cannot have a WAF attached.
An appropriate question would be: why not manually create the ALB and forward the traffic to the single NLB that the Nginx controller creates? The answer is that manual configurations are a hassle to deal with, and an NLB cannot be registered in an ALB target group directly - only via its static IP addresses.
Why not glue both ingress controllers together?
We decided to use the AWS load balancer controller to manage the ALB instance and its lifecycle, gaining WAF control, and to use the Nginx ingress controller to route traffic to the internal services.
To do this, we had to install both controllers in the cluster.
IMPORTANT — When doing so, make sure to deploy the Nginx ingress controller with a NodePort-type service only. That's the trick.
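As a sketch, installing both controllers with Helm could look like the following. Chart names and values reflect the upstream `eks-charts` and `ingress-nginx` Helm charts; the cluster name and namespaces are placeholders you would adapt to your environment.

```shell
# AWS Load Balancer Controller - will manage the ALB and its lifecycle
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=my-cluster        # placeholder: your EKS cluster name

# Nginx ingress controller, exposed as NodePort (the trick!) so it does
# NOT provision a cloud load balancer of its own
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort
```

The `controller.service.type=NodePort` value is what keeps the Nginx controller from creating its own NLB.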
Now that both ingress controllers were installed, we created the connection between the two: a Kubernetes Ingress object (using the ALB ingress class) that acts as the connector between them.
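A minimal sketch of that connector Ingress, assuming the Nginx controller was installed as above (the object name, namespace, and service name are examples): the ALB ingress class makes the AWS Load Balancer Controller provision the ALB, and its only backend is the Nginx controller's NodePort service.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-to-nginx
  namespace: ingress-nginx
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance   # register the NodePorts
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: ingress-nginx-controller   # the NodePort service from the Helm install
      port:
        number: 80
```

Using `defaultBackend` (rather than host rules) sends all traffic arriving at the ALB to Nginx, which then does the real routing.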
Notes:
We had to make sure the Nginx health check endpoint is enabled, so that the ALB (target group) can validate the communication with the Nginx ingress.
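For example, the health check can be pointed at the Nginx controller's health endpoint with annotations on the connector Ingress. The `/healthz` path is the common choice for the Nginx ingress controller; verify it against your controller's configuration.

```yaml
# Added to the connector Ingress annotations so the ALB target group
# probes Nginx's health endpoint instead of the default "/"
alb.ingress.kubernetes.io/healthcheck-path: /healthz
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
```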
It is important to create Web ACL (WAF) rules.
There are managed rule groups that are easy to apply, and AWS also provides some very useful free out-of-the-box ones (docs).
Once we created the web ACL, we needed to attach it to our ALB using the following annotation (docs).
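The attachment is a single annotation on the connector Ingress; the ACL ARN below is a placeholder for your own WAFv2 web ACL.

```yaml
# Attach an existing WAFv2 web ACL to the ALB managed by this Ingress
alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-web-acl/abcd1234
```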
We use a tool called ExternalDNS. Running it on the EKS cluster lets us annotate our services with the following annotation:
This annotation will create a CNAME DNS record that points to the load balancer that serves this service.
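The annotation looks like this (the hostname is an example); ExternalDNS creates the DNS record for it, pointing at the load balancer backing the annotated resource.

```yaml
external-dns.alpha.kubernetes.io/hostname: app.example.com
```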
And then we ran into a problem: we were using the Nginx ingress class on our Ingress objects, and since the Nginx ingress controller did not create a load balancer of its own, the annotation did not work.
We noticed that ExternalDNS also allows us to annotate the service and force the DNS value for the record:
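With the `target` annotation, the record's value is forced to the shared ALB's DNS name instead of being resolved from a load balancer that does not exist. Both the hostname and the ALB DNS name below are placeholders.

```yaml
external-dns.alpha.kubernetes.io/hostname: app.example.com
external-dns.alpha.kubernetes.io/target: my-shared-alb-1234567890.us-east-1.elb.amazonaws.com
```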
This way, all of our services will point to the same ALB and nginx will route the traffic to the right service (in our case, via hostname).
Traffic workflow
All the traffic that reaches the ALB is forwarded directly to the Nginx service; then, depending on the request, Nginx forwards the traffic to the right service.
To expose a service from the cluster to the internet, we create an Ingress object and use the Nginx ingress class on it (you can read more about it here).
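A sketch of such an application Ingress (the name, host, and backend service are placeholders): because the ingress class is `nginx`, it is the Nginx controller - not the ALB controller - that picks it up and routes by hostname.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com     # matches the ExternalDNS hostname annotation
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app    # the internal service to expose
                port:
                  number: 80
```

No new load balancer is created for this Ingress; traffic arrives via the single shared ALB and the Nginx NodePort service.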
Easy as that! :)