WAF With EKS, Using Only k8s Controllers

Hadar Sonego, Senior DevOps Engineer in Engineering

Devops Startup Heroes vol.1

As a young startup, we have to get creative to build secure and resilient infrastructure on a tight budget, and here’s a classic example I recently encountered.

Our goal was to ensure that we protect our internet-facing services with a proper WAF, as it is a mandatory security control for SOC 2.

The obvious (some would say naive) approach is to use a paid service. Unfortunately, that is an expensive option which is also difficult to manage and maintain as part of our GitOps operations.

Looking for a creative and cost-effective approach, I began evaluating other solutions against five top considerations:

  1. Cost effective
  2. Secure 
  3. Easy to use 
  4. Easy to maintain
  5. Easy to integrate with our existing infrastructure ecosystem

…and this is what I came up with.

A native AWS solution that can easily integrate with our entire EKS cluster ecosystem.

The problem

We found that the only AWS load balancer that supports attaching a WAF is the ALB (Application Load Balancer), and the only way to use ALBs on top of EKS is the AWS Load Balancer Controller.

The problem is that the AWS K8S LB controller will create an ALB instance for each Ingress object we create in our cluster, resulting in one physical load balancer per application or service that we want to expose to the internet - and that can get very expensive and hard to maintain.

Determined to find an optimal solution, we searched for an ingress controller that provides a single point of connection to the cluster (one load balancer for all incoming traffic to the cluster), and came across one of the most common ingress controllers out there - the Nginx ingress controller.


The Nginx ingress controller, however, can only create and manage NLBs (Network Load Balancers), and you can’t attach a WAF to an NLB the way you can to an ALB.

An appropriate question would be - why not manually create the ALB and forward the traffic to the single NLB that the Nginx controller creates? The answer is that manual configuration is a hassle to maintain, and there is no way to register an NLB in an ALB target group directly; you can only target it through static IP addresses.

The hybrid solution

Why not glue both ingress controllers together?

We decided to use the AWS Load Balancer Controller to manage the ALB instance and its life cycle (and so gain WAF support), and to use the Nginx ingress controller to route the traffic to the internal services.

To do this, we had to install both controllers in the cluster.
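
We installed both controllers with Helm (other installation methods work just as well). As a rough sketch, the values for the eks/aws-load-balancer-controller chart look something like this - the cluster name, region, VPC ID and the IRSA role ARN below are placeholders, not our actual configuration:

# values.yaml sketch for the eks/aws-load-balancer-controller Helm chart
# (cluster name, region, VPC ID and IRSA role ARN are placeholders)
clusterName: my-eks-cluster
region: eu-west-1
vpcId: vpc-0123456789abcdef0
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/aws-load-balancer-controller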

IMPORTANT — When doing so, make sure to expose the Nginx ingress controller only with a NodePort type service. That’s the trick.
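
For the Nginx side, the key part of the Helm values is switching the controller Service from the default LoadBalancer type to NodePort, so no extra NLB gets created. A minimal sketch (everything else can stay at the ingress-nginx chart defaults):

# values.yaml sketch for the ingress-nginx Helm chart
controller:
  service:
    type: NodePort   # the ALB will target these node ports instead of a cloud load balancer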

Now that both ingress controllers were installed, we created the connection between the two: a K8S Ingress object (with the ALB ingress class) that acts as the connector between them.


# Connector Ingress: handled by the AWS Load Balancer Controller (ingress class "alb"),
# with the Nginx ingress controller's NodePort service as its default backend.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn:       # ACM certificate ARN for HTTPS
    alb.ingress.kubernetes.io/healthcheck-path: /nginx-healthcheck   # must match the Nginx health check endpoint
    alb.ingress.kubernetes.io/scheme: internal       # or "internet-facing" for a public ALB
    alb.ingress.kubernetes.io/subnets:               # subnets to place the ALB in
    alb.ingress.kubernetes.io/wafv2-acl-arn:         # ARN of the WAFv2 Web ACL to attach
    kubernetes.io/ingress.class: alb
  name: alb-ingress-connect-nginx-{{.Values.mode}}
  namespace:                                         # namespace where the Nginx controller service lives
spec:
  defaultBackend:
    service:
      name:                                          # name of the Nginx ingress controller NodePort service
      port:
        name: http

Notes:

We had to make sure the Nginx health check endpoint was enabled, so that the ALB (target group) can verify connectivity to the Nginx ingress.


alb.ingress.kubernetes.io/healthcheck-path: /nginx-healthcheck
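
The AWS Load Balancer Controller also exposes annotations for tuning the target group health check. The values below are illustrative, not necessarily what we used:

alb.ingress.kubernetes.io/healthcheck-path: /nginx-healthcheck
alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
alb.ingress.kubernetes.io/healthy-threshold-count: '2'
alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
alb.ingress.kubernetes.io/success-codes: '200'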

It is also important to create Web ACL (WAF) rules.
There are managed rule groups that are easy to apply, including free, out-of-the-box ones provided by AWS that are very useful (docs).
Once we created the Web ACL, we attached it to our ALB using the following annotation (docs).


alb.ingress.kubernetes.io/wafv2-acl-arn: 
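
For reference, here is a minimal sketch of a Web ACL with one AWS managed rule group, written as CloudFormation - just one possible way to define the ACL as code; the resource, rule and metric names are placeholders:

# CloudFormation sketch: regional Web ACL with one AWS managed rule group
Resources:
  WebACL:
    Type: AWS::WAFv2::WebACL
    Properties:
      Name: my-web-acl
      Scope: REGIONAL                 # regional scope is required for ALBs
      DefaultAction:
        Allow: {}
      VisibilityConfig:
        SampledRequestsEnabled: true
        CloudWatchMetricsEnabled: true
        MetricName: my-web-acl
      Rules:
        - Name: aws-common-rule-set
          Priority: 0
          OverrideAction:
            None: {}                  # let the managed rules' own actions apply
          Statement:
            ManagedRuleGroupStatement:
              VendorName: AWS
              Name: AWSManagedRulesCommonRuleSet
          VisibilityConfig:
            SampledRequestsEnabled: true
            CloudWatchMetricsEnabled: true
            MetricName: aws-common-rule-set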

DNS problem

We use a service called ExternalDNS. Using this service on the EKS cluster gives us the ability to annotate our services with the following annotation —


external-dns.alpha.kubernetes.io/hostname: 

This annotation will create a CNAME DNS record that points to the load balancer that serves this service.
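
For context, ExternalDNS runs as a deployment in the cluster, and what matters here is which sources it watches and which provider it writes to. A hedged sketch of the relevant container args (the image tag and domain are placeholders; this is not necessarily our exact configuration):

# Fragment of an ExternalDNS Deployment spec (args only)
containers:
  - name: external-dns
    image: registry.k8s.io/external-dns/external-dns:v0.14.0
    args:
      - --source=service             # watch Service annotations
      - --source=ingress             # watch Ingress objects as well
      - --provider=aws               # write records to Route 53
      - --domain-filter=example.com  # only manage records in this zone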

And then — we ran into a problem. We were using the Nginx ingress class on our ingress objects, and since our Nginx ingress controller no longer created a load balancer (it sits behind a NodePort service), the annotation had nothing to point at and did not work.

DNS solution

We noticed that ExternalDNS also allows us to force the target value of the DNS record with an additional annotation —


external-dns.alpha.kubernetes.io/hostname: 
external-dns.alpha.kubernetes.io/target: 

This way, all of our services will point to the same ALB and nginx will route the traffic to the right service (in our case, via hostname).
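
Putting it together, a typical per-service Ingress in this setup looks something like the sketch below; the app name, namespace, hostname and ALB DNS name are placeholders:

# Example service Ingress behind the shared ALB + Nginx setup (placeholder names)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: nginx                               # handled by Nginx, not the ALB controller
    external-dns.alpha.kubernetes.io/hostname: app.example.com       # record ExternalDNS should create
    external-dns.alpha.kubernetes.io/target: my-alb-1234567890.eu-west-1.elb.amazonaws.com  # force the record to the shared ALB
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80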

Diagram

Creation workflow
  1. Create the connector Ingress object; the AWS Load Balancer Controller picks it up and makes the connection between the ALB and the Nginx NodePort service.
  2. The controller pod spins up the ALB and forwards all traffic to the Nginx node ports.
  3. The Nginx service receives the requests and forwards the traffic to the relevant service.

Traffic workflow

  1. A request comes into the ALB with a specific hostname/path (depending on the service’s Ingress object).
  2. The ALB will forward the traffic to the NGINX service.
  3. The Nginx service will forward the traffic to the Nginx pods.
  4. The Nginx pods will forward the traffic to the relevant service in the cluster.

The result

All the traffic that reaches the ALB is forwarded directly to the Nginx service; then, depending on the request, Nginx forwards the traffic to the right service.

To expose a service from the cluster to the internet, all we need is an Ingress object that uses the Nginx ingress class, like the example shown above (you can read more about it here).

Easy as that! :)