On-premises Solution

Getting the application online

Let’s say we want to deploy a clothing e-commerce application on K8s. We package the application in a Docker container and use a Deployment to run 3 replicas (pods) of it. Since we want users to be able to access the application, we create a NodePort Service to expose it on port 38080 of the node’s public IP. The Service takes care of load balancing among the pods.
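
Below is a minimal sketch of what these manifests could look like. The names, labels, image and container port are hypothetical placeholders, and the NodePort value 38080 assumes the API server’s --service-node-port-range has been extended to include it (the default range is 30000–32767).

```yaml
# Hypothetical Deployment running 3 replicas of the application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clothing-store
spec:
  replicas: 3
  selector:
    matchLabels:
      app: clothing-store
  template:
    metadata:
      labels:
        app: clothing-store
    spec:
      containers:
        - name: clothing-store
          image: example/clothing-store:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
# NodePort Service exposing the pods on port 38080 of every node's IP
apiVersion: v1
kind: Service
metadata:
  name: clothing-store
spec:
  type: NodePort
  selector:
    app: clothing-store
  ports:
    - port: 8080          # Service port inside the cluster
      targetPort: 8080    # container port
      nodePort: 38080     # port opened on each node
```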


DNS and Reverse Proxy

We configure a DNS entry to point to the public IP of the node, so users don’t have to type in the IP address. Also, since NodePorts are restricted to a high port range (30000 and above by default), we need a reverse proxy (e.g. nginx or HAProxy) in front of the node to forward requests coming in on port 80 to NodePort 38080. This way, users can access the application using the URL directly, without having to type in the node’s IP or port. This solution works well on-prem.
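
As an illustration of the reverse-proxy step, here is a minimal nginx server block, assuming nginx runs on the node itself; the hostname store.example.com is a hypothetical placeholder for the DNS name.

```nginx
# Forward plain HTTP traffic on port 80 to the NodePort of the Service
server {
    listen 80;
    server_name store.example.com;   # hypothetical DNS name pointing to the node

    location / {
        # 127.0.0.1 works because the NodePort listens on all node interfaces;
        # use the node's IP instead if the proxy runs on a separate machine
        proxy_pass http://127.0.0.1:38080;
        proxy_set_header Host $host;
    }
}
```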


Cloud Solution

If the application is deployed on a cloud provider like GCP, instead of configuring a reverse proxy in front of a NodePort Service, we can set the Service type to LoadBalancer. In this case, K8s still allocates a NodePort internally, but also provisions the cloud provider’s network load balancer (NLB) to route incoming traffic to that port on all the nodes. We can then configure the DNS server to point to the NLB’s IP, so every incoming request is routed to one of the application pods running on any of the nodes.
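
Roughly speaking, the only change to the manifest is the Service type; the sketch below reuses the same hypothetical names as above.

```yaml
# Service of type LoadBalancer: K8s still allocates a node port internally,
# but also asks the cloud provider to provision a network load balancer
apiVersion: v1
kind: Service
metadata:
  name: clothing-store
spec:
  type: LoadBalancer
  selector:
    app: clothing-store
  ports:
    - port: 80          # port exposed by the cloud load balancer
      targetPort: 8080  # container port
```

Once the provider has provisioned the load balancer, `kubectl get service clothing-store` shows its external IP, which is what the DNS record should point to.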


Hosting multiple applications on the same cluster

Now a new application needs to be added to the cluster, and the old app needs to be hosted under its own URL path.

Both of these applications will share the same cluster going forward. We need to create another Service of type LoadBalancer, which will allocate another NodePort and provision another NLB. So for every Service we need a separate NLB, which is expensive. On top of that, we need an ALB (a layer-7 load balancer) to route requests to the two NLBs depending on the request path. And since the app requires SSL termination, we have to handle that at the ALB as well.
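
For illustration, here is what the second Service could look like; the name new-app is a hypothetical stand-in for the new application. Every additional Service of this type provisions its own NLB.

```yaml
# A second LoadBalancer Service for the new application (hypothetical name);
# this provisions yet another cloud network load balancer
apiVersion: v1
kind: Service
metadata:
  name: new-app
spec:
  type: LoadBalancer
  selector:
    app: new-app
  ports:
    - port: 80
      targetPort: 8080
```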

We can see that hosting multiple applications on the same cluster this way becomes expensive and difficult to manage.


Ingress

We can move the entire ALB setup inside the K8s cluster using Ingress, which is essentially a layer-7 load balancer managed from within the cluster. The ingress controller itself still needs to be exposed on a public IP, e.g. via a single LoadBalancer Service, but this is a one-time setup. We don’t need additional load balancers from the cloud provider for each application. All the layer-7 load balancing, path-based routing and SSL termination take place inside the K8s cluster via the Ingress.
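
Here is a sketch of the routing rules as a networking.k8s.io/v1 Ingress resource, assuming an ingress controller (e.g. the nginx ingress controller) is already deployed and exposed via a single LoadBalancer Service. The host, paths, backend Service names and TLS Secret are hypothetical; behind the Ingress, the application Services can be plain ClusterIP Services.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: store-ingress
  annotations:
    # controller-specific behaviour; this annotation is for the nginx ingress controller
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - store.example.com
      secretName: store-tls          # cert/key Secret used for SSL termination
  rules:
    - host: store.example.com
      http:
        paths:
          - path: /clothing          # old app served under its own path
            pathType: Prefix
            backend:
              service:
                name: clothing-store
                port:
                  number: 80
          - path: /new-app           # new app under another path
            pathType: Prefix
            backend:
              service:
                name: new-app
                port:
                  number: 80
```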
