Kubernetes NodePort and Ingress: advantages and disadvantages.


I am writing this article to share my experience configuring Kubernetes services for external exposure, explaining the advantages and disadvantages of the two most widely used solutions: NodePort and the Ingress controller.

After several years of working with Kubernetes, I can say that a single best solution doesn't exist: it depends on your requirements. A NodePort may be preferable to an ingress in one scenario, while in another the ingress may be the better choice, as will become clear in the rest of this article.

Another important thing to say, before starting the discussion, is that NodePort is native, because it is implemented directly by the kube-proxy daemon running on every node of the cluster; the ingress, instead, is implemented by an external controller that, according to the information provided in the Kubernetes Ingress object, configures a reverse proxy located inside or outside the cluster.

To better explain and compare these two approaches, I will provide some examples running on a Kubernetes cluster created with kubeadm. You can find all the details in my previous article: http://www.securityandit.com/network/kubernetes-network-cluster-architecture/.

Let's start with the first Kubernetes service type: NodePort.

NodePort Kubernetes service

The NodePort Kubernetes service makes it possible to expose, outside the cluster, a set of pods sharing the same labels, using a port in the range 30000-32767.

This way of exposing a service resembles the approach used by Docker. The big difference is that in Docker there is a one-to-one mapping between the published port and a single container, whereas in Kubernetes the NodePort, reachable on the physical IP address of any node of the cluster, balances the traffic across the set of pods that match the label selector of the Kubernetes service.

Below is a simple example that shows how to expose, via NodePort, an echo-server deployment running in the Kubernetes cluster created following the procedure described in the article linked above.

[root@master-01 #]# vi echo-NodePort.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      run: app
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: jmalloc/echo-server
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: app
  name: app
spec:
  selector:
    run: app
  type: NodePort
  ports:
  - name: port-1
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 32000
[root@master-01 #]# kubectl apply -f echo-NodePort.yaml
deployment.apps/app created
service/app created
[root@master-01 #]# kubectl get pods |grep app
app-58f7d69f54-qfzsj 1/1 Running 0 11s
app-58f7d69f54-rrlv8 1/1 Running 0 11s
app-58f7d69f54-snxq9 1/1 Running 0 11s
[root@master-01 #]# kubectl get service |grep app
app NodePort 10.96.70.212 <none> 80:32000/TCP 17s
[root@master-01 #]# curl -v http://10.30.200.1:32000
* About to connect() to 10.30.200.1 port 32000 (#0)
* Trying 10.30.200.1...
* Connected to 10.30.200.1 (10.30.200.1) port 32000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.30.200.1:32000
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Date: Wed, 15 Jan 2020 20:33:14 GMT
< Content-Length: 119
<
Request served by app-58f7d69f54-rrlv8
HTTP/1.1 GET /
Host: 10.30.200.1:32000
User-Agent: curl/7.29.0
Accept: */*


As shown, the Kubernetes service exposes the NodePort 32000, reachable from outside, and forwards to containerPort 8080 of the pods that match the selector configured under the spec section of the app service. In this case there are 3 pods that match it.

It's possible to show all the endpoints enabled in the service with the following command:

[root@master-01 ~]# kubectl describe service app
Name:                     app
Namespace:                default
Labels:                   run=app
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"run":"app"},"name":"app","namespace":"default"},"spec":{"ports...
Selector:                 run=app
Type:                     NodePort
IP:                       10.96.70.212
Port:                     port-1  80/TCP
TargetPort:               8080/TCP
NodePort:                 port-1  32000/TCP
Endpoints:                10.5.252.211:8080,10.5.53.170:8080,10.5.53.171:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

The NodePort is managed and updated by the kube-proxy running on every node of the cluster: it's a controller that updates the iptables rules of each node consistently with the configuration declared by the Kubernetes service.

With this approach, it's necessary to configure an external load balancer to distribute the HTTP traffic across the NodePorts of the nodes; it's better, in order to keep the control plane well separated from the workload, to balance the HTTP traffic only toward the NodePorts of the worker nodes.

You can use nginx or haproxy on premises, or the AWS Application Load Balancer or the Google Cloud HTTP load balancer on the public cloud. A typical configuration with 3 masters and 3 workers is shown below:

The load balancers involved in the architecture – I included three types of load balancers, depending on the environment, private or public, where the scenario is implemented – balance the ingress HTTP traffic toward the NodePort of every worker present in the Kubernetes cluster.

Recall that the traffic received by the NodePort of any worker node is balanced, by means of a destination (and, when needed, a source) NAT, to one of the endpoints enabled in the Kubernetes service.
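As a concrete sketch, an external haproxy instance fulfilling this role could be configured as follows. The worker IP addresses and server names here are hypothetical; 32000 is the NodePort chosen in the example above:

```
# External haproxy balancing HTTP traffic across the NodePort of each worker.
# Worker IPs are hypothetical; adapt them to your cluster.
frontend http-in
    bind *:80
    default_backend k8s-nodeport

backend k8s-nodeport
    balance roundrobin
    server worker-01 10.30.200.2:32000 check
    server worker-02 10.30.200.3:32000 check
    server worker-03 10.30.200.4:32000 check
```

Every service exposed by NodePort needs a similar frontend/backend pair, which is exactly the management burden discussed below.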

Returning to the scenario described above, with 3 replicated pods running in a Kubernetes cluster composed of one master and one worker, it's possible to show the iptables rules, in the nat table of the PREROUTING chain, that translate the destination IP of the incoming packet, worker_ip:32000, into one of the three replicas of the pod:

Chain PREROUTING (policy ACCEPT)
target         prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             
Chain KUBE-SERVICES (2 references)
KUBE-NODEPORTS  all  --  anywhere             anywhere             
Chain KUBE-NODEPORTS (1 references)
KUBE-SVC-GS3B7UBOVERQNQIK  tcp  --  anywhere             anywhere tcp dpt:32000
Chain KUBE-SVC-GS3B7UBOVERQNQIK (2 references)
target                     prot opt source               destination
KUBE-SEP-BDPJZFLC42X47PGO  all  --  anywhere             anywhere statistic mode random probability 0.33333333349
KUBE-SEP-BS5WLQKVZNHC5DBW  all  --  anywhere             anywhere statistic mode random probability 0.50000000000
KUBE-SEP-JOABQXKPHQ53MBJY  all  --  anywhere             anywhere
Chain KUBE-SEP-BDPJZFLC42X47PGO (1 references)
target          prot opt source               destination
DNAT            tcp  --  anywhere             anywhere             tcp to:10.5.252.211:8080
Chain KUBE-SEP-BS5WLQKVZNHC5DBW (1 references)
target          prot opt source               destination
DNAT            tcp  --  anywhere             anywhere             tcp to:10.5.53.170:8080
Chain KUBE-SEP-JOABQXKPHQ53MBJY (1 references)
target          prot opt source               destination
DNAT            tcp  --  anywhere             anywhere             tcp to:10.5.53.171:8080

Source NAT is necessary when the DNAT resolves to a pod IP address that is not running on the node that received the HTTP request; if the selected pod is local to that node, the source NAT is not necessary.
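This source NAT is implemented by kube-proxy through packet marking: packets that must be masqueraded are marked in the KUBE-MARK-MASQ chain and then masqueraded in POSTROUTING. An illustrative extract (the exact rules vary with the kube-proxy version):

```
# iptables -t nat -S | grep -i masq   (illustrative extract)
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -j MASQUERADE
```

The pod then sees the node's IP as the source of the request, which guarantees that the return traffic passes back through the node that performed the DNAT.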

After explaining how the NodePort works, I can discuss the benefits and disadvantages of this approach.

The tedious part of this approach is having to configure the external load balancers with the port chosen in the Kubernetes service. If a lot of services are exposed in this way, it can generate a lot of confusion in the management. If there are many namespaces, separated per environment or per customer, this kind of balancing, if not configured correctly, can mix up the HTTP traffic. Moreover, it can also cause frustrating port conflicts between services.

There are not only negative aspects. One positive aspect is having full control of the balancing logic, which avoids running into reverse proxy issues, compared to the case where the balancing is provided by an ingress controller.

Another positive aspect is having full control of your own external load balancer. We must not forget that a reverse proxy offers a lot of configuration that can be applied to the traffic: URI rewriting, SSL offloading, content location rewriting, header size, body size and even body rewriting. Part of this is lost if we rely on a managed external load balancer: just think about the configuration limits of the AWS ALB and the Google load balancer compared to managing and configuring your own nginx or haproxy reverse proxy.

To conclude on this topic, the choice of this type of balancing must be properly weighed against what you want to do, and made only after examining the characteristics and potential of the Kubernetes ingress controller, addressed in the next paragraph.

Kubernetes ingress controller

The Kubernetes ingress controller is a way to expose a Kubernetes service by automatically configuring a reverse proxy, according to the parameters present in a Kubernetes Ingress resource.

The Ingress object is defined by the Kubernetes API and contains a classic reverse proxy configuration for a virtual host identified by a fully qualified domain name. It's possible to define the typical aspects of a reverse proxy configuration: TLS termination, URI routing and rewriting, traffic load balancing, etc.
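For instance, a minimal Ingress sketching TLS termination and path-based routing could look like this. The hostnames, Secret name and the api service are hypothetical; the API version matches the one used for the ingress later in this article:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress-tls          # hypothetical name
spec:
  tls:
  - hosts:
    - foo.bar
    secretName: foo-bar-tls      # TLS certificate stored in a Secret (hypothetical)
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /api               # route /api to a dedicated service (hypothetical)
        backend:
          serviceName: api
          servicePort: 80
      - path: /                  # everything else goes to the app service
        backend:
          serviceName: app
          servicePort: 80
```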

The Ingress is only an object: a controller must be deployed in the cluster whose job is to watch for any Ingress created and, in response, configure an internal or external reverse proxy or load balancer.

We can split the ingress controllers into two big families, according to where the reverse proxy or load balancer is located:

  1. Ingress controllers that configure an internal reverse proxy inside the Kubernetes cluster, generally running on the masters or, better, on dedicated Kubernetes nodes that for this reason can be called proxy nodes. The controllers developed by nginx and haproxy follow this approach. You can find details about them here: https://github.com/nginxinc/kubernetes-ingress and https://www.haproxy.com/blog/dissecting-the-haproxy-kubernetes-ingress-controller/ .
  2. Ingress controllers that configure an external load balancer that will manage the HTTP traffic according to the Ingress resource configuration. This approach is followed in the cloud by GCP and AWS: https://github.com/kubernetes/ingress-gce and https://github.com/kubernetes-sigs/aws-alb-ingress-controller.

To illustrate the benefits of the ingress controller, I will make an example by installing the haproxy ingress controller, whose procedure is available at the link above, in a Kubernetes cluster composed of one master and one worker, created in my previous article.

Compared to the suggested haproxy ingress controller configuration, I made some changes to the haproxy deployment in order to have the ingress controller running only on the masters and listening directly on ports 80 and 443, changing the following values:

  1. The network of the haproxy-ingress controller, adding the field "hostNetwork: true" in the spec part of the haproxy-ingress controller, to listen directly on ports 80 and 443 of the host.
  2. The nodeSelector field node-role.kubernetes.io/master: "true", which was not present, in the spec part of the haproxy-ingress controller, to force it to run only on the master nodes. There is no need to add taint tolerations in the pod spec because the NoSchedule taint, normally configured on the masters, has been deleted in my Kubernetes cluster (see my previous article http://www.securityandit.com/network/kubernetes-network-cluster-architecture/ ).
  3. A DaemonSet instead of the haproxy ingress controller Deployment, in order to have one pod running on every master node. This enables the HTTP traffic to be balanced, in high availability, across all the master nodes.

With these considerations, this is the new configuration for the ingress controller:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: haproxy-controller
spec:
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      serviceAccountName: haproxy-ingress-service-account
      containers:
      - name: haproxy-ingress
        image: haproxytech/kubernetes-ingress
        args:
        - --configmap=default/haproxy-configmap
        - --default-backend-service=haproxy-controller/ingress-default-backend
        resources:
          requests:
            cpu: "500m"
            memory: "50Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 1042
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1024
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: "true"

The purpose of this change is to have a static balancing configuration for external HTTP and HTTPS traffic toward the ports of the masters where the haproxy ingress controller will be running. It's also possible to choose a subset of dedicated nodes for hosting this controller, and indeed this is done in some Kubernetes implementations, where these nodes are called proxy nodes.

The Kubernetes reference architecture, with this haproxy ingress controller, is shown below: I drew an architecture with 3 masters, even if in my test setup only one master is running. The AWS and Google load balancers are not ingresses created by Kubernetes, but load balancers, created and configured by hand, used as HTTP entry points:

In my laboratory, after installing the yaml file of the haproxy ingress controller available here, https://raw.githubusercontent.com/haproxytech/kubernetes-ingress/master/deploy/haproxy-ingress.yaml, with the changes described above, the haproxy ingress controller shows up as a simple pod, haproxy-ingress-7f477cfc8-vrzqh, running on the only master available:

[root@master-01 ~]# kubectl get pods -n haproxy-controller -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
haproxy-ingress-7f477cfc8-vrzqh 1/1 Running 0 8h 10.30.200.1 master-01 <none> <none>
ingress-default-backend-558fbc9b46-pvb58 1/1 Running 0 6d23h 10.5.252.207 master-01 <none> <none>

As shown above, there is only one haproxy ingress controller: the ingress default backend is a simple reverse proxy, used as the target for virtual hosts not configured, that returns 404 Not Found.

Next are the commands to install an echo server deployment, composed of 2 replica pods, and an Ingress, with hostname foo.bar, that balances the HTTP traffic toward the Kubernetes service of the application itself.

[root@master-01 k8s-test-02]# vi echo-Ingress.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: app
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: jmalloc/echo-server
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: app
  name: app
spec:
  selector:
    run: app
  type: ClusterIP
  ports:
  - name: port-1
    port: 80
    protocol: TCP
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: app
          servicePort: 80
[root@master-01 k8s-test-02]# kubectl apply -f echo-Ingress.yaml
[root@master-01 k8s-test-02]# kubectl get pods -o wide|grep app
app-58f7d69f54-qfzsj 1/1 Running 0 6d22h 10.5.53.170 worker-01 <none> <none>
app-58f7d69f54-rrlv8 1/1 Running 0 6d22h 10.5.252.211 master-01 <none> <none>
[root@master-01 k8s-test-02]# kubectl get service
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
app          ClusterIP   10.96.50.47   <none>        80/TCP    24h
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   31d
[root@master-01 k8s-test-02]# kubectl get ingress
NAME          HOSTS     ADDRESS   PORTS   AGE
web-ingress   foo.bar             80      15s

Let's go inside the haproxy ingress controller in order to show the configuration, automatically created, of the virtual host foo.bar that balances all the HTTP traffic to the two pods, "app-58f7d69f54-qfzsj" and "app-58f7d69f54-rrlv8", associated with the Kubernetes service "app".

[root@master-01 ~]# kubectl exec -it haproxy-ingress-7f477cfc8-vrzqh  -n haproxy-controller /bin/sh
[root@master-01 ~]# cat /etc/haproxy/haproxy.cfg
....
frontend http
  mode http
  bind 0.0.0.0:80 name bind_1
  bind :::80 v4v6 name bind_2
  use_backend default-app-80 if { req.hdr(host) -i foo.bar } { path_beg / }
  default_backend haproxy-controller-ingress-default-backend-8080
backend default-app-80
  mode http
  balance roundrobin
  option forwardfor
  server SRV_nshBV 10.5.53.170:8080 check weight 128
  server SRV_TFW8V 10.5.252.211:8080 check weight 128
.....

You can customize the ingress controller by adding annotations to the Ingress resource. For haproxy there are also different ways to do it, and it is well explained here: https://www.haproxy.com/documentation/hapee/1-9r1/traffic-management/kubernetes-ingress-controller/#customizing-the-haproxy-ingress-controller.
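As a sketch, a couple of haproxy annotations applied to the Ingress above might look like the following. The annotation keys are taken from the haproxy controller documentation, but check the version you run before relying on them:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
  annotations:
    # Example annotations; exact keys depend on the controller version
    haproxy.org/load-balance: "leastconn"   # switch from roundrobin
    haproxy.org/check: "true"               # enable backend health checks
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: app
          servicePort: 80
```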

After showing how to implement an ingress controller with haproxy, I will explain the most important benefits and characteristics, according to my experience, of this approach.

The best benefit is due to the fact that there is no need to configure an external load balancer every time an Ingress is created. This is a good advantage because it permits a better management of the HTTP traffic balancing, especially in a dynamic environment where many externally exposed services are created and deleted continuously.

The ingress, since the HTTP traffic is managed by a reverse proxy, can be used as the HTTP and HTTPS traffic entry point, configuring, if desired, SSL offloading directly on it.

Concerning management, by using the access log of the ingress controller, or by exposing, if supported, HTTP metrics to be scraped by Prometheus (see https://docs.gitlab.com/ee/user/project/integrations/prometheus_library/nginx_ingress.html for the nginx ingress controller), it's possible to monitor all the ingress HTTP traffic, a scenario not possible with NodePort. By balancing directly toward the pod IP addresses and bypassing the Kubernetes virtual IP address, as done by haproxy above, it's also possible to better monitor every single pod involved in the balancing. For example, it's possible to understand if some pod is slow to handle some type of request, in which case it may be worth restarting it.
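For example, a minimal Prometheus scrape job toward the controller's stats endpoint might look like the fragment below. The target address reuses master-01 and the stats port 1024 from the DaemonSet above, but the metrics path and format are assumptions that depend on the controller and its version:

```yaml
# Hypothetical Prometheus scrape configuration for the ingress controller
scrape_configs:
- job_name: 'haproxy-ingress'
  metrics_path: /metrics           # assumption: depends on the controller version
  static_configs:
  - targets: ['10.30.200.1:1024']  # master-01, stats port from the DaemonSet above
```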

In the cloud, like AWS or Google, the ingress controller deployed on the Kubernetes cluster will, automatically, when the Ingress object is created, configure a load balancer that balances all the HTTP traffic toward the node port of the Kubernetes service. This is the reference architecture in these cases:

To better explain the architecture above, even if there is a lot of documentation about it, I provide an example, running on Google Cloud, showing the load balancer, with VIP 34.107.214.234, created by the Google ingress controller, that balances toward NodePort 30493 of the worker nodes. The VIP address is equal to the one shown by the 'kubectl get ingress' command:

[root@kali system-connections]# gcloud container clusters -z europe-west1 get-credentials k8s-test
Fetching cluster endpoint and auth data.
kubeconfig entry generated for k8s-test.
[root@kali system-connections]# 
[root@kali ingress-01]# kubectl apply -f web-deployment.yaml 
deployment.extensions/web created
[root@kali ingress-01]# kubectl get pods -o wide
NAME                   READY   STATUS    RESTARTS   AGE     IP          NODE                                      NOMINATED NODE   READINESS GATES
web-77656d79f8-hlgcx   1/1     Running   0          5m27s   10.16.0.4   gke-k8s-test-default-pool-792e398d-cpjd   <none>           <none>
[root@kali ingress-01]# kubectl apply -f web-service.yaml 
service/web created
[root@kali ingress-01]# kubectl apply -f basic-ingress.yaml 
ingress.extensions/basic-ingress created
[root@kali ingress-01]# kubectl get ingress
NAME            HOSTS   ADDRESS          PORTS   AGE
basic-ingress   *       34.107.214.234   80      2m52s
[root@kali ingress-01]# gcloud compute forwarding-rules list
NAME                                            REGION  IP_ADDRESS      IP_PROTOCOL  TARGET
k8s-fw-default-basic-ingress--6bd212293f74e11b          34.107.214.234  TCP          k8s-tp-default-basic-ingress--6bd212293f74e11b
[root@kali ingress-01]# gcloud compute forwarding-rules describe k8s-fw-default-basic-ingress--6bd212293f74e11b
IPAddress: 34.107.214.234
IPProtocol: TCP
creationTimestamp: '2020-01-20T05:35:05.211-08:00'
description: '{"kubernetes.io/ingress-name": "default/basic-ingress"}'
id: '8358417787974586982'
kind: compute#forwardingRule
loadBalancingScheme: EXTERNAL
name: k8s-fw-default-basic-ingress--6bd212293f74e11b
networkTier: PREMIUM
portRange: 80-80
selfLink: https://www.googleapis.com/compute/v1/projects/steadfast-theme-262309/global/forwardingRules/k8s-fw-default-basic-ingress--6bd212293f74e11b
target: https://www.googleapis.com/compute/v1/projects/steadfast-theme-262309/global/targetHttpProxies/k8s-tp-default-basic-ingress--6bd212293f74e11b
[root@kali ingress-01]# gcloud compute backend-services describe k8s-be-30493--6bd212293f74e11b 
affinityCookieTtlSec: 0
backends:
- balancingMode: RATE
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/steadfast-theme-262309/zones/europe-west1-b/instanceGroups/k8s-ig--6bd212293f74e11b
  maxRatePerInstance: 1.0
- balancingMode: RATE
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/steadfast-theme-262309/zones/europe-west1-c/instanceGroups/k8s-ig--6bd212293f74e11b
  maxRatePerInstance: 1.0
- balancingMode: RATE
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/steadfast-theme-262309/zones/europe-west1-d/instanceGroups/k8s-ig--6bd212293f74e11b
  maxRatePerInstance: 1.0
connectionDraining:
  drainingTimeoutSec: 0
creationTimestamp: '2020-01-20T05:34:42.905-08:00'
description: '{"kubernetes.io/service-name":"default/web","kubernetes.io/service-port":"8080"}'
enableCDN: false
fingerprint: MoavhqsBYkQ=
healthChecks:
- https://www.googleapis.com/compute/v1/projects/steadfast-theme-262309/global/healthChecks/k8s-be-30493--6bd212293f74e11b
id: '4753830003354374813'
kind: compute#backendService
loadBalancingScheme: EXTERNAL
name: k8s-be-30493--6bd212293f74e11b
port: 30493
portName: port30493
protocol: HTTP
selfLink: https://www.googleapis.com/compute/v1/projects/steadfast-theme-262309/global/backendServices/k8s-be-30493--6bd212293f74e11b
sessionAffinity: NONE
timeoutSec: 30

This cloud configuration uses the ingress concept by relating it to the NodePort of every worker node. It means that it's also possible to combine the two ways of working.

Of course, all this is possible only if, for every Ingress created, the corresponding virtual host domain is configured in the DNS service.
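While waiting for the DNS record, you can simulate the resolution locally with an /etc/hosts entry pointing the virtual host at a node running the ingress controller. The IP here is the master-01 address used earlier in this article:

```
# /etc/hosts — local override for testing the foo.bar ingress
10.30.200.1   foo.bar
```

After that, a plain curl http://foo.bar/ from the same machine reaches the ingress; alternatively, curl -H "Host: foo.bar" http://10.30.200.1/ achieves the same without touching /etc/hosts.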

Regarding the disadvantages of the ingress controller, the most relevant is the loss of control over the balancing because, in some cases, the configured reverse proxy is not fully controllable through the available annotations, or, as happens in the cloud, the external load balancer doesn't have the flexibility and the power of a reverse proxy like nginx or haproxy.

Conclusion

In this article I explained the benefits and disadvantages of two different ways to expose a Kubernetes service outside the cluster: NodePort and ingress controller, trying to make it clear that the best solution, even if the ingress controller seems a more complete approach, depends on the use that you want to make of it.

Don't hesitate to contact me with any questions.
