Best practices for web network segmentation in Kubernetes

As I explained in my previous article, https://www.securityandit.com/network/best-practices-for-network-segmentation/, network segmentation is vital to limit the risk to business data after a network intrusion. The goal is to make the movement of a threat inside the network as difficult as possible and to grant unauthorized intruders the minimum possible privilege, so that services critical to the business cannot be exploited.

This need remains valid for microservices running in Kubernetes, an increasingly common software paradigm where the boundaries between one network and another become blurred and uncertain. The execution environment changes, but not the need to limit access to critical services.

In this new microservices context, consistent with the cultural paradigm of liquid modernity, I will show how it is possible to obtain security benefits from a network segmentation that still makes sense from a logical point of view, even though the containers run inside the same Kubernetes cluster.

In this scenario, to respond to ever-present security challenges, I suggest mapping the classical infrastructure networks – like the DMZ, application and database networks – to Kubernetes namespaces, refining the subdivision of the networks to accommodate new Kubernetes concepts such as the ingress and new core functionality of a microservice mesh, such as the API gateway.

The proposed architecture, which is also valid in the cloud (GKE, EKS, OpenShift) and which I implemented with microk8s as described in my previous article https://www.securityandit.com/network/kubernetes-the-easy-way-part-01/, can be considered a model Kubernetes network architecture for designing secure web applications:

The following traffic matrix can be adapted to your business needs:

Let's start with the first flow: internet to the DMZ HAProxy.

DMZ HAProxy

I suggest placing the internet entry point in an application load balancer – my personal preference is HAProxy – external to the Kubernetes cluster, for better control of all security concerns compared to an ingress controller. In this way, it is easier to put a web application firewall inside this layer and, for more security, if possible, an IDS and IPS. In the public cloud this layer could be a managed application load balancer – like Cloud Load Balancing in Google or ALB in AWS – with a WAF enabled.

For greater control of internal traffic, and if the security requirements allow it, I recommend implementing SSL offloading of the internet traffic. This is very useful not only for traffic control, but also for relieving other systems of the heavy load of managing SSL connections. Furthermore, an external load balancer offers greater configuration capability – sticky sessions, path routing, path rewriting, body inspection – compared to an ingress controller.

The balancer must be configured to forward the HTTPS traffic as plain HTTP to all the cluster nodes where an ingress controller is running, generally deployed as a DaemonSet and implemented by nginx: https://kubernetes.github.io/ingress-nginx/
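Below is a minimal sketch of such a configuration on the HAProxy side. The worker node addresses (10.0.0.11-10.0.0.13), the certificate path and the port exposed by the ingress controller are all illustrative assumptions, not values from the original setup:

    # Edge HAProxy (sketch): terminate TLS and forward plain HTTP to the ingress controller nodes
    frontend fe_web
        bind *:443 ssl crt /etc/haproxy/certs/site.pem    # SSL offloading at the edge
        mode http
        http-request set-header X-Forwarded-Proto https   # preserve the original scheme for the backends
        default_backend be_ingress

    backend be_ingress
        mode http
        balance roundrobin
        # port 80 exposed by the nginx ingress controller DaemonSet on every worker node (example IPs)
        server node1 10.0.0.11:80 check
        server node2 10.0.0.12:80 check
        server node3 10.0.0.13:80 check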

Ingress Namespace

I continue to call this first layer deployed inside Kubernetes a DMZ because it is redundant with the load balancer layer and performs similar tasks, but, as I explained above, it is best to keep it separate from the internet traffic entry point.

In this design, all the HTTP traffic inside the Kubernetes cluster is managed by an ingress controller running in a Kubernetes namespace that can be segregated by a Kubernetes network policy, as described here https://kubernetes.io/docs/concepts/services-networking/network-policies/, provided the network plugin, such as Calico, supports it. The network policy should be configured to permit HTTP traffic to the ingress controller pods, running in the ingress namespace, only from the load balancer IP addresses.
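A minimal sketch of such a policy, assuming the namespace is called ingress, the controller pods carry the label app: ingress-nginx, and the load balancers sit at the hypothetical addresses 192.168.10.10 and 192.168.10.11:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-http-from-lb-only
      namespace: ingress            # hypothetical name of the ingress namespace
    spec:
      podSelector:
        matchLabels:
          app: ingress-nginx        # assumed label of the ingress controller pods
      policyTypes:
        - Ingress
      ingress:
        - from:
            - ipBlock:
                cidr: 192.168.10.10/32   # example HAProxy load balancer address
            - ipBlock:
                cidr: 192.168.10.11/32   # example standby load balancer address
          ports:
            - protocol: TCP
              port: 80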

For each managed virtual host, it is necessary to create an Ingress resource on Kubernetes and publish its name with the internal load balancer IP or, for internet access, with the public load balancer IP. At the same time, the virtual hosts must also be configured on the load balancer.
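A minimal sketch of such an Ingress for a hypothetical virtual host www.example.com, routing to an assumed service named frontend-nginx in the frontend namespace described later:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: www-example-com
      namespace: frontend           # the Ingress lives alongside the service it routes to
    spec:
      ingressClassName: nginx       # assumed class of the ingress controller
      rules:
        - host: www.example.com     # hypothetical virtual host, also configured on the load balancer
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: frontend-nginx   # assumed frontend service
                    port:
                      number: 80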

No Kubernetes NodePort service must be configured – and this holds for any other namespace – because it would permit access to services bypassing the load balancer entry point.

Frontend Namespace

The frontend namespace contains all the pods that serve static content – like nginx, HAProxy and Apache – and provide proxy functionality: API gateway, authentication, load balancing, path rewriting, header manipulation, caching, etc. This layer receives HTTP traffic and routes it to the application layer. Of course, the network policy must be configured accordingly.

The namespace should be configured to route HTTP traffic only to the application layer, even if, for authentication or caching reasons, it might also need to reach some database service – Redis, for example – running outside the cluster or inside it as a StatefulSet. Moreover, it could be necessary, for example to centralize logging, to have access to some network file system like NFS, GlusterFS or CephFS, even if strictly speaking these ports should be opened between the worker nodes and the external storage services.

To prevent a reverse shell, or the download of a PHP backdoor or a binary for some exploit, internet access – generally not needed at this layer – should be closed.

The only DNS resolutions used at this layer – for the Kubernetes application services and for the external databases – are managed natively by CoreDNS, which must be able to reach an external company DNS. In this scenario this is permitted because CoreDNS runs in the kube-system namespace, which is not involved in any network policy.
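A minimal sketch of such an egress policy for the frontend namespace, assuming the backend pods listen on a hypothetical port 8080 and that the namespaces carry the standard kubernetes.io/metadata.name labels:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: frontend-egress
      namespace: frontend
    spec:
      podSelector: {}               # applies to every pod in the frontend namespace
      policyTypes:
        - Egress
      egress:
        - to:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: backend   # assumed label of the backend namespace
          ports:
            - protocol: TCP
              port: 8080            # example application port
        - to:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: kube-system
          ports:
            - protocol: UDP
              port: 53              # DNS resolution through CoreDNS
            - protocol: TCP
              port: 53
    # everything else, including direct internet access, is denied for the selected pods

Additional egress rules towards an external Redis or a storage service could be appended in the same way, if needed.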

Backend Namespace

The backend namespace contains all the backend pods that implement the business functionality, generally written in different languages like Java, Go, JavaScript, Python, etc. This layer can contain, if necessary, an identity provider that manages authentication with the OAuth2, OpenID or SAML frameworks.

The namespace, like the old application network of a monolithic application, must be able to reach the external databases – SQL, NoSQL, in-memory caches, message brokers – that I prefer to keep external to the Kubernetes cluster for reliability and availability considerations explored in my article https://www.securityandit.com/network/microservices-against-monolithic-applications/. If you want to run some databases inside the cluster through StatefulSets, then you should create a db namespace and open the right ports from the backend namespace to it. In a nutshell, the considerations made for an external database network move to a db namespace inside the cluster.
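A minimal sketch of an egress rule from the backend namespace towards an external database, assuming a hypothetical PostgreSQL server at 192.168.20.5 on port 5432:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-to-external-db
      namespace: backend
    spec:
      podSelector: {}               # applies to every pod in the backend namespace
      policyTypes:
        - Egress
      egress:
        - to:
            - ipBlock:
                cidr: 192.168.20.5/32   # example external PostgreSQL server
          ports:
            - protocol: TCP
              port: 5432
    # remember to also allow DNS towards kube-system, as in the frontend policy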

Internet connections, as explained above, must be closed but, unlike the frontend namespace, it could be necessary to contact external providers. The suggestion is to proxy this traffic – HTTP, or HTTPS if some security requirement demands it – through an HTTP proxy running in another namespace.
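One way to route backend workloads through such a proxy is to point the standard proxy environment variables at it. In this sketch the pod name, the image and the forward-proxy service in the proxy namespace are all hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: orders-service          # hypothetical backend pod
      namespace: backend
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0   # placeholder image
          env:
            - name: HTTP_PROXY
              value: "http://forward-proxy.proxy.svc.cluster.local:3128"
            - name: HTTPS_PROXY
              value: "http://forward-proxy.proxy.svc.cluster.local:3128"
            - name: NO_PROXY
              value: ".svc.cluster.local,.cluster.local"   # keep in-cluster traffic direct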

Again, no Kubernetes NodePort service must be configured, because it would permit access to services bypassing the load balancer entry point. For application debugging, it could be necessary to use Kubernetes port forwarding, granting access to developers only via SSH local port forwarding.

Any other communication different from HTTP, like SFTP for example, can be implemented via a network file system – NFS, GlusterFS, CephFS – mounted as a Kubernetes persistent volume directly inside the pods. For logging, a centralized approach is very useful for debugging, and what I said about it in the frontend namespace paragraph continues to hold.
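A minimal sketch of an NFS-backed PersistentVolume and the claim a backend pod could mount, assuming a hypothetical NFS server at 192.168.30.10 exporting /exports/exchange:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: exchange-nfs
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: 192.168.30.10       # example NFS server
        path: /exports/exchange     # example exported path
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: exchange-nfs
      namespace: backend
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""          # bind directly to the pre-created PV
      resources:
        requests:
          storage: 10Gi

The claim can then be referenced as a volume in the pod spec and mounted at the path the application expects.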

Proxy Namespace

The proxy namespace contains all the proxy pods that manage the traffic to the internet. Generally, in this namespace, we put proxies like nginx, Apache or HAProxy (a sketch of the corresponding network policy follows the list of benefits below).

This approach has different benefits:

  1. It improves security by blocking internet connections directly from the business pods.
  2. It centralizes outbound traffic in the proxies, providing better traffic and logging management.
  3. It simplifies SSL certificate deployment if, for example, some remote service needs a mutual SSL connection or a custom certificate authority. In all these cases, the deployment is done only in the proxy namespace.
  4. It offers a more efficient way to secure and monitor outgoing traffic.
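As a sketch, assuming the backend namespace carries the standard kubernetes.io/metadata.name label and the proxy pods listen on an illustrative port 3128, the proxy namespace could allow ingress only from the backend pods and egress towards the internet:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: proxy-policy
      namespace: proxy
    spec:
      podSelector: {}
      policyTypes:
        - Ingress
        - Egress
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: backend   # only backend pods may use the proxy
          ports:
            - protocol: TCP
              port: 3128            # example proxy listening port
      egress:
        - to:
            - ipBlock:
                cidr: 0.0.0.0/0
                except:
                  - 10.0.0.0/8      # example internal ranges kept out of reach
                  - 192.168.0.0/16
          ports:
            - protocol: TCP
              port: 80
            - protocol: TCP
              port: 443
        # DNS towards kube-system should also be allowed, as in the other namespaces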

DB Namespace

This layer contains all the database pods that store data and provide stateful services. These services can be made highly available using Kubernetes concepts like StatefulSets or DaemonSets. For example, in this blog post https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets/ you can find a good example of configuring a MongoDB replica set inside Kubernetes on GKE. By changing the storage class – for example to a Ceph storage class after installing Rook Ceph inside the cluster – it is possible to implement it in an on-premise cluster as well. Another useful example is at https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/, where a StatefulSet is implemented using local persistent volumes, which are an improvement over hostPath volumes.
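Just to give the shape of such a deployment, here is a minimal StatefulSet sketch with a volumeClaimTemplate; the Redis image, the headless service name, the storage class and the sizes are all illustrative:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: redis
      namespace: db
    spec:
      serviceName: redis            # assumed headless service providing stable pod DNS names
      replicas: 3
      selector:
        matchLabels:
          app: redis
      template:
        metadata:
          labels:
            app: redis
        spec:
          containers:
            - name: redis
              image: redis:7        # illustrative image
              ports:
                - containerPort: 6379
              volumeMounts:
                - name: data
                  mountPath: /data
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            accessModes:
              - ReadWriteOnce
            storageClassName: rook-ceph-block   # example storage class (e.g. Rook Ceph on premise)
            resources:
              requests:
                storage: 5Gi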

Generally this layer should have all outgoing traffic closed, and should permit only the incoming traffic directed to the right ports of the data pods.
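A minimal sketch of such a policy for the db namespace, reusing the assumed backend namespace label and the illustrative Redis port:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: db-policy
      namespace: db
    spec:
      podSelector: {}
      policyTypes:
        - Ingress
        - Egress                    # no egress rules: all outgoing traffic is denied
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: backend
          ports:
            - protocol: TCP
              port: 6379            # example data port (Redis)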

Conclusion

I have shown a security segmentation model inside a Kubernetes cluster, leaving out, for simplicity of exposition, some aspects such as management through an ELK stack or Prometheus-Grafana, which would introduce a monitoring namespace into the infrastructure with the related openings handled by appropriate network policies. From this perspective, the model could become more complex by adding other namespaces – for example, a ceph namespace if you want to run Rook Ceph inside Kubernetes.

Beyond the possible additions that can be made, my goal was to demonstrate how good network segmentation can be useful, even within a Kubernetes cluster, to reduce the attack surface following an intrusion into the internal network.
