Antrea Integration for vSphere 8 with Tanzu


This article helps you understand how to integrate vSphere with Tanzu guest clusters with NSX if you are using Antrea as the CNI.

Additionally, it covers the advantages of the Antrea integration with NSX.

Why integrate Antrea with NSX?

For Tanzu you can do the deployment either with vSphere networking, optionally combined with NSX ALB, or with the NSX-T integration.

For the NSX-T integration, the NSX-T basic load balancer was used in all versions prior to vCenter 8 Update 2.

If you are using the following versions, you can also use NSX ALB in combination with the NSX-T integration for vSphere with Tanzu.

  • NSX-T: 4.1.1 or later
  • NSX ALB: 22.1.4 or later
  • vCenter: 8u2 or later

For the Antrea integration with NSX it is obviously necessary to use the NSX-T integration for your vSphere with Tanzu deployment. By default, after you have deployed vSphere with Tanzu based on the NSX-T integration, there is already an integration between the Tanzu Supervisor cluster and NSX-T provided by NCP (NSX Container Plugin). At this point NSX-T is used by the Supervisor cluster to create new T1 routers, segments and load balancing services for newly created K8s clusters or K8s services. The created K8s clusters, which run as Tanzu guest clusters, use Antrea as CNI but do not have any integration with NSX-T so far.
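For example, as soon as you expose a K8s service of type LoadBalancer in such a guest cluster, NCP realizes it as a VIP on the NSX-T load balancer. A minimal sketch of such a service (all names, labels and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: frontend-lb            # placeholder name
spec:
  type: LoadBalancer           # realized as a VIP on the NSX-T load balancer
  selector:
    app: frontend              # placeholder label
  ports:
    - protocol: TCP
      port: 80                 # external port on the VIP
      targetPort: 8080         # container port of the Pods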

The integration of Antrea with NSX-T adds the following benefits.

  • The K8s clusters, namespaces, Pods and services become visible as objects within the NSX-T inventory.
  • The K8s labels assigned to the mentioned K8s objects can be used to create security groups within NSX-T.
  • DFW rules can be applied to the mentioned K8s objects to protect Pods or services within a K8s cluster from reaching each other.
  • DFW rules can be created either from the NSX-T Manager or, by authorized K8s users, through the K8s API. The DFW rules created by the NSX-T Manager have a higher priority and cannot be overwritten from K8s.
  • Within the NSX-T Manager you can use the Traceflow feature to check which firewall rule is blocking a specific communication. Antrea extends this feature to the communication within the K8s cluster that is protected by DFW rules or Antrea Cluster Network Policies.

Permissions required within NSX-T

For the integration between Antrea and NSX-T you need a “Principal Identity User” within NSX-T. For the integration to work it is mandatory to assign the “Enterprise Admin” role to this “Principal Identity User”.
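The Principal Identity User authenticates via a certificate. As a sketch, a self-signed certificate and key pair for the registration could be generated as follows (file names and the CN are placeholders; the CN should match the name of the Principal Identity User):

# Create a self-signed certificate and key for the Principal Identity User (sketch)
openssl req -newkey rsa:2048 -x509 -nodes -days 365 \
  -keyout antrea-pi.key -out antrea-pi.crt \
  -subj "/CN=antrea-nsx-pi"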

Any other role available in NSX-T is not sufficient to get the integration with Antrea working.

This requirement might be a problem in some use cases: if the provider of NSX-T is not the admin of the K8s cluster which needs to be integrated, this user hands over full permissions on NSX-T.

A solution could be the new “NSX Projects” feature introduced in NSX 4.1, but I did not find any information indicating that this feature is currently supported for the integration between Antrea and NSX-T.

Integration of Antrea with NSX-T

In this blog I will assume you already know how to create a TKG cluster with Antrea as CNI.

As preparation for the Antrea integration with NSX-T you need to download the “VMware Container Networking with Antrea, NSX Interworking Adapter Image and Deployment Manifests”, which is available in the VMware Customer Connect portal under “VMware Container Networking with Antrea”.

Inside the download package you will find the following files.

  • bootstrap-config.yaml
  • deregisterjob.yaml
  • interworking-debian-0.11.0.tar
  • interworking.yaml
  • inventorycleanup.yaml
  • ns-label-webhook.yaml
  • antreansxctl.tar.gz

The file “bootstrap-config.yaml” needs to be changed based on your environment and the previously created Principal Identity User.

Within the file you only need to change the values for the following parameters (see the sketch after the list).

  • NSXManagers
  • clusterName (Name of your K8s cluster)
  • tls.cert
  • tls.key
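As a rough sketch of where these parameters live in the file (structure simplified; all values are placeholders, and the exact layout and field names follow the file shipped in the download package):

apiVersion: v1
kind: ConfigMap
metadata:
  name: bootstrap-config
  namespace: vmware-system-antrea
data:
  bootstrap.conf: |
    # IPs or FQDNs of your NSX-T Managers (placeholder)
    NSXManagers: [nsx-manager-01.example.local]
    # Name of your K8s cluster as it should appear in the NSX-T inventory (placeholder)
    clusterName: k8s-cl-test1
---
apiVersion: v1
kind: Secret
metadata:
  name: nsx-cert
  namespace: vmware-system-antrea
type: kubernetes.io/tls
data:
  tls.crt: <base64 encoded certificate, one line>
  tls.key: <base64 encoded key, one line>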

For the TLS key and cert you need the content base64 encoded in one line, which can be done with the following commands.

cat <path to cert> | base64 -w 0
cat <path to key> | base64 -w 0

Afterwards you should check the images used in the files “interworking.yaml” and “deregisterjob.yaml”, since these repos might be unreachable.

To prevent issues with the availability of the images you should use the VMware public repo shown below.

image: projects.registry.vmware.com/antreainterworking/interworking-photon:<image version>

For version 0.11.0 the path to the image will be “projects.registry.vmware.com/antreainterworking/interworking-photon:0.11.0”.
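To adjust all image references in one go, a simple search and replace can be used, for example (a sketch; verify the resulting manifests before applying them):

# Point the image references in both manifests to the VMware public repo (sketch)
sed -i 's|image: .*interworking.*|image: projects.registry.vmware.com/antreainterworking/interworking-photon:0.11.0|' interworking.yaml deregisterjob.yaml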

Now you are ready for the deployment which can be done with the following two commands.

kubectl create clusterrolebinding privileged-role-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated

kubectl apply -f bootstrap-config.yaml -f interworking.yaml

The cluster role binding is required to allow the images to be deployed within your Kubernetes cluster; applying the “bootstrap-config.yaml” and the “interworking.yaml” starts the registration of your Antrea CNI with the NSX-T Manager.

After the deployment is started you can check the status with the following two commands.

k -n vmware-system-antrea get jobs
NAME       COMPLETIONS   DURATION   AGE
register   1/1           90s        2m35s

k -n vmware-system-antrea get pods
NAME                            READY   STATUS      RESTARTS   AGE
interworking-6edc37756a-ktzcd   4/4     Running     0          2m30s
register-gzn5s                  0/1     Completed   0          2m30s
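If the register job does not complete, the logs of the involved pods usually point to the cause (the pod names are taken from the example output above):

k -n vmware-system-antrea logs register-gzn5s
k -n vmware-system-antrea logs interworking-6edc37756a-ktzcd --all-containers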

After the registration is successfully done, you can also check the status in the NSX-T Manager. In the screenshot below you can see the status of the relevant components “Antrea Controller”, “Management Plane Adapter” and “Central Control Plane Adapter”. This information is shown for each K8s cluster which is integrated through the Antrea CNI and can be found under System → Fabric → Nodes → Container Clusters → Antrea.

Antrea CNI Integration status

In addition, you can see the different objects deployed in the K8s clusters as well as the K8s cluster itself and the CNI in use. This information can be found under Inventory → Containers.

K8s cluster inventory based on Antrea and NCP

You are also able to see the existing K8s namespaces and in which K8s cluster they are deployed.

K8s namespace inventory based on Antrea and NCP

Antrea Cluster Network Policy and NSX DFW

After the integration between Antrea and NSX-T is successfully done, you can create Distributed Firewall rules which will be realized as “Antrea Cluster Network Policies” in the K8s Cluster.

Rules created by the NSX-T Manager will be visible from the NSX-T Manager UI or API, and will be synced to the K8s cluster as “Antrea Cluster Network Policies”. It is also possible to create “Antrea Cluster Network Policies” from the K8s API, but these rules will not be visible within the Distributed Firewall.

This lack of overview is a disadvantage, but you are still able to see these rules through the Traceflow feature of NSX-T and can check by which “Antrea Cluster Network Policy” a communication is either allowed or blocked.

Furthermore, the policies created by NSX-T have a higher priority than the policies created through the K8s API. That means you can define policies to enforce the guidelines within your environment which cannot be overwritten by the K8s users or admins.

As an example, I created a DFW policy for my K8s cluster “k8s-cl-test1” and one DFW rule to decide whether the communication between two Pods is allowed. In the screenshot below you can see a policy applied to the previously mentioned K8s cluster and a rule with the “frontend” Pod as the source of the communication. This rule is also applied to the Redis database Pod. Based on this rule an ingress “Antrea Cluster Network Policy” is created.

DFW Rule for K8s cluster by Antrea CNI integration

Note: If you want to create an egress rule, you have to specify the destination of the communication instead of the source. You cannot configure source and destination in one rule, since an “Antrea Cluster Network Policy” rule is either an ingress or an egress rule. You can limit the ingress rule to a target Pod by defining the objects to which the rule is applied, as done in the example above. A sketch of an egress variant is shown below.
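As a sketch, an egress rule allowing the “frontend” Pod to reach the Redis Pod could look like this (name, tier, labels and port are placeholders mirroring the ingress example that follows):

apiVersion: security.antrea.tanzu.vmware.com/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: acnp-egress-example
spec:
  priority: 5
  tier: securityops
  appliedTo:
    - podSelector:
        matchLabels:
          app: frontend        # the rule is applied to the frontend Pods
  egress:
    - action: Allow
      to:
        - podSelector:
            matchLabels:
              app: redis-cart  # destination instead of source
      ports:
        - protocol: TCP
          port: 6379
      name: AllowToRedis
      enableLogging: false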

Further, I created an “Antrea Cluster Network Policy” through the K8s API based on the following YAML file.

apiVersion: security.antrea.tanzu.vmware.com/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: acnp-with-stand-alone-selectors
spec:
  priority: 5
  tier: securityops
  appliedTo:
    - podSelector:
        matchLabels:
          app: redis-cart
  ingress:
    - action: Allow
      from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 6379
      name: AllowFromFrontend
      enableLogging: false
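The policy is applied like any other K8s resource (the file name is a placeholder):

kubectl apply -f acnp-with-stand-alone-selectors.yaml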

Based on the NSX Traceflow feature you can check which rule will be used to either allow or block the communication.

For the first test the DFW rule is enabled and the policy created through the K8s API is also active, which can be checked with the command “kubectl get acnp -A”; in our example it shows the following output.

kubectl get acnp -A
NAME                                   TIER                       PRIORITY             DESIRED NODES   CURRENT NODES   AGE
a9e8144d-b7f3-4e2e-9919-e6ee421c1551   nsx-category-application   1.0000000532907336   1               1               6m24s
acnp-with-stand-alone-selectors        securityops                5                    1               1               34d

The “Tier” column shows whether the rule is applied within one of the NSX DFW categories like “nsx-category-application” or in one of the tiers defined on the K8s side like “securityops”.
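To inspect the full specification of a single policy, including the rules synced from NSX, you can describe it (the name is taken from the output above):

kubectl describe acnp a9e8144d-b7f3-4e2e-9919-e6ee421c1551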

If both policies are created, the NSX DFW rule has the higher priority, as shown in the following screenshot of the NSX Traceflow feature.

The screenshot below shows the packet flow and the “Network Policy ID” which was used to allow the communication.

Packet flow based on NSX-T Traceflow with DFW Rule enabled

Based on the “Network Policy ID” you can find the details about the applied rule including the detailed configuration.

Detailed output of Antrea Cluster Network Policy created by NSX-T

Next, I disabled the DFW rule created in NSX, and the communication is now filtered by the rule created through the K8s API, as shown below.

Packet flow based on NSX-T Traceflow with DFW Rule disabled

In the following screenshot you can see that the used rule is not part of the categories available in NSX.

Detailed output of Antrea Cluster Network Policy created by K8s

Some thoughts

From my point of view this is a very good way to secure the K8s workload, including the traffic within the K8s cluster. The most valuable feature is that you can see which rule is responsible for allowing or blocking a specific communication and whether this rule is managed by NSX or K8s.

Furthermore, from a security point of view it is very important to separate the responsibility for the enforcement points of network and application security. To enforce this separation and prevent anyone from ignoring these responsibilities, it is also important to make sure that no one outside the responsible team can change anything they are not allowed to change.

For example, it would be critical if a K8s admin were able to change or overwrite a firewall rule which is managed by the overall security or network team and is based on a company regulation.


I am very happy that these problems are addressed: “Antrea Cluster Network Policies” can be created either by the NSX admin or by the K8s admin, but the K8s admin is not able to overwrite or change the rules created by the NSX admin. Further, the rules created by the NSX admin have a higher priority.
