Requirements for Kubernetes clusters in Azure

You can add and manage managed Azure Kubernetes Service (AKS) clusters and self-managed Kubernetes clusters in Azure using Cloud Manager. Before you can add the clusters to Cloud Manager, ensure that the following requirements are met.

This topic uses the term Kubernetes cluster where the configuration is the same for AKS and self-managed Kubernetes clusters. The cluster type is specified where the configuration differs.

Requirements

Astra Trident

The Kubernetes cluster must have NetApp Astra Trident deployed. Install one of the four most recent versions of Astra Trident using Helm. Go to the Astra Trident docs for installation steps using Helm.
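
A minimal sketch of the Helm-based install, assuming the repository URL and trident-operator chart name from the Astra Trident docs; check those docs for the chart versions that match the supported Astra Trident releases:

  # Add the Astra Trident Helm repository and install the Trident operator
  helm repo add netapp-trident https://netapp.github.io/trident-helm-chart
  helm repo update
  helm install trident netapp-trident/trident-operator \
      --namespace trident --create-namespace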

Cloud Volumes ONTAP

Cloud Volumes ONTAP must be set up as backend storage for the cluster. Go to the Astra Trident docs for configuration steps.
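
For illustration, a Trident backend definition for Cloud Volumes ONTAP using the ontap-nas driver might look like the following sketch; the backend name, management LIF, SVM, and credentials are placeholders to replace with values from your Cloud Volumes ONTAP system:

  {
      "version": 1,
      "storageDriverName": "ontap-nas",
      "backendName": "cvo-nas-backend",
      "managementLIF": "<cluster-management-ip>",
      "svm": "<svm-name>",
      "username": "<cvo-admin-user>",
      "password": "<cvo-admin-password>"
  }

You would then create the backend in the namespace where Trident runs, for example with tridentctl create backend -f backend.json -n trident.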

Cloud Manager Connector

A Connector must be running in Azure with the required permissions. Learn more below.

Network connectivity

Network connectivity is required between the Kubernetes cluster and the Connector and between the Kubernetes cluster and Cloud Volumes ONTAP. Learn more below.

RBAC authorization

Cloud Manager supports RBAC-enabled clusters with and without Active Directory. The Cloud Manager Connector role must be authorized on each Azure cluster. Learn more below.

Prepare a Connector

A Cloud Manager Connector in Azure is required to discover and manage Kubernetes clusters. You’ll need to create a new Connector or use an existing Connector that has the required permissions.

Add the required permissions to an existing Connector (to discover a managed AKS cluster)

If you want to discover a managed AKS cluster, you might need to modify the custom role for the Connector to provide the required permissions.

Steps
  1. Identify the role assigned to the Connector virtual machine:

    1. In the Azure portal, open the Virtual machines service.

    2. Select the Connector virtual machine.

    3. Under Settings, select Identity.

    4. Click Azure role assignments.

    5. Make note of the custom role assigned to the Connector virtual machine.

  2. Update the custom role:

    1. In the Azure portal, open your Azure subscription.

    2. Click Access control (IAM) > Roles.

    3. Click the ellipsis (…) for the custom role and then click Edit.

    4. Click JSON and add the following permissions (a snippet showing where they fit in the role definition follows these steps):

      "Microsoft.ContainerService/managedClusters/listClusterUserCredential/action"
      "Microsoft.ContainerService/managedClusters/read"
    5. Click Review + update and then click Update.
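
For reference, the two entries go in the actions array of the role's JSON definition. Here's a trimmed sketch of the JSON view; the role name is a placeholder, and your actual role will contain many more actions:

  {
      "properties": {
          "roleName": "<connector-custom-role>",
          "permissions": [
              {
                  "actions": [
                      "...existing actions...",
                      "Microsoft.ContainerService/managedClusters/listClusterUserCredential/action",
                      "Microsoft.ContainerService/managedClusters/read"
                  ]
              }
          ]
      }
  }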

Review networking requirements

You need to provide network connectivity between the Kubernetes cluster and the Connector and between the Kubernetes cluster and the Cloud Volumes ONTAP system that provides backend storage to the cluster.

  • Each Kubernetes cluster must have an inbound connection from the Connector

  • The Connector must have an outbound connection to the Kubernetes cluster over port 443

The simplest way to provide this connectivity is to deploy the Connector and Cloud Volumes ONTAP in the same VNet as the Kubernetes cluster. Otherwise, you need to set up a peering connection between the different VNets.
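
If you set up peering from the Azure CLI, a minimal sketch looks like the following; the resource groups, VNet names, and remote VNet resource IDs are placeholders, and the peering must be created in both directions:

  # Peer the Connector/Cloud Volumes ONTAP VNet to the cluster VNet ...
  az network vnet peering create \
      --name connector-to-k8s \
      --resource-group <connector-rg> \
      --vnet-name <connector-vnet> \
      --remote-vnet <k8s-vnet-resource-id> \
      --allow-vnet-access

  # ... and peer the cluster VNet back to the Connector VNet
  az network vnet peering create \
      --name k8s-to-connector \
      --resource-group <k8s-rg> \
      --vnet-name <k8s-vnet> \
      --remote-vnet <connector-vnet-resource-id> \
      --allow-vnet-access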

Here’s an example that shows each component in the same VNet.

An architectural diagram of an AKS Kubernetes cluster and its connection to a Connector and Cloud Volumes ONTAP in the same VNet.

And here’s another example that shows a Kubernetes cluster running in a different VNet. In this example, peering provides a connection between the VNet for the Kubernetes cluster and the VNet for the Connector and Cloud Volumes ONTAP.

An architectural diagram of an AKS Kubernetes cluster and its connection to a Connector and Cloud Volumes ONTAP in a separate VNet.

Set up RBAC authorization

RBAC validation occurs only on Kubernetes clusters with Active Directory (AD) enabled. Kubernetes clusters without AD will pass validation automatically.

You need to authorize the Connector role on each Kubernetes cluster so that the Connector can discover and manage the cluster.

Before you begin

The subjects: name: value in your RBAC configuration varies slightly based on your Kubernetes cluster type.

  • If you are deploying a managed AKS cluster, you need the Object ID of the system-assigned managed identity for the Connector. This ID is available in the Azure portal (or from the CLI, as shown after this list).

    A screenshot of the system-assigned Object ID window in the Azure portal.

  • If you are deploying a self-managed Kubernetes cluster, you need the username of any authorized user.
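
If you prefer the CLI to the portal, the Object ID of the Connector VM's system-assigned managed identity can also be read with the Azure CLI. A minimal sketch; the resource group and VM name are placeholders:

    # principalId is the Object ID of the system-assigned managed identity
    az vm identity show \
        --resource-group <connector-rg> \
        --name <connector-vm-name> \
        --query principalId --output tsv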

Steps
  1. Create a cluster role and role binding.

    1. Create a YAML file that includes the following text. In the subjects: section, keep kind: User and replace the name: value with either the Object ID of the system-assigned managed identity (for AKS) or the username of an authorized user (for self-managed clusters), as described above.

      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
          name: cloudmanager-access-clusterrole
      rules:
          - apiGroups:
                - ''
            resources:
                - secrets
                - namespaces
                - persistentvolumeclaims
                - persistentvolumes
            verbs:
                - get
                - list
                - create
          - apiGroups:
                - storage.k8s.io
            resources:
                - storageclasses
            verbs:
                - get
                - list
          - apiGroups:
                - trident.netapp.io
            resources:
                - tridentbackends
                - tridentorchestrators
            verbs:
                - get
                - list
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
          name: k8s-access-binding
      subjects:
          - kind: User
            name: Object (principal) ID (for AKS) or username (for self-managed)
            apiGroup: rbac.authorization.k8s.io
      roleRef:
          kind: ClusterRole
          name: cloudmanager-access-clusterrole
          apiGroup: rbac.authorization.k8s.io
    2. Apply the configuration to the cluster.

      kubectl apply -f <file-name>
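
To sanity-check the binding, you can run an impersonated access query as a cluster admin and list the binding itself; the subject below is a placeholder for the Object ID or username you used in the YAML:

      # Should print "yes" if the ClusterRole is bound to the subject
      kubectl auth can-i list persistentvolumes --as="<object-id-or-username>"
      kubectl get clusterrolebinding k8s-access-binding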