In AKS the control plane is a fully managed service that operates independently of one or more node pools, which run system and user pods. A cluster must be created with at least one node pool (the default node pool), and others can subsequently be added.
Microsoft recommends that the default node pool be considered tainted, since it runs system pods, and that application pods should run on additional node pools.
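This layout can be sketched with the Azure CLI. The resource group, cluster, and pool names below are hypothetical; `CriticalAddonsOnly` is the taint AKS documents for system-only node pools:

```shell
# Taint the default (system) node pool so that only pods tolerating
# CriticalAddonsOnly — i.e. system pods — are scheduled onto it.
az aks nodepool update \
  -g my-rg --cluster-name my-cluster -n nodepool1 \
  --node-taints CriticalAddonsOnly=true:NoSchedule

# Add a separate user-mode node pool for application workloads.
az aks nodepool add \
  -g my-rg --cluster-name my-cluster -n apps \
  --mode User --node-count 3
```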
Each node pool has its own Kubernetes version, and node pools can be upgraded independently of one another. The control plane must be upgraded before the node pools.
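The ordering can be expressed with the Azure CLI; the names and target version below are hypothetical:

```shell
# 1. Upgrade the control plane only.
az aks upgrade -g my-rg -n my-cluster \
  --control-plane-only --kubernetes-version 1.27.3

# 2. Then upgrade each node pool independently.
az aks nodepool upgrade -g my-rg --cluster-name my-cluster \
  -n nodepool1 --kubernetes-version 1.27.3
```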
kubelogin facilitates authentication with AKS using Azure AD. The tools can be installed as follows (be mindful that the default paths are system locations like /usr/local, where the installer may overwrite files it shouldn't):
```shell
az aks install-cli \
  --install-location ~/.azure/kubernetes \
  --kubelogin-install-location ~/.azure/kubernetes
```
To install a kubectl context for a cluster, use the Azure CLI:
```shell
$ az aks get-credentials -g my-rg -n my-cluster
Merged "aks-dev" as current context in /Users/lukecarrier/.kube/config
```
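For AAD-enabled clusters, the resulting kubeconfig can then be converted to authenticate via kubelogin. A sketch, assuming the Azure CLI's cached login is the desired credential source:

```shell
# Rewrite the AAD auth stanzas in the kubeconfig to exec kubelogin,
# reusing the Azure CLI's cached credentials.
kubelogin convert-kubeconfig -l azurecli
```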
AKS uses Azure AD (AAD) for authentication, exposing users by UPN and groups by object ID.
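This matters when writing RBAC bindings: a group subject must reference the AAD group's object ID, not its display name. A minimal sketch with a hypothetical group ID:

```shell
# Grant cluster-admin to an AAD group, identified by its object ID.
kubectl create clusterrolebinding aad-admins \
  --clusterrole cluster-admin \
  --group 00000000-0000-0000-0000-000000000000
```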
Addons can be configured at creation time in the Portal and reconfigured with the Azure CLI:
```shell
az aks enable-addons -g my-rg -n my-cluster --addons name
az aks disable-addons -g my-rg -n my-cluster --addons name
```
- Dashboard deploys the Kubernetes Dashboard.
- HTTP application routing deploys an ingress controller (backed by NGINX) and an ExternalDNS controller backed by Azure DNS.
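For example, HTTP application routing can be enabled and its managed DNS zone name retrieved for use in Ingress hosts. The resource names are hypothetical; the addon name and query path follow Azure's documentation:

```shell
az aks enable-addons -g my-rg -n my-cluster \
  --addons http_application_routing

# Retrieve the generated Azure DNS zone name.
az aks show -g my-rg -n my-cluster \
  --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName \
  -o tsv
```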
Note that the Terraform provider often recreates the resource rather than applying the change in place, even though the underlying API doesn't require this. Make the changes via the Azure CLI first, then allow Terraform to refresh its state.
To access an Azure Container Registry, an AcrPull role assignment must exist for the <cluster name>-agentpool Service Principal.
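A sketch of creating that assignment, assuming hypothetical resource names; note that `az aks update --attach-acr` achieves the same result in a single step:

```shell
# Look up the cluster's agent pool identity and the registry's resource ID.
CLIENT_ID=$(az aks show -g my-rg -n my-cluster \
  --query identityProfile.kubeletidentity.clientId -o tsv)
ACR_ID=$(az acr show -g my-rg -n myregistry --query id -o tsv)

# Grant the agent pool identity pull access to the registry.
az role assignment create \
  --assignee "$CLIENT_ID" --role AcrPull --scope "$ACR_ID"
```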
Accessing Azure Key Vault