Azure Kubernetes Service - Setting up a Cluster - Part 1

What do we need? An Azure Kubernetes cluster built with Terraform, set up so az-cli and kubectl can have at it when it's done! Easy enough, right? ;)

The scenario we're building here is getting a Kubernetes cluster up & running in Azure, and using a private Docker registry to store and pull our application images from. We'll be building and deploying into the 'Australia East' Azure region.

Series Overview

I call it a 'series', but I really only want to break it up into a couple (maybe 3) parts so it's easier to write and more concise to read.

So,

  • Part 1 - get Kubernetes cluster up and running on Azure Kubernetes Managed Service (AKS)
  • Part 2 - create a private Docker Registry in the cloud using Azure’s Container Registry Managed service (ACR)
  • Part 3 - deploy a simple application to it.

As usual, we want to automate all the building steps we can. But as you'll see, there are a few parts of this which (to the best of my current knowledge) couldn't be automated and were done manually.

Right, let’s go!

Pre-requisites

You will need to set up a few things before you can deploy anything to the cloud.

  1. Azure portal account
  2. Azure az-cli (command line interface)
  3. Terraform installed (zipped binary, copy to ~/bin)
  4. (optional) Kubectl installed
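
A quick sanity check that everything is installed and on your PATH (the exact version output will differ for you):

$ az --version
$ terraform version
$ kubectl version --client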

Once you have all these in place, proceed to the Azure setup.

Azure setup

Once your portal account is created, and you have a valid subscription (Free Tier works), you need to log in to link your azure-cli to your account, so from your terminal run this:

$ az login

A browser window opens and you log in with your Azure portal account...

and then you’ll see something like this:

$ az login
Note, we have launched a browser for you to login. For old experience with device code, use "az login --use-device-code"
Opening in existing browser session.
You have logged in. Now let us find all the subscriptions to which you have access...

which then spits out some subscription information at you in JSON:

[
  {
    "cloudName": "AzureCloud",
    "id": "0d667072-XXXX-46ef-a5b4-86979fdXXXXX",
    "isDefault": true,
    "name": "Free Trial",
    "state": "Enabled",
    "tenantId": "889cad64-XXXX-410e-b4fd-1cbXXXX537d",
    "user": {
      "name": "[email protected]",
      "type": "user"
    }
  }
]

Note: this is the same information you can get back at any time with the following command: $ az account list
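
If your account has more than one subscription, it's worth pinning the one you want as the default before going any further (using the 'id' from the JSON above):

$ az account set --subscription "0d667072-XXXX-46ef-a5b4-86979fdXXXXX"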

Next, you need a service principal created so it has permissions over your infrastructure.

Setup Service Principals for Cluster

You need a Service Principal (sp) set up with Role-Based Access Control (RBAC) so it has permission and authority over your cluster.

Run this to set up an 'sp' for the subscription you saw above under 'id':

$ az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/0d667072-XXXX-46ef-a5b4-86979fdXXXXX" -n "cluster-admin"

What's happening here:

  • this creates a new service principal (sp) named “cluster-admin”
  • the ‘Contributor’ role is assigned to it
  • this sp has power over the subscription with id=’0d667072-XXXX-46ef-a5b4-86979fdXXXXX’ (Contributor has read/write access)

Example output looks like this - first the command throws a notice at you:

Changing "cluster-admin" to a valid URI of "http://cluster-admin", which is the required format used for service principal names
Retrying role assignment creation: 1/36

Then you get a JSON result:

{
  "appId": "475cafbb-b339-40b4-8e69-XXXXXXXXXXX",
  "displayName": "cluster-admin",
  "name": "http://cluster-admin",
  "password": "6f8c2642-df14-XXX-9438-XXXXXXXXXXXXX",
  "tenant": "889cad64-XXX-410e-b4fd-1cbfd002537d"
}

So now you have your 'sp' set up: it has the right permissions, and you have your appId (client_id) and password (client_secret) to use in Terraform for building the cluster.
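
A side note: rather than hard-coding these two values into variables.tf later on, you can export them as environment variables - Terraform automatically picks up any variable prefixed with TF_VAR_. A minimal sketch using the values above:

$ export TF_VAR_client_id="475cafbb-b339-40b4-8e69-XXXXXXXXXXX"
$ export TF_VAR_client_secret="6f8c2642-df14-XXX-9438-XXXXXXXXXXXXX"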

Terraform setup

As per a usual Terraform build, we need to configure a few key pieces of infrastructure to be built in our cloud.

Provider (provider.tf)

This one's easy - exactly the same as 'aws', but Azure ;)

provider "azurerm" {
  version = "=1.5.0"
}
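
One step the rest of this post assumes: before your first plan or apply, initialise the working directory so Terraform downloads the azurerm provider plugin:

$ terraform init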

Resource (resources.tf)

Basic resource group setup here; the more important part of the 'resource' section is the kubeconfig file created from the output of the 'terraform apply'.

resource "azurerm_resource_group" "k8s" {
  name     = "${var.resource_group_name}"
  location = "${var.location}"
}

# setup to output your kubeconfig from your terraform build
resource "local_file" "kubeconfig" {
  content = "${azurerm_kubernetes_cluster.k8s.kube_config_raw}"
  filename = "./kubeconfig"
}

Why is this kubeconfig so important? It's the config file we feed to 'kubectl' so that it knows which cluster things are being applied to!
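
As a taste of what's coming: you can either pass the file explicitly on every kubectl call (as we do later on), or export it once per shell session:

$ export KUBECONFIG=$PWD/kubeconfig
$ kubectl get nodes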

Cluster (cluster.tf)

The meat & potatoes of what we’re building goes here in cluster.tf.

The cluster resource block sets things up in 3 distinct nested blocks. It starts out with the cluster basics: name, location, resource group and DNS prefix.

  • The ‘linux_profile’ tells us how the vm is accessed i.e. username and ssh keys
  • The ‘agent_pool_profile’ tells the cluster how many agents to stand up, the type and size of each agent
  • The ‘service_principal’ defines what RBAC style access this cluster will be operating under/with.
  • Then some tags to finish up… you can add more!

resource "azurerm_kubernetes_cluster" "k8s" {
  name                = "${var.cluster_name}"
  location            = "${azurerm_resource_group.k8s.location}"
  resource_group_name = "${azurerm_resource_group.k8s.name}"
  dns_prefix          = "${var.dns_prefix}"

  linux_profile {
    admin_username = "ubuntu"

    ssh_key {
      key_data = "${file("${var.ssh_public_key}")}"
    }
  }

  agent_pool_profile {
    name            = "default"
    count           = "${var.agent_count}"
    vm_size         = "Standard_DS2_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
  }

  service_principal {
    client_id     = "${var.client_id}"
    client_secret = "${var.client_secret}"
  }

  tags {
    Environment = "Sandbox"
  }
}

Note: you can query for the client_id, but you need to know the client_secret. If these don't already exist, you need to set up the service principal as shown in the 'Setup Service Principals for Cluster' section above.
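
For example, to look up the client_id of an existing service principal by name, something like this should do it:

$ az ad sp list --display-name "cluster-admin" --query "[0].appId" -o tsv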

Variables (variables.tf)

OK, we need to know what the "${var.}" values in our .tf files resolve to, so here they all are in the variables.tf file:

variable "dns_prefix" {
  default = "devbox"
}

variable "cluster_name" {
  default = "devbox"
}

variable "ssh_public_key" {
  default = "~/.ssh/azure.pub"
}

variable "agent_count" {
  default = "2"
}

variable "client_id" {
  default = "5bc4c872-b4c9-423f-a155-XXXXXXXXXX"
}

variable "client_secret" {
  default = "6f8c2642-df14-45db-9438-XXXXXXXXXX"
}

variable "resource_group_name" {
  default = "dev_resource_group"
}

variable "location" {
  default = "australiaeast"
}
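
Any of these defaults can be overridden at apply time without editing the file, e.g.:

$ terraform apply -var "agent_count=3"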

Output (output.tf)

Some debugging output to make sure we're seeing the kubeconfig details come back to us, and to check the host name.

# note: the raw kubeconfig is also written to ./kubeconfig by the local_file resource in resources.tf

output "kube_config" {
  value = "${azurerm_kubernetes_cluster.k8s.kube_config_raw}"
}

output "host" {
  value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.host}"
}
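
Once the apply has run, these outputs can be re-read from state at any time:

$ terraform output host
https://devbox-d4752f36.hcp.australiaeast.azmk8s.io:443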

Terraform apply!

Right! party time!

Run your apply and you will be prompted for a ‘yes’, and then terraform will build your cluster and output a ./kubeconfig file for you:

$ terraform apply
azurerm_resource_group.k8s: Refreshing state... (ID: /subscriptions/0d667072-8cd2-46ef-a5b4-...e4bd/resourceGroups/dev_resource_group)
azurerm_kubernetes_cluster.k8s: Refreshing state... (ID: /subscriptions/0d667072-8cd2-46ef-a5b4-...ontainerService/managedClusters/devbox)
local_file.kubeconfig: Refreshing state... (ID: 682263c9d17e39c1e7a6c0f2137df741a008783e)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + local_file.kubeconfig
      id:       <computed>
      content:  "apiVersion: v1\nclusters:\n- cluster:\n    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUV4ekNDQXErZ0F3SUJBZ0lRQ1krcGdVenVSU3pXTUwvTTdTN0FpakFOQmdrcWhraUc5dzBCQVFzRkFEQU4KTjc0QndYVHhFa3lNd2lvZ1YrcHFzaTZQblM3UEJwOEFodmtuNm9zT1grejRBdktudTI5bzl4MGQveUgKaE41RFZCeXNmK0swRWxXVVZud1lINzRDTUVXYXF1UGRBZjFQCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K\n    server: https://devbox-d4752f36.hcp.australiaeast.azmk8s.io:443\n  name: devbox\ncontexts:\n- context:\n    cluster: devbox\n    user: clusterUser_dev_resource_group_devbox\n  name: devbox\ncurrent-context: devbox\nkind: Config\npreferences: {}\nusers:\n- name: clusterUser_dev_resource_group_devbox\n  user:\n    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUU5ekNDQXQrZ0F3SUJBZ0lSQU52Vnk0NEpOaDQ3NTI0a1lGWW5kUEF3RFFZSktvWklodmNOQVFFTEJRQXcKRtjUWF2cU51SlFrVFF1KzdqQWxYQWZQSzhDVndUMQpKTXd4SUdMbUNtblF6Qmd1cHRicmJPZW1nazBCbG9naXhyWVYKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=\n    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS2dJQkFBS0NBZ0VBczlzdWRydVBqbU4zRW9Rd3BlSEwvWGNsRWV3UjNpZmxlaXJoMmJJek04V0t3OhaemdtbkE3R0pTaHF1eW5VQnY4aktTS3BhZz09Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==\n    token: fddf23225dd54ec1570f3c57e690dbea\n"
      filename: "./kubeconfig"


Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

after you type yes, you get this (heavily cut down but gives you an idea of the output):

local_file.kubeconfig: Creating...
  content:  "" => "apiVersion: v1\nclusters:\n- cluster:\n    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUV4ekNDQXErZ0F3SUJBZ0lRQ1krcGdVenVSU3pXTUwvTTdTN0FpakFOQmdrcWhraUc5dzBCQVFzRkFEQU4KTVFzd0NRWURWUVFERXdKallUQWVGdzB4T1RBeE1qZ3dPRFEyTVRsYUZ3MHlNVEF4TWpjd09EUTJNVGxhTUEweApDekFKQmdOVkJBTVRBbU5oTUlJQ0lqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FnOEFNSUlDQ2dLQ0FnRUFyVDFTCkIrTXhKalBlMTVTcTFSQ0tCeWg1MnNmTzhFYnJqQTkvbTlBODFpa3hWSFZ4ekZJL2tsWlYwMVZLOTFDaWVYTisKT3Ydra1ByNW5ZVHhkcktIajRyeHRySGFVa29VMW5vOXBUUnhPTlVyCnQxb1lSQWF6bFhwVUVFM1pNS0E0eStPQWxjR0NJK2RCbHhDL1NhMzN2VXBPL1FXR2p0RDhtWDRad3F1ZHZqbHAKcW93QWNiZVhWNWVtSkZpZEt2aGlscnRFUjMyQjBIOU5OUGJVbWhXWkZZWWEwSm5rSjk2RTBCWmpadGhLWHY1UQp0OENXT09mUU1OaUFPRFNvQjhCMzNldWxRTSt4anFKVGM3SXZldCtHc1Rxa1AvN1JZSlVRT1NqWG5RMWtFRnYzCkxwTkc1Njc0QndYVHhFa3lNd2lvZ1YrcHFzaTZQblM3UEJwOEFodmtuNm9zT1grejRBdktudTI5bzl4MGQveUgKaE41RFZCeXNmK0swRWxXVVZud1lINzRDTUVXYXF1UGRBZjFQCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K\n    server: https://devbox-d4752f36.hcp.australiaeast.azmk8s.io:443\n  name: devbox\ncontexts:\n- context:\n    cluster: devbox\n    user: clusterUser_dev_resource_group_devbox\n  name: devbox\ncurrent-context: devbox\nkind: Config\npreferences: {}\nusers:\n- name: clusterUser_dev_resource_group_devbox\n  user:\n    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUU5ekNDQXQrZ0F3SUJBZ0lSQU52Vnk0NEpOaDQ3NTI0a1lGWW5kUEF3RFFZSktvWklodmNOQVFFTEJRQXcKRFRFTE1Ba0dBMVVFQXhNQ1kyRXdIaGNOTVRrd01USTRNRGcwTmpFNVdoY05NakV3TVRJM01EZzBOakU1V2pBcQpNUmN3RlFZRFZRUUtFdzV6ZVhOMFpXMDZiV0Z6ZEdWeWN6RVBNQTBHQTFVRUF4TUdZMnhwWlc1ME1JSUNJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQWc4QU1JSUNDZ0tDQWdFQXM5c3VkcnVQam1OM0VvUXdwZUhML1hjbEVld1IKM2lmbGVpcmgyYkl6TThXS3c5bFRWKzdJTVRFSXhzeDJSNkNBcjRnR0VreTh3ZGpaMll3aXg0MmIyZWN1MmZaZwpzS1d5T29YT21pU2VXUlJXNDZCTE1oYzJyaTFiTXNMMFZLT1hjbkJ2TTJNSnB4V0lLenNFWUtTWEk2b3o1cEpDUlgrSDB4Tk00Ck9EVFF3ZGZCNVJRb2VzY0s4VVFOQk9qQ3plQ083Rm9uYWh1WExBOE5kY3BOQ3pZblgvTitHUEhBQmZRRnpQSTQKNHFLbCtaekoyQzR1WHkvOC8wU1drK0R3VnA2RXNIeUFPaytjUWF2cU51SlFrVFF1KzdqQWxYQWZQSzhDVndUMQpKTXd4SUdMbUNtblF6Qmd1cHRicmJPZW1nazBCbG9naXhyWVYKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=\n    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS2dJQkFBS0NBZ0VBczlzdWRydVBqbU4zRW9Rd3BlSEwvWGNsRWV3UjNpZmxlaXJoMmJJek04V0t3OWxUClYrN0lNVEVJeHN4MlI2Q0FyNGdHRWt5OHdkaloyWXdpeDQyYjJlY3UyZlpnc0tXeU9vWE9taVNlV1JSVzQ2QkwKTWhjMnJpMWJNc0wwVktPcFhoNUk2Q1QrNzVsT1U4T2mZHIrQ2VwelRPNHlweUppUjQrWjZ2CmxnNFAwN0YxVUJyMWR0ZXAwclQ4ZHkycXEwNUZ3MGxDZi93YVlZcUx1TXEyd0gwSC9QbEx1a1M2L041STEvQncKa3dVbXZIRXFsSTBVdHZmSkJ6NWlIMUpvOUFzT09VbDdGUm52YnNZRE5IR1ZIMHJtU0N5WUFFKzQ2VGJwMHkrTApIVmg0TzlYWFBzQVhheTNxSzlqTzkvUEFwbXhYbmRQUHg3V3haemdtbkE3R0pTaHF1eW5VQnY4aktTS3BhZz09Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==\n    token: fddDDDDDDDDDDDDDDDDDDe690dbea\n"
  filename: "" => "./kubeconfig"
local_file.kubeconfig: Creation complete after 0s (ID: 682263c9d17e39c1e7a6c0f2137df741a008783e)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

host = https://devbox-d4752f36.hcp.australiaeast.azmk8s.io:443

and then you’ll see your new ./kubeconfig file in the current directory

$ ls -ltr
total 228
-rw-rw---- 1 darthvaldr darthvaldr    44 Jan 18 13:19 provider.tf
-rw-rw---- 1 darthvaldr darthvaldr   739 Jan 18 13:19 cluster.tf
-rw-rw---- 1 darthvaldr darthvaldr   485 Jan 28 21:45 variables.tf
-rw-rw---- 1 darthvaldr darthvaldr   249 Jan 28 22:15 resources.tf
-rw-rw---- 1 darthvaldr darthvaldr   174 Jan 28 22:19 output.tf
-rwxrwx--- 1 darthvaldr darthvaldr  9462 Jan 28 22:20 kubeconfig
-rw-rw---- 1 darthvaldr darthvaldr 33614 Jan 28 22:20 terraform.tfstate.backup
-rw-rw---- 1 darthvaldr darthvaldr 33614 Jan 28 22:20 terraform.tfstate

SUCCESS!

Verify with ‘az’ or ‘kubectl’

Now remember how I mentioned the './kubeconfig' was an important file to have? You now need it to run a few basic queries against your brand new Azure Kubernetes cluster.

Check it with 'kubectl'

$ kubectl --kubeconfig ./kubeconfig get nodes
NAME                     STATUS   ROLES   AGE   VERSION
aks-default-57560735-0   Ready    agent   38m   v1.9.11
aks-default-57560735-1   Ready    agent   38m   v1.9.11
$ kubectl --kubeconfig ./kubeconfig describe node aks-default-57560735-0
Name:               aks-default-57560735-0
Roles:              agent
Labels:             agentpool=default
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=Standard_DS2_v2
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=australiaeast
                    failure-domain.beta.kubernetes.io/zone=0
                    kubernetes.azure.com/cluster=MC_dev_resource_group_devbox_australiaeast
                    kubernetes.io/hostname=aks-default-57560735-0
                    kubernetes.io/role=agent
                    node-role.kubernetes.io/agent=
                    storageprofile=managed
                    storagetier=Premium_LRS
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 28 Jan 2019 22:03:42 +1300

Check it with 'az aks'

$ az aks list -o table
Name    Location       ResourceGroup       KubernetesVersion    ProvisioningState    Fqdn
------  -------------  ------------------  -------------------  -------------------  -------------------------------------------
devbox  australiaeast  dev_resource_group  1.9.11               Succeeded            devbox-d4752f36.hcp.australiaeast.azmk8s.io
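
As an alternative to carrying ./kubeconfig around, az can merge the cluster credentials into your default ~/.kube/config, after which plain 'kubectl' commands just work:

$ az aks get-credentials --resource-group dev_resource_group --name devbox
$ kubectl get nodes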

Or just log in to your Azure portal and have a look at it in the GUI:

(screenshot: the new cluster in the Azure portal)
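
One last tip: this cluster costs money while it's running, so when you're done experimenting, tear it all down (you'll be prompted for a 'yes' again):

$ terraform destroy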

In Part 2 we will create an Azure Container Registry (using 2 different methods) in which to store our application images, ready for deployment.
