Hashicorp Vault on Kubernetes with a MySQL and Django Application

8 June 2020

Securely deploying an application on Kubernetes is at least as important as running it on Kubernetes in the first place.
With Kubernetes, scaling and deploying applications is fully automated, which means credentials must be provided automatically and securely as well. Using environment variables to configure your passwords is not secure!
This article will show how to create a local Kubernetes cluster using Minikube and deploy a 2-tier application with secure credentials using Hashicorp Vault, MySQL 8, and The Django Web-Framework.

If you are lazy and know what you are doing, you can flip the TL;DR switch at the top to get the "Too Lazy; Didn't Read" version of this article; switch it back whenever you are unsure.

When I created my first Docker container, my main concern was security.
Hardcoded credentials are a big no-no for me, as they can end up in your git repository, and configuring them as environment variables exposes them to log files. This got me looking into HashiCorp Vault.
With Vault you can handle credentials the only correct way: with dynamic credentials! Static credentials always get leaked one way or another; dynamic credentials become unusable once their lease expires.
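The lease mechanics can be illustrated with a toy check (this is just an illustration of the concept, not Vault code):

```python
from datetime import datetime, timedelta

def lease_is_valid(issued_at: datetime, lease_duration_s: int, now: datetime) -> bool:
    """A credential is usable only until its lease duration has elapsed."""
    return now < issued_at + timedelta(seconds=lease_duration_s)

issued = datetime(2020, 6, 8, 12, 0)
lease_is_valid(issued, 86400, issued + timedelta(hours=12))   # still valid
lease_is_valid(issued, 86400, issued + timedelta(hours=25))   # expired
```

Vault does the revocation server-side; a leaked credential simply stops authenticating once the lease runs out.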

In this article we use Vault to provide credentials to our container in a secure way.
We deploy Vault in our Kubernetes cluster and use a sidecar agent to provide credentials to our MySQL container as a credential file. The Kubernetes service account that runs the MySQL container has access to the credential.
This credential is dynamic and rotates every 24 hours; if it leaks, nobody can use it after that.

$ Kubernetes with Minikube on Windows 10

To deploy Kubernetes I chose Minikube over Docker for Windows because I believe it's more mature, it has more options for troubleshooting and configuration, and I found the installation and removal process very streamlined.
Here is an excellent article describing the pros and cons of both in case you want to use Docker for Windows.

Please install Chocolatey first; it's the most used command-line package manager for Windows until Microsoft finally releases winget-cli.

#install Chocolatey

Set-ExecutionPolicy Bypass -Scope Process -Force
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

#install Minikube, Helm and Docker-cli

choco install minikube kubernetes-helm docker-cli -y

By default, Minikube creates a Linux VM with 2GB of memory, 2 CPUs, and a 20GB (thin-provisioned) disk.
I've tried running with less memory, but that resulted in unexpected crashes. If you really have to use less, 1500MB is the absolute minimum for the VM to start.

You have to choose which hypervisor you prefer to use. I've tried Hyper-V, VirtualBox, and VMware Workstation.
On my laptop with only 4GB of memory, I found that VMware Workstation performed best. I also had it installed already and prefer to use it for my other VMs. If you don't have VMware installed, I would recommend Hyper-V, as it's easy to install, free, and has the best integration with Windows 10.

#start Kubernetes

minikube start --driver=[hyperv|vmware|virtualbox]

Go ahead and launch the dashboard, we will never use it again but hey, it's there and now you know it.

minikube dashboard

For the rest of the project, we need config files for Kubernetes and a Django container to deploy.

#download the Kubernetes & Helm config files and the Django container to deploy. 

git clone https://github.com/baelen-git/django-borisaelen.git 

#change directory, this will be our working directory for further reference

cd django-borisaelen

$ HashiCorp Vault & Consul 

Vault will manage our credentials and Consul will be used as the storage backend for Vault.
We choose Consul as a storage backend because it provides an easy snapshot backup solution for our data.
Make sure you watch the excellent introduction videos about Consul and Vault from Hashicorp.

Just like all other Hashicorp products, Vault and Consul are API-driven and come with a very lightweight single binary server. You can run the Vault server in 3 modes: Dev, Standalone, and HA.
Vault provides support for different storage backends, however, you should keep in mind that your storage backend is responsible for the backup of your data, Vault does not provide this feature.
We will use HA mode because it uses the Consul storage backend by default. The Vault and Consul servers are so lightweight that I didn't mind running 3 of each, which gives a nice representation of a production environment.
Dev mode runs completely in memory and doesn't offer any data persistence, so don't use that.
Standalone mode uses a filesystem storage backend by default, but that makes backups complicated.
If you have a database or cloud storage with back-up available to you, you might want to use that instead of Consul.

We will install Vault and Consul using Helm.
Helm is a package manager for Kubernetes. It provides an easy-to-use values file for customization and will generate a Kubernetes configuration based on templates provided by Hashicorp.
We've already installed Helm with Chocolatey earlier.

Have a look at the kubernetes\helm-consul-values.yaml file: (Click here for all possible values)

global:
  datacenter: laptop

client:
  enabled: true

server:
  enabled: true
  storage: "1Gi" #(default=10Gi)

  affinity: "" #(default = podAntiAffinity) Disable 1 pod per node affinity for 3 replicas on 1 node

ui: 
  service:
    type: "LoadBalancer" #expose Consul to your Network.

Now have a look at the kubernetes\helm-vault-values.yaml file: (Click here for all possible values)

server:
  affinity: "" #make sure you can run 3 replicas on 1 node
  ha:
    enabled: true
  service:
    type: "LoadBalancer" #to access the vault from the PC host

Based on these values Helm will generate a Kubernetes deployment for Vault and Consul.

#install consul and vault

helm install consul --values .\kubernetes\helm-consul-values.yaml https://github.com/hashicorp/consul-helm/archive/master.tar.gz
helm install vault --values .\kubernetes\helm-vault-values.yaml https://github.com/hashicorp/vault-helm/archive/master.tar.gz

In case you already have Vault running elsewhere in your environment, you only have to install the Vault Injector Agent. Note that it is recommended to set up a Service in Kubernetes pointing to your Vault address, for flexibility. (use this example)

helm install vault --set "injector.externalVaultAddr=http://<vault-service-name|vault-hostname>:8200" https://github.com/hashicorp/vault-helm/archive/master.tar.gz

Check if all pods are running

kubectl get pod

You should see the following output

NAME                                    READY   STATUS             RESTARTS   AGE
consul-consul-cwbz8                     1/1     Running            2          5d20h
consul-consul-server-0                  1/1     Running            2          20m
consul-consul-server-1                  1/1     Running            2          58s
consul-consul-server-2                  1/1     Running            2          58s
vault-0                                 0/1     Running            2          5d20h
vault-1                                 0/1     Running            2          5d20h
vault-2                                 0/1     Running            2          5d20h
vault-agent-injector-7cf6975f99-tcvt2   1/1     Running            2          5d20h

Initializing the Vault generates your keys and a root token that can be used to unseal and log in to the Vault. For demo purposes we will create 1 key and require only 1 key to unseal the vault. We will save this key to a file that is stored in my Google Drive. DO NOT USE THIS IN PRODUCTION. Hashicorp advises at least 5 keys, kept safe in 5 different locations, with a threshold of 3 keys to open the vault.

#wait for all pods to be running and then initialize the vault

kubectl exec vault-0 -- vault operator init -key-shares=1 -key-threshold=1 -format=json > cluster-keys.json

#unsealing the vault opens it for requests

$keys = Get-Content '.\cluster-keys.json' | Out-String | ConvertFrom-Json
kubectl exec vault-0 -- vault operator unseal $keys.unseal_keys_b64
kubectl exec vault-1 -- vault operator unseal $keys.unseal_keys_b64
kubectl exec vault-2 -- vault operator unseal $keys.unseal_keys_b64

We will access our Vault using one of the Vault servers as a client. To log in with the credentials stored in the $keys variable, we issue a login command that saves the root token to a file inside the vault-0 container. With the second command we start an interactive terminal (-it) session with the shell on our vault-0 container.

#login to the vault 

kubectl exec -it vault-0 -- vault login $keys.root_token 
kubectl exec -it vault-0 -- /bin/sh

Create a static root credential for the initial installation of MySQL, which will be rotated by Vault later.
As a failsafe, MySQL will still allow root login with this initial root password, but only via localhost; therefore we should store this password in our Vault.

#enable the Key/Value secret engine to store simple Key/Value pairs and create a static root credential for MySQL

vault secrets enable -path=secret kv-v2
vault kv put secret/borisaelen-website/database mysql_rootpw="1ns3rt_y0ur_0wn_p@ssw0rd"

#vault provides access to secrets using policies. Create a policy to read the MySQL root password

vault policy write borisaelen-website-ro - <<EOF
path "secret/data/borisaelen-website/database" {
    capabilities = ["read"]
}
EOF

Making this password available inside our container requires us to enable the Kubernetes authentication method in Vault. Vault will communicate with Kubernetes using the token of the service account that runs the Vault containers. Conveniently, Kubernetes makes this token (and a CA certificate) available inside the Vault container under /var/run/secrets. This means we can create our config from the Vault container with this simple command:

#enable Kubernetes authentication in Vault

vault auth enable kubernetes
vault write auth/kubernetes/config \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

#create a Kubernetes Authentication role to map the Kubernetes service account with the vault policy

vault write auth/kubernetes/role/borisaelen \
    bound_service_account_names=borisaelen-sa \
    bound_service_account_namespaces=default \
    policies=borisaelen-website-ro \
    ttl=24h
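Behind the scenes, the injected Vault agent authenticates by POSTing the pod's service-account JWT to this Kubernetes auth login endpoint, naming the role we just created. A rough sketch of the request it builds (the helper function and the in-cluster Vault address are my assumptions, not the agent's actual code):

```python
from typing import Dict, Tuple

def build_k8s_login_request(vault_addr: str, role: str, jwt: str) -> Tuple[str, Dict[str, str]]:
    """Build the URL and JSON payload for Vault's Kubernetes auth login endpoint."""
    url = f"{vault_addr}/v1/auth/kubernetes/login"
    payload = {"role": role, "jwt": jwt}
    return url, payload

# the agent reads the JWT from the pod's service-account mount
url, payload = build_k8s_login_request("http://vault:8200", "borisaelen", "<pod-jwt>")
```

Vault then asks the Kubernetes TokenReview API (using the reviewer JWT we configured above) whether that pod token is genuine before handing out a Vault token with the mapped policy.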

$ MySQL Containers with Vault support

I've created a Kubernetes deployment config which you can find in kubernetes\mysql-deployment.yaml.
Please open this file to understand the deployment specifications.

In this file you will find the following information:
To access our secret inside the container, we run the container under the "borisaelen-sa" service account.
We create this Kubernetes service account with the following code.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: borisaelen-sa

We are running the container under this service account with this code

apiVersion: apps/v1 
kind: Deployment
spec:
  template:
    spec:
      serviceAccountName: borisaelen-sa

We added the following annotations to our deployment to get the credentials from the vault. 

apiVersion: apps/v1 
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "borisaelen"
        vault.hashicorp.com/agent-inject-secret-mysql_rootpw: 'secret/borisaelen-website/database'
        vault.hashicorp.com/agent-inject-template-mysql_rootpw: | 
            {{- with secret "secret/borisaelen-website/database" -}}
            {{ .Data.data.mysql_rootpw }}
            {{- end }}

We try to assume the "borisaelen" role that was created earlier.
With this role applied we will try to access the root password under its specified path. 
We use a template to print the password to the password file.

The last step is to tell MySQL to use the MYSQL_ROOT_PASSWORD_FILE.
You can do this by passing the following environment variable to your container

apiVersion: apps/v1 
kind: Deployment
spec:
  template:
    spec:
      containers:
      - image: mysql:8
        env:
        - name: MYSQL_ROOT_PASSWORD_FILE
          value: /vault/secrets/mysql_rootpw

#deploy your MySQL application with a root password for localhost

kubectl create -f  '.\kubernetes\mysql-deployment.yaml'

#wait for all pods to be running and login to your mysql container

kubectl exec deploy/borisaelen-mysql -c mysql -ti -- /bin/bash

#create a database and a user for the Django container with a temporary password. This password will be rotated by Vault.

mysql -uroot -p`cat /vault/secrets/mysql_rootpw`
CREATE DATABASE borisaelen CHARACTER SET utf8;
CREATE USER 'django'@'%' IDENTIFIED BY 'temppw';GRANT ALL PRIVILEGES ON borisaelen.* TO 'django'@'%';

Since version 8.0, MySQL uses caching_sha2_password authentication by default. In case your client doesn't support caching_sha2_password, you should create the Django user with mysql_native_password authentication. If you are looking for the correct apk package for the Alpine docker container, you should install the mariadb-connector-c-dev package.

CREATE USER 'django'@'borisaelen-django' IDENTIFIED WITH mysql_native_password BY 'temppw';GRANT ALL PRIVILEGES ON borisaelen.* TO 'django'@'borisaelen-django';

$ Using Dynamic database credentials for MySQL

Time to rotate our database credentials! Let's log in to our vault again.

kubectl exec -it vault-0 -- vault login $keys.root_token
kubectl exec -it vault-0 -- /bin/sh

The username and password you specify when creating the database config will be used to rotate passwords and/or add dynamic users. Hashicorp recommends using a dedicated Vault user for this. I didn't, because I want to rotate my root password. Be advised that after rotation the root password can no longer be retrieved from the vault; only Vault knows the password! From Vault you create new dynamic users to grant root access to other persons or services.

#configure Vault Database credentials 

vault secrets enable database
vault write database/config/borisaelen-website \
plugin_name=mysql-database-plugin \
connection_url="{{username}}:{{password}}@tcp(borisaelen-mysql:3306)/"  \
allowed_roles="*" \
username="root" \
password="1ns3rt_y0ur_0wn_p@ssw0rd"

#rotate the password. Now only Vault has remote access to your root account and it's impossible to retrieve the password. (You still have a failsafe root login via localhost)

vault write -force database/rotate-root/borisaelen-website 

With Vault you can create a new user and password every time an application requests access to the database; these are called dynamic credentials. Dynamic credentials have a certain TTL and are created with the creation statement that you specify. This way you can create roles for the different kinds of users that access the database, like readonly, operator, or superuser. We will create a superuser role for all databases because we can't (or don't want to) use our root account for that.

#create a Dynamic superuser account

vault write database/roles/superuser \
db_name=borisaelen-website \
creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT ALL PRIVILEGES ON *.* TO '{{name}}'@'%' WITH GRANT OPTION;" \
default_ttl="1h" \
max_ttl="24h"

#write the allowed_roles again, this is a weird bug

vault write database/config/borisaelen-website allowed_roles="*"

#get the credentials of the dynamic superuser 

vault read database/creds/superuser

Notice that Vault has created a new user in the database with a random password. Any application that wants superuser access just needs to authenticate itself with Vault and issue the above read command to receive its credentials. Dynamic credentials are a great solution.
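Under the hood, `vault read database/creds/superuser` returns a JSON document whose `data` block holds the generated username and password, plus lease metadata at the top level. A sketch of how an application might pull the fields out (the helper is mine; the field names follow the documented response shape of the database secrets engine):

```python
import json

def parse_dynamic_creds(raw: str) -> dict:
    """Extract the generated credentials and lease info from a Vault creds response."""
    resp = json.loads(raw)
    return {
        "username": resp["data"]["username"],
        "password": resp["data"]["password"],
        "lease_duration": resp["lease_duration"],  # seconds the credential stays valid
    }

sample = '{"lease_id": "database/creds/superuser/abc", "lease_duration": 3600, "data": {"username": "v-root-sup-x1", "password": "A1a-randompw"}}'
creds = parse_dynamic_creds(sample)
```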

#create a static user, this one is for Django

vault write database/static-roles/django-admin \
    db_name=borisaelen-website \
    username="django" \
    rotation_period=86400

#add this role to your policy for the Kubernetes authentication with the service account

vault policy write borisaelen-website-ro - <<EOF
path "secret/data/borisaelen-website/database" {
    capabilities = ["read"]
}
path "database/static-creds/django-admin" {
    capabilities = ["read"]
}
EOF

#validate if the user got a new password

vault read database/static-creds/django-admin

$ Django Container

Django Web Framework is a framework for developing web applications in Python.
I've used it to create my own website and put it inside a Docker container.

You will find plenty of documentation and excellent tutorials on the Django website, but using my GitHub repository might be the fastest way to get started. It provides a blog application and all the files required to build a container and deploy it to Kubernetes using Vault credentials. If you have your own container environment you can use that; otherwise, use my git repository.

#set the correct env variables to use the docker daemon inside Minikube

& minikube docker-env | Invoke-Expression

You need to create a local_settings.py file that contains a SECRET_KEY for Django and your debug settings. Click the link to find out what that secret key is used for.

#create a local_settings.py from the template and then edit the file.

cp .\borisaelen\local_settings-template.py .\borisaelen\local_settings.py
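The SECRET_KEY you paste into local_settings.py just needs to be a long random string. One quick way to generate one with Python's standard library (Django ships its own helper for this too; the function name below is my own):

```python
import secrets

def make_secret_key(length: int = 50) -> str:
    """Generate a URL-safe random string suitable as a Django SECRET_KEY."""
    return secrets.token_urlsafe(length)

print(make_secret_key())
```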

#build your container. This will send your working directory to the Docker environment inside Minikube. Docker will then create a container image with the tag "borisaelen-website".

docker build -t borisaelen-website .

#deploy your Django website

kubectl create -f '.\kubernetes\django-deployment.yaml'

#launch the website

minikube service borisaelen-django

$ Protecting your Vault data

It is good practice to back up your Vault data, even in your own lab environment.
It's quite easy to create a snapshot of your Consul data; you can copy this file to any location you want.
We ask Minikube for the URL of the LoadBalancer that exposes the Consul service and append the /v1/snapshot path for our API request.

#create a consul snapshot with the GET method

$consul_snapshot_uri =  (minikube service consul-consul-ui --url=true) + "/v1/snapshot" 
Invoke-WebRequest -Method GET -OutFile .\backup_vault$((Get-Date).ToFileTimeUTC()).gz -Uri $consul_snapshot_uri

#restore a consul snapshot with the PUT method on the same URI

Invoke-WebRequest -Method PUT -InFile .\backup_vault.gz -Uri $consul_snapshot_uri

$ Troubleshooting 

When you want to know what is happening inside a container, check the logs!

kubectl logs pod/<podname>

You can even do that using labels, so you don't have to copy/paste the generated pod name.

kubectl logs -l tier=frontend

When a pod isn't being created for some reason, try describing the Pod or the Deployment.

kubectl describe pod/<podname>
kubectl describe deployment/<deploymentname>

$ Thank you!

Thank you for reading my article, I hope you found what you were looking for.
If you have any questions feel free to post a comment. 


