Knative demo using Gloo

Kubernetes on Google Cloud to exercise Knative using Solo.io’s Gloo.

WIP: This page is still being written… pardon the dust!

I was asked to talk about Knative and decided to try it out on Google Cloud’s GKE, using a service mesh from Solo.io (Gloo) instead of Istio.

Prerequisites

In order to make this easy to reproduce, I am using a virtual machine provisioned by Vagrant as the interface to the Kubernetes cluster. So, start off by making sure you have these installed on your system:

  • Hypervisor (compatible with Vagrant)
  • Git
  • Vagrant

Provisioning

$ KND='knative_demo.git' ; \
git clone git@github.com:flavio-fernandes/$KND $KND

$ cd $KND && \
git clone git@github.com:flavio-fernandes/flaskapp.git flaskapp.git

Edit the files provisioning/ssh_config and Vagrantfile

This is only needed if you are interested in pushing changes from the VM to your git repository. I needed that to show you how to make Knative build from source in the demo screencast.

As part of provisioning the VM, Vagrant will copy a specific ssh key file for pushing changes into Github. You may choose to skip that by commenting out these lines. Otherwise, tweak them to point to the proper file in your environment.
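
For reference, the relevant Vagrantfile lines look something like the sketch below; the source paths are just examples, so point them at the key and ssh_config you actually want copied into the VM.

# Hypothetical sketch -- copy ssh bits into the VM for Github pushes
config.vm.provision "file", source: "provisioning/id_rsa_github",
  destination: "/home/vagrant/.ssh/id_rsa"
config.vm.provision "file", source: "provisioning/ssh_config",
  destination: "/home/vagrant/.ssh/config"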

Optional: Edit the file provisioning/git_config

This is only needed if you did not comment out the Vagrantfile section mentioned above. Change the user section so commits made from the VM are attributed to you.
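
The user section follows the standard git config format; something like this, with your own details:

[user]
        name = Your Name
        email = you@example.com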

Start VM

# cd to where you cloned knative_demo.git and boot vm
$ time vagrant up  ; # this takes about 5 minutes

# To save snapshot. You can create multiples of these.
$ vagrant snapshot save freshAndClean1

# If you ever need to restore from snapshot:
$ vagrant snapshot restore --no-provision freshAndClean1

Authenticate with your Google Cloud Account

At this point, ssh into the VM and follow the steps needed to manage a cluster in GKE:

$ vagrant ssh

# From inside VM

$ cat << EOT >> /home/vagrant/.bashrc_me
export PROJECT=knative-proj
export CLUSTER_NAME=knative1
export CLUSTER_ZONE=us-east1-d
EOT

$ source /home/vagrant/.bashrc_me ; echo $PROJECT

# These commands will set up the needed files in the VM
# to connect you to your Google account. Simply copy and paste
# the verification code as mentioned in the instructions
$ gcloud auth login  && \
gcloud auth application-default login

# If the project does not exist yet, create it and link it to
# your billing via the Google console
$ gcloud projects create $PROJECT --set-as-default
# On browser, open https://console.cloud.google.com/
# Select Billing ==> Link billing account to project

# Back in VM shell, run these final commands to store the
# Google project settings
$ echo $PROJECT ; gcloud config set core/project $PROJECT && \
gcloud config set compute/zone $CLUSTER_ZONE

# Enable services. This can take a minute to complete...
$ gcloud services enable cloudapis.googleapis.com && \
gcloud services enable container.googleapis.com && \
gcloud services enable containerregistry.googleapis.com && \
echo ok

# Set up docker auth to use your Google account
$ gcloud auth configure-docker --project $PROJECT --quiet

Okay! At this point your VM is authenticated with Google, and you may want to save a snapshot of it in case you ever need to jump back to this state. That is as easy as running these commands:

# Get out of VM shell, back to your main system
$ exit

$ vagrant snapshot save freshAndClean2

# If you ever need to restore from snapshot:
# vagrant snapshot restore --no-provision freshAndClean2

Flask Application

The flaskapp.git repo gives us a simple application, easy to deploy and use, that we can containerize.

The VM should have all you need to try the app out before we push it as a docker image to the cloud. That includes port forwarding, so you can access it from your local browser. Here are the commands, if you are curious:

$ vagrant ssh

# From inside VM

# The app could not be simpler. Here is the bulk of it
$ bat /vagrant/flaskapp.git/src/{app,utils}.py
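
If you just want the gist without opening the files, here is a minimal sketch of what the app does; the names and parameter handling below are assumptions for illustration, so check the real source in flaskapp.git.

# sketch.py -- a rough approximation of the flaskapp behavior (not the real code)
import os
import time
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/json')
def json_endpoint():
    delay = request.args.get('delay')
    if delay:
        time.sleep(int(delay) / 1000.0)  # assumption: delay is in milliseconds
    if request.args.get('boom'):
        os._exit(1)  # crash on purpose, so the container gets restarted
    return jsonify(target=os.environ.get('TARGET', 'unknown'))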

# Run it from VM
$ cd /vagrant/flaskapp.git/src && \
TARGET=fromVagrantVM FLASK_DEBUG=1 FLASK_APP=app.py \
flask run --host 0.0.0.0 --port 8080

# To stop it, simply <control>+c

From your local browser, open http://localhost:8080/json
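
Or, if you prefer the command line, the same check works with curl from your host, thanks to the port forwarding:

$ curl -s http://localhost:8080/json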

Docker was installed as part of the Vagrant provisioning, so you can also try running the app from a container in the VM. The Dockerfile used is as simple as it gets.
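
To give you an idea before you look at the real file, a minimal Dockerfile for a Flask app looks roughly like this sketch (the actual one in flaskapp.git may differ):

# Hypothetical sketch of a minimal Flask Dockerfile
FROM python:3-alpine
WORKDIR /app
COPY src/ /app/
RUN pip install flask
ENV FLASK_APP=app.py
EXPOSE 5000
CMD ["flask", "run", "--host=0.0.0.0", "--port=5000"]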

By the way, I will get to Knative soon enough! Hang on just a little more :)

$ vagrant ssh

# From inside VM

$ bat /vagrant/flaskapp.git/Dockerfile

# Build a docker image
$ cd /vagrant/flaskapp.git && \
docker build -t flaskapp . && \
docker images

# Start it locally
$ docker run -e "TARGET=flaskappFromDocker" -d --rm -p 8081:5000 \
--name flaskapp flaskapp

$ docker ps

# To stop it, type
$ docker stop flaskapp

From your local browser, open http://localhost:8081/json

Pushing the app as a docker image into Google Cloud

Let’s tag and push the docker image to a place where the Kubernetes cluster can see it. Since the VM is authenticated with gcloud, let’s just push it there. Later on I will show you how we can use Knative to automatically build the image, but let me not get ahead of myself. :)

# Still inside VM
$ cd /vagrant/flaskapp.git/ && docker build -t foo . 

# gcr.io/${PROJECT}/foo:latest
$ docker tag foo gcr.io/${PROJECT}/foo:latest && \
docker push gcr.io/${PROJECT}/foo:latest

An important caveat here is that the project is part of the image name. Pay attention to that when we start referring to it from the k8s YAML files!
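
For example, with PROJECT set to knative-proj as above, the manifests must reference the image with the project embedded in its name, along these lines:

image: gcr.io/knative-proj/foo:latest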

Deploy Kubernetes Cluster (the easy way)

Deploying the cluster is as easy as invoking these commands at this point.

# This takes about 3 minutes... Good time for getting a coffee refill?!?
$ CLUSTER_VERSION='latest' ; \
time gcloud container clusters create $CLUSTER_NAME \
--zone=$CLUSTER_ZONE \
--cluster-version=${CLUSTER_VERSION} \
--machine-type=n1-standard-4 \
--enable-autoscaling --min-nodes=1 --max-nodes=10 \
--enable-autorepair \
--scopes=service-control,service-management,compute-rw,storage-ro,cloud-platform,logging-write,monitoring-write,pubsub,datastore \
--num-nodes=3

# You should now see your newly constructed K8s cluster!
$ gcloud container clusters list

# Store the cluster credentials
$ gcloud container clusters get-credentials ${CLUSTER_NAME} \
--zone ${CLUSTER_ZONE} --project ${PROJECT} ; \
grep --quiet "gcloud container clusters get-credentials" /home/vagrant/.bashrc_me || \
cat << EOT >> /home/vagrant/.bashrc_me
gcloud container clusters get-credentials ${CLUSTER_NAME} --zone ${CLUSTER_ZONE} --project ${PROJECT}
EOT

# Set rbac.authorization
$ kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user=$(gcloud config get-value core/account)

Install Gloo (instead of Istio)

Knative relies on a service mesh. In order to fulfill that requirement in this cluster, we can easily and quickly install Gloo. The glooctl application has already been provisioned in the VM, so this is all that is left for us to do at this point:

$ glooctl --version && time glooctl install knative && echo ok

# Looking at the namespaces in the cluster, you can see that these
# two are now created
$ kubectl get pods --namespace gloo-system ; \
kubectl get pods --namespace knative-serving

# Wait for an IP address to be provided to the LoadBalancer service
$ kubectl get services -n gloo-system | grep -i LoadBalancer | \
grep -i pending --quiet && echo 'pending... try again' || echo 'got ip. yay!'
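
# Alternatively, keep watching the service until EXTERNAL-IP shows up,
# assuming the proxy service is named 'clusteringress-proxy' (as used below)
$ kubectl get service clusteringress-proxy -n gloo-system -w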

# Once IP is obtained, keep it handy
$ grep --quiet CLUSTERINGRESS_URL /home/vagrant/.bashrc_me || \
echo 'export CLUSTERINGRESS_URL=$(glooctl proxy url \
--name clusteringress-proxy)' >> /home/vagrant/.bashrc_me ;
[ -z "$CLUSTERINGRESS_URL" ] && source /home/vagrant/.bashrc_me ; \
echo "CLUSTERINGRESS_URL: $CLUSTERINGRESS_URL"

Deploying the application in the K8s cluster

At this point, we can have the app running in the cluster by using some pre-canned yaml files I created in the k8s folder.

Non-Knative way

Just for comparison’s sake, you can see below the old-school way of creating a deployment in Kubernetes.

$ cd /vagrant/k8s && bat foo.yaml
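
In case you are not following along in the VM, the manifest looks roughly like the sketch below; the real file is in the k8s folder, and the labels, ports, and TARGET value here are assumptions (the image name assumes PROJECT=knative-proj).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: gcr.io/knative-proj/foo:latest
          env:
            - name: TARGET
              value: fromKubernetes
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: foo-np
spec:
  type: NodePort
  selector:
    app: foo
  ports:
    - port: 8080
      targetPort: 5000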

Deploying the exact same app the Knative way

This little YAML file is equivalent to the foo.yaml shown above, and does even more!

$ cd /vagrant/k8s && bat foo-knative-1.yaml
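
Again, a rough sketch of what such a Knative Service manifest looks like; the real foo-knative-1.yaml is in the k8s folder and may differ in its details (the image name and TARGET value are assumptions):

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: foo-example-knative
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/knative-proj/foo:latest
            env:
              - name: TARGET
                value: fromKnative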

Monitor default namespace

Before deploying anything, it may be useful to create a new shell inside the VM to get an idea of what is running in the K8s cluster. If you agree, try these commands:

$ vagrant ssh

$ watch kubectl get pod,service

Launch application with the old-school method

Just for the fun of it, deploy foo.yaml and interact with it using a temporary Ubuntu pod:

# From Vagrant VM
$ cd /vagrant/k8s && \
kubectl create -f foo.yaml

# Start ubuntu pod and get inside of it
$ kubectl run -i --tty --rm ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

# From inside the ubuntu pod we just started
$ apt-get update >/dev/null 2>&1 && apt install --quiet -y \
curl dnsutils >/dev/null 2>&1 && echo ok

# Grab CLUSTER-IP for the service named 'foo-np' and assign it to CLUSTER_IP
$ export PORT=${PORT:-8080} ; \
export CLUSTER_IP=$(dig foo-np.default.svc.cluster.local +short)

$ while : ; do \
curl http://${CLUSTER_IP}:${PORT}/json?delay=750 \
-H 'cache-control: no-cache' ; \
done

# Once you are ready to stop
# <control>+c and then type `exit` to terminate ubuntu pod and get back into VM shell

# If you want, you can terminate the foo application by doing
$ kubectl delete -f foo.yaml

Launch application using Knative service

# From Vagrant VM
$ cd /vagrant/k8s && \
kubectl create -f foo-knative-1.yaml

$ http ${CLUSTERINGRESS_URL}/json?delay=150 \
'Host:foo-example-knative.default.example.com' -v -s fruity
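
# If httpie is not your thing, a roughly equivalent curl call would be
$ curl -v -H 'Host: foo-example-knative.default.example.com' \
"${CLUSTERINGRESS_URL}/json?delay=150"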

# Make the app crash and cause k8s to restart it
$ http ${CLUSTERINGRESS_URL}/json?boom=kaboom \
'Host:foo-example-knative.default.example.com' -v -s fruity

# You can terminate the foo-knative application by doing
$ kubectl delete -f foo-knative-1.yaml

Blue-green demo

tbd…

Build docker image from source code

tbd…

Docker Secrets

tbd…

Work in progress

Pardon the mess… I’m still organizing the contents of this page. This is just a dirty placeholder.

This repo contains the steps and slides used to demonstrate how you can deploy a simple application on Kubernetes via Knative. It starts off showing the app running in the VM, followed by docker in the VM, and then deployed on a Kubernetes cluster in Google Cloud.

Screencast

Knative Demo

Cleanup

$ time gcloud container clusters delete $CLUSTER_NAME --zone $CLUSTER_ZONE

# If you also want to get rid of the project in google account...
$ gcloud projects delete $PROJECT