My colleagues and I recently participated in a hackathon, implementing Kubernetes on Azure with the Azure Container Service engine.
One of the goals was to deliver advanced deployment strategies using built-in Kubernetes capabilities and the new VSTS/TFS release management tools.
The result is an orchestrated, automated, repeatable, and safe release process, which I’d like to describe over the course of this post series.
The output of the series, a working VSTS/TFS pipeline, will later be released on GitHub, where I hope it will be adopted and perhaps even extended to cover other platforms or deployment methods.
We’ll implement a strategy called blue/green deployment, which is considered one of the safest deployment methods for production workloads today and is used by many organizations.
In fact, many of the known cloud-native platforms implement this pattern internally to update services.
If this is the first time you’ve heard the term, I encourage you to follow the links to read more about the process and compare it to other valid (or less so) strategies.
We won’t cover the basics of Kubernetes, the industry-standard, open-source container orchestration platform; you should already be familiar with its fundamental concepts.
You can deploy Kubernetes to Azure using any of the following:
- Azure Container Services for Kubernetes (managed Kubernetes).
- Azure Container Service (which I used when working on this tutorial). It allows using any of the three well-known orchestrators (Kubernetes, DC/OS, and Swarm).
- Azure Container Service Engine (acs-engine), which allows for advanced scenarios such as deploying the cluster into a custom virtual network, using Windows containers, GPUs, and so on. More about acs-engine in a deep-dive series.
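For example, the second option can be provisioned from the Azure CLI in a few commands. A sketch, assuming the Azure CLI 2.0 is installed and logged in (the resource group and cluster names are placeholders):

```shell
# Create a resource group and a Kubernetes cluster with Azure Container Service
az group create --name k8s-rg --location westeurope
az acs create --resource-group k8s-rg --name k8s-cluster \
  --orchestrator-type Kubernetes --generate-ssh-keys

# Merge the cluster's credentials into the local kubectl configuration
az acs kubernetes get-credentials --resource-group k8s-rg --name k8s-cluster
```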
The Continuous Deployment Pipeline
Taking a high-level look at the process, we will:
- Run a “current” release of an application (nginx 1.10).
- Deploy the next release of the application (“v-next”, nginx 1.13) to the cluster.
- Expose the “v-next” release for automated and manual testing.
- Switch all production traffic to the “v-next” release.
- Delete the old “current” release.
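In Kubernetes terms, each release is a separate Deployment, and a single production Service selects one of them by label. A minimal sketch of the “current” release (the names, labels, and replica count here are my assumptions, not the final pipeline’s manifests):

```yaml
apiVersion: apps/v1beta1       # use the Deployment API version your cluster supports
kind: Deployment
metadata:
  name: webapp-blue            # the "current" release
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp
        version: "1.10"
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp                 # the production endpoint
spec:
  type: LoadBalancer
  selector:
    app: webapp
    version: "1.10"            # switching this selector flips production traffic
  ports:
  - port: 80
```

A second Service with a `version: "1.13"` selector can expose “v-next” for testing while production traffic stays on the current release.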
Those translate to the following environment snapshots:
- Blue – A single, current release. All production traffic routes to the current version.
- Blue/Green – “v-next” release is deployed side by side with the “current”. All production traffic routes to the current release. Test traffic routes to v-next release.
- Green – “v-next” release is deployed side by side with the current. All production traffic routes to the “v-next” release.
- Back to blue – Delete the previous “current” release, “v-next” becomes the new “current”.
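The Blue-Green to Green transition is essentially a change of the production Service’s label selector: Kubernetes reroutes traffic without restarting any pods. A sketch with kubectl, assuming the hypothetical `webapp`/`webapp-blue` names from above:

```shell
# Point the production Service at the v-next pods (nginx 1.13)
kubectl patch service webapp \
  -p '{"spec":{"selector":{"app":"webapp","version":"1.13"}}}'

# Once Green is verified, delete the old release ("back to blue")
kubectl delete deployment webapp-blue
```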
It is interesting to note that:
- We’re naming the whole environment state “blue” or “green”, not the actual release versions (the deployments). That is an implementation consideration and does not contradict other implementations you may encounter online. It will make more sense as we dive deeper and explore the pipelines that transition the environment between states.
- The process starts and ends in a blue state, which gives it a cyclic form. That makes the process repeatable, with a predictable result after each execution: the end result is our application deployed to the cluster exactly once.
- When a new release pipeline is triggered, the process expects to find a “blue” environment every time. Otherwise, it will fail when started.
- In that state, the infrastructure acquired by any previous releases has been released and de-provisioned, and no other processes are running in the background.
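A release could guard this precondition with a simple check at the start of the pipeline. A sketch, assuming the releases carry a shared `app=webapp` label as in the earlier examples:

```shell
# Fail fast unless exactly one deployment (the "current" release) exists,
# i.e. the cluster is in the expected Blue state.
count=$(kubectl get deployments -l app=webapp -o name | wc -l)
if [ "$count" -ne 1 ]; then
  echo "Expected a Blue state (one deployment), found $count" >&2
  exit 1
fi
```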
- While the blue state indicates that resources acquired by previous deployments have been released, it also marks the end of your ability to roll back quickly or to maintain versioned APIs.
Translating that to VSTS’s new pipeline editor results in the diagram below, where three environments are defined, each named after the state it produces.
- At the beginning of a Blue-Green environment definition run, the cluster is in a Blue state; by the end of it, the cluster is in a Blue-Green state.
- By the end of a Blue environment definition run, our environment is back to a Blue state where the application is deployed only once.
Using three environment definitions, instead of a single, longer one partitioned into agent phases, has the following benefits:
- A specific environment can be re-deployed if the process fails; with a single, longer environment definition, we would have to go through the entire delivery process again.
- When applicable, environments can execute in parallel.
- Built-in pre- and post-deployment conditions can be used to implement manual-intervention phases.
- The structure matches the conceptual definition: three environment instances (states) implemented on shared infrastructure resources, with a set of transformation declarations (tasks) between them.
- Variables can be assigned different values in different environments. That enables a certain level of abstraction when scripting tasks, with VSTS injecting the variables into the agent as environment variables at runtime. See the variable definitions and their environment-specific values:
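For example, a script task can read a pipeline variable through the environment; VSTS maps a variable name such as `version.next` to the environment variable `VERSION_NEXT` (uppercased, with dots replaced by underscores). A sketch with an assumed variable name:

```shell
# VERSION_NEXT is injected by VSTS at runtime from the "version.next"
# pipeline variable; the default here exists only so the script also
# runs outside the pipeline (assumption, not part of the real process).
VERSION_NEXT="${VERSION_NEXT:-1.13}"
echo "Deploying image nginx:${VERSION_NEXT}"
```

Because the same script reads the value from the environment, it can run unchanged in every environment definition while each one supplies its own value.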
Next, we’ll create our process input and the cluster’s initial state.