• Craig Thomas

Seven Steps to Defining the Art of the Possible in DevOps

Updated: Nov 4

We all love buzzwords, and one of the biggest over the last few years has been DevOps. What in the world does it mean? I have talked to people who think it means Agile/Scrum methodology, while others think it is just Docker containers. To some people it is just scripts to manage their network infrastructure and Linux servers, and to others it is a Continuous Integration/Continuous Deployment (CI/CD) pipeline using git repositories. Wikipedia says:

"DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality."

So which one is right? As we work internally and with clients, the definition that works best for me is a set of practices, techniques, and tools that make automation a reality. That may be Ansible, Chef, or Puppet checking and setting configuration on network infrastructure, Linux servers, and Windows servers. It also encompasses the software development process itself. At the end of the day, it is looking at what is possible and putting it into action using the appropriate tools.


So, now we have the age-old "tools discussion." It is a holy war. But I would say don't start there. Instead, do this:

  1. Whiteboard out exactly what you want to do.

  2. Ask why. A LOT. Use the Five Whys method to get to the root cause of existing problems with your business processes.

  3. Take an inventory of your current tools, especially ones that already have agents installed or the proper permissions.

  4. Get and use a source code repository.

  5. Start simple and modular, allowing for code/technique reuse.

  6. RUTHLESSLY ELIMINATE all manual steps wherever possible.

  7. Refactor, look for efficiencies, and then rinse and repeat.
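Steps 4 and 5 above can be sketched in a few shell commands. This is a minimal illustration only; the repository layout and the disk-usage check are invented examples, not prescriptions:

```shell
#!/bin/sh
# A minimal sketch: put automation code under version control (step 4)
# and keep each task as a small, reusable script (step 5).
set -eu
mkdir -p automation-repo/scripts
cd automation-repo
git init -q
git config user.email "devops@example.com"   # placeholder identity
git config user.name "DevOps Pipeline"

# One small, modular script per task, reusable across servers.
cat > scripts/check_disk.sh <<'EOF'
#!/bin/sh
# Fail if any filesystem is above the given usage threshold (default 90%).
THRESHOLD="${1:-90}"
df -P | awk -v t="$THRESHOLD" 'NR > 1 && $5 + 0 > t {print $6; bad = 1} END {exit bad}'
EOF
chmod +x scripts/check_disk.sh

git add .
git commit -qm "Add reusable disk-usage check"
```

From here, every change to the automation goes through the repository, which is what makes the later refactoring and reuse steps possible.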

So, what are some examples? Reach out to us; we are happy to help you put together some ideas and share best practices. Do not limit yourself. Treat this as an opportunity to show the Art of the Possible. To get you thinking, below are a couple of DevOps projects that we have successfully completed:


EXAMPLE 1 - CI/CD Pipeline for Software Deployment

This one is pretty "standard," but it saves a ton of time and leverages several stages and additional pipelines throughout the process. Reach out and we can go into more detail, but here are the high-level pieces:

  1. Developer submits a PR (GitHub) or Merge Request (GitLab) to the "dev" branch of an Angular/.NET web application.

  2. Run .NET unit tests and report the results back to the GitHub PR.

  3. Run Angular unit tests and report the results back to the GitHub PR.

  4. Build a Docker container.

  5. Push it to Docker Hub or another container registry, tagged with the commit hash.

  6. Run an npm audit against the installed npm packages and report the results back to the GitHub PR.

  7. Run container vulnerability scanning against the built container and report the results back to the GitHub PR.

  8. Analyze the static code and publish the results to the SonarQube tool (e.g., code quality or Section 508 accessibility issues).
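Steps 4 and 5 of this pipeline can be sketched as a short shell stage. The image name "myorg/webapp" is an assumption, and DRY_RUN=1 (the default here) prints each command instead of executing it, so the sketch can be read without a Docker daemon:

```shell
#!/bin/sh
# Sketch of pipeline steps 4-5: build the container, then push it
# tagged with the commit hash so every image maps back to a commit.
set -eu
DRY_RUN="${DRY_RUN:-1}"                  # 1 = print commands, 0 = execute
COMMIT_SHA="${COMMIT_SHA:-abc1234}"      # supplied by the CI system in practice
IMAGE="myorg/webapp:${COMMIT_SHA}"       # image name is an assumption

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run docker build -t "$IMAGE" .           # step 4: build the image
run docker push "$IMAGE"                 # step 5: push, tagged by commit hash
```

Tagging with the commit hash (rather than "latest") is what lets the later deploy and scanning stages pin themselves to exactly the code that was reviewed.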

The person approving the PR then has relevant data and results to review in addition to the code itself. If they approve the PR, the following happens:

  1. Download the latest Secrets and ConfigMap (environment variables) and deploy them to Kubernetes.

  2. Update the image of the running pod in the DEV namespace of Kubernetes with the newly built image/commit hash.

  3. Run Cucumber tests against DEV for basic smoke tests and other test cases.

  4. Publish the Cucumber report to the pipeline.
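The deploy stage can be sketched the same way. The namespace, deployment name, manifest paths, and Cucumber runner below are all assumptions; each command is prefixed with echo so the sketch stays inert without a cluster (drop the echo to run it for real):

```shell
#!/bin/sh
# Hedged sketch of the post-approval deploy stage described above.
set -eu
NS="dev"
IMAGE="myorg/webapp:${COMMIT_SHA:-abc1234}"

# Steps 1-2: refresh environment configuration, then roll the deployment
# to the newly built image for this commit.
echo kubectl apply -n "$NS" -f k8s/secrets.yaml -f k8s/configmap.yaml
echo kubectl set image -n "$NS" deployment/webapp webapp="$IMAGE"
echo kubectl rollout status -n "$NS" deployment/webapp --timeout=120s

# Steps 3-4: smoke-test DEV with Cucumber and publish the report
# (runner command is illustrative).
echo npx cucumber-js --format html:cucumber-report.html
```

Waiting on `kubectl rollout status` before running the smoke tests ensures Cucumber hits the new pods, not the ones being replaced.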

Now the app is up and running in DEV with nothing being done manually outside of the normal PR approval process. Developers and decision makers see more data and make more informed decisions. This approach lowers costs by eliminating manual labor, improves software quality, and ensures security vulnerabilities do not escape to production. This pipeline then continues all the way through to Production and releases for customers.


EXAMPLE 2 - Extending This Pipeline

So, how can we take this even further? Our software can run in a Docker container, but it also can be deployed using a standalone virtual appliance. We leverage the above pipelines to assist with this as well:

  1. A release tag is created in GitHub.

  2. The release pushes the production container to Docker Hub for customers to deploy/update.

  3. This process also creates a release in our Appliance pipeline.

  4. This pipeline gets the release version as an input variable.

  5. It updates the necessary files in its git repository.

  6. It spins up a custom Linux build box running in Azure/AWS/wherever.

  7. It builds the appliance, creating an ISO.

  8. It automatically uploads the ISO to Azure Blob Storage, where it can be referenced from a URL or website.

  9. It shuts down the Linux box to save compute costs within Azure/AWS.
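Steps 6 through 9 can be sketched with the Azure CLI. Every resource name and file name below is invented for illustration, and each command is prefixed with echo to keep the sketch inert; remove the echo to run it against a real subscription:

```shell
#!/bin/sh
# Hedged sketch of the appliance build stages: start a build VM only
# for the duration of the build, upload the ISO, then deallocate the
# VM so it stops accruing compute charges.
set -eu
VERSION="${RELEASE_TAG:-1.0.0}"          # passed in from the GitHub release
ISO="appliance-${VERSION}.iso"

echo az vm start --resource-group appliance-rg --name appliance-builder
# ... the remote build runs here and produces "$ISO" ...
echo az storage blob upload --account-name applianceacct \
  --container-name releases --name "$ISO" --file "$ISO"
echo az vm deallocate --resource-group appliance-rg --name appliance-builder
```

Deallocating (rather than merely stopping) the VM is the detail that saves money: a deallocated Azure VM releases its compute allocation and is no longer billed for it.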

All of this occurs, once again, from a single action by an authorized individual: creating a release in GitHub. Everything is 100% automated; the only thing required is a simple governance process to approve the release.


I hope this gave you a couple of ideas about how DevOps can benefit you. In a future post, I will dive into another example with more of an infrastructure focus. The purpose of DevOps is putting automation into action: ruthlessly eliminate every manual step possible. Reach out to us, or better yet, schedule a free initial consultation with me (Craig Thomas) here. We would love to partner with you as you put these and other techniques into practice to eliminate manual steps and focus on the more important areas of your work.

Copyright © 2020 by C2 Labs, Inc.

All Rights Reserved