The second seminar in our Scalyr Insights Series, “Kubernetes: Impact on DevOps”, was presented by our CTO and co-founder, Steven Czerwinski. Our resident Community Guy Dave McAllister was working behind the scenes (say “Hello!” to him when you can). Before co-founding Scalyr, Steven worked extensively with the forerunners of Kubernetes at Google: Global Work Queue and Borg.
A big “Thank you” to everyone who signed up for the seminar (all 600+ of you to be precise). We’re humbled by the overwhelming interest and will continue producing more of these resources.
Here are the three takeaways from yesterday’s seminar:
Takeaway #1: Kubernetes handles multiple deployment strategies, including blue/green and canary
The answer to almost all of life’s questions is “It depends” (and “42”), and it certainly applies to choosing the best deployment strategy for your Kubernetes workloads. Fortunately, Kubernetes comes equipped to handle different deployment strategies with little to no additional tweaking. It’s flexible enough to let you start small (canary) or go full-bore (blue/green, shadow) when you need to.
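As a concrete sketch of one of these strategies, here is roughly what a bare-bones canary could look like using nothing but core Kubernetes objects. Everything below (the `checkout` name, the `track` label, the image URLs) is hypothetical, and real setups often layer a service mesh or ingress on top for finer-grained traffic splits:

```yaml
# Hypothetical canary: both Deployments carry the label app: checkout, so the
# Service at the bottom spreads traffic roughly by replica count (9:1 here).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: checkout
      track: stable
  template:
    metadata:
      labels:
        app: checkout
        track: stable
    spec:
      containers:
        - name: checkout
          image: example.com/checkout:v1   # current release
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: checkout
      track: canary
  template:
    metadata:
      labels:
        app: checkout
        track: canary
    spec:
      containers:
        - name: checkout
          image: example.com/checkout:v2   # candidate release
---
apiVersion: v1
kind: Service
metadata:
  name: checkout
spec:
  selector:
    app: checkout   # deliberately omits "track", so it matches both Deployments
  ports:
    - port: 80
      targetPort: 8080
```

A blue/green variant of the same sketch would instead include the `track` label in the Service selector and flip it from one value to the other in a single edit, cutting all traffic over at once.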
Takeaway #2: Kubernetes allows you to treat your production system as Infrastructure as Code (IaC)
Kubernetes codifies a lot of strong DevOps practices and lets the operational side of DevOps act as facilitators instead of gatekeepers (who has time to keep tabs on tool use nowadays?). In Kubernetes, your production system is captured in a set of config files, or manifest files. These describe how your system runs in production: you can add comments to the manifest files so others get a heads-up, check the files into Git, and track changes to your production environment through version history. The beauty of all this is that you don’t waste time piecing together tribal knowledge; anyone can jump in and quickly understand how the production system fits together and works.
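To make that concrete, here is a hypothetical manifest of the kind you might check into Git next to your application code. The service name, annotations, and image tag are all invented for illustration:

```yaml
# api-server.yaml -- lives in Git alongside the application code.
# Hypothetical service; the image tag below is what runs in production.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  annotations:
    # Annotations travel with the object itself, so these notes survive
    # round-trips through `kubectl get -o yaml`.
    team: platform
    change-ticket: "OPS-1234"   # example reference, not a real ticket
spec:
  replicas: 3                   # bumped from 2 after a load test
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      containers:
        - name: api-server
          image: example.com/api-server:2.4.1
```

With the file under version control, `git log` on it doubles as a production change history, and `kubectl diff -f api-server.yaml` previews exactly what an apply would change before you run it.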
Takeaway #3: With Kubernetes coolness comes increased complexity, which means you’ll still need the ability to both monitor and dive into logs for root cause analysis
One of the best things about Kubernetes is that it’s highly scalable: you can scale up to hundreds of pods in your production environment without worrying about performance. But that scale is also one of its challenges, because it demands effective monitoring and logging. The two are complementary: monitoring surfaces symptoms across your cluster, while logs captured at every layer of your architecture and application let you troubleshoot, analyze, and diagnose issues before they escalate.
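For the logging half of that equation, one common pattern is a node-level collector: a DaemonSet runs one agent pod per node and tails every container’s stdout/stderr from the host’s log directory (the Scalyr Agent’s Kubernetes install follows a similar DaemonSet pattern). The agent image below is a hypothetical stand-in:

```yaml
# Hypothetical node-level log collector: the DaemonSet controller schedules
# exactly one agent pod on every node in the cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: agent
          image: example.com/log-agent:1.0   # stand-in for your log shipper
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log   # the container runtime writes pod logs here
```

Because the agent rides along on every node, newly scheduled pods start shipping logs automatically, with no per-application configuration.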